5 use cases for machine learning in the insurance industry – Digital Insurance

In 2020, the U.S. insurance industry was worth a whopping $1.28 trillion, making it one of the largest markets in the world. That volume of premiums means an astronomical amount of data is involved. Without artificial intelligence technology like machine learning, insurance companies would have a near-impossible time processing all that data, creating greater opportunities for insurance fraud.

Insurance data is vast and complex: many policyholders, each with many records, and many factors used to determine claims. Moreover, each type of insurance adds complexity to data ingestion and processing. Life insurance is different from automobile insurance, health insurance is different from property insurance, and so forth. While some of the processes are similar, the data can vary greatly.

As a result, insurance enterprises must prioritize digital initiatives to handle huge volumes of data and support vital business objectives. In the insurance industry, advanced technologies are critical for improving operational efficiency, providing excellent customer service, and, ultimately, increasing the bottom line.

ML can handle the size and complexity of insurance data. It can be implemented in multiple aspects of the insurance practice, and facilitates improvements in customer experiences, claims processing, risk management, and other general operational efficiencies. Most importantly, ML can mitigate the risk of insurance fraud, which plagues the entire industry. It is a big development in fraud detection and insurance organizations must add it to their fraud prevention toolkit.

In this post, we lay out how insurance companies are using ML to improve their insurance processes and flag insurance fraud before it affects their bottom lines. Read on to see how ML can fit within your insurance organization.

ML is a technology under the AI umbrella. It is designed to analyze data so computers can make predictions and decisions based on patterns identified in historical data, all without being explicitly programmed and with minimal human intervention. As more data is produced, ML solutions grow smarter, adapting autonomously and learning continuously. Ultimately, AI/ML will handle menial tasks and free human agents to perform more complex requests and analyses.

There are several use cases for ML within an insurance organization regardless of insurance type. Below are some top areas for ML application in the insurance industry:

For insurers and salespeople, ML can identify leads using valuable insights from data. ML can even personalize recommendations according to the buyer's previous actions and history, which enables salespeople to have more effective conversations with buyers.

For a majority of customers, insurance can seem daunting, complex, and unclear. It's important for insurance companies to assist their customers at every stage of the process in order to increase customer acquisition and retention. ML-powered chatbots on messaging apps can be very helpful in guiding users through claims processing and answering basic frequently asked questions. These chatbots use neural networks, which can be developed to comprehend and answer most customer inquiries via chat, email, or even phone calls. Additionally, ML can use data to assess a customer's risk, and this information can be used to recommend the offer with the highest likelihood of retaining that customer.

ML utilizes data and algorithms to instantly detect potentially abnormal or unexpected activity, making ML a crucial tool in loss prediction and risk management. This is vital for usage-based insurance devices, which determine auto insurance rates based on specific driving behaviors and patterns.

Unfortunately, fraud is rampant in the insurance industry. Property and casualty insurance alone loses about $30 billion to fraud every year, and fraud occurs in nearly 10% of all P&C losses. ML can mitigate this issue by flagging potentially fraudulent claims early in the process, giving insurers time to investigate and correctly identify them.
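
To make the idea concrete, here is a minimal sketch of early claim flagging using an off-the-shelf anomaly detector; the features, numbers, and contamination rate are invented for illustration and do not describe any particular insurer's method.

```python
# Hypothetical sketch: flag unusual claims for human review with an
# unsupervised anomaly detector. Feature choices are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Toy claim features: [claim_amount, days_since_policy_start, prior_claims]
claims = rng.normal(loc=[5_000, 400, 1], scale=[2_000, 200, 1], size=(1_000, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(claims)
flags = detector.predict(claims)        # -1 = anomalous, 1 = looks normal
suspicious = np.where(flags == -1)[0]   # route these claims to investigators
print(f"{len(suspicious)} of {len(claims)} claims flagged for review")
```

In practice such a score would be one input among many; flagged claims still go to a human investigator, as the article notes.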

Claims processing is notoriously arduous and time-consuming. ML technology is a tool to reduce processing costs and time, from the initial claim submission to reviewing coverages. Moreover, ML supports a great customer experience because it allows the insured to check the status of their claim without having to reach out to their broker/adjuster.

Fraud is one of the biggest problems for the insurance industry, so let's return to the fraud detection stage in the insurance lifecycle and detail the benefits of ML for this common issue. Considering the insurance industry consists of more than 7,000 companies that collect more than $1 trillion in premiums each year, there are huge opportunities and incentives for insurance fraud to occur.

Insurance fraud is an issue that has worsened since the COVID-19 pandemic began. Some industry professionals believe that the number of claims with some element of fraud has almost doubled since the pandemic.

Insurance fraud can occur at various stages of the insurance lifecycle.

Given the amount and variety of fraud, insurance companies should consider adding ML to their fraud detection toolkits. Without ML, insurance agents can be overwhelmed by the time-consuming process of investigating each case; ML approaches and algorithms can automate much of that triage.

ML is instrumental in fraud prevention and detection. It allows companies to identify claims suspected of fraud quickly and accurately, process data efficiently, and avoid wasting valuable human resources.

Implementing digital technologies, like ML, is vital for insurance businesses to handle their data and analytics. It allows insurance companies to increase operational efficiency and mitigate the top-of-mind risk of insurance fraud.

Read the original:
5 use cases for machine learning in the insurance industry - Digital Insurance

How avatars and machine learning are helping this company to fast track digital transformation – ZDNet

Image: LNER

Digital transformation is all about delivering change, so how do you do that in an industry that's traditionally associated with large-scale infrastructure and embedded operational processes?

Danny Gonzalez, chief digital and innovation officer (CDIO) at London North Eastern Railway (LNER), says the answer is to place technology at the heart of everything your business does.

"We firmly believe that digital is absolutely crucial," he says. "We must deliver the experiences that meet or exceed customers' expectations."

Delivering to that agenda is no easy task. Gonzalez says the rail journey is "absolutely full" of elements that can go wrong for a passenger, from buying a ticket, to getting to the train station, to experiencing delays on board, to struggling to get away from the station at the destination.

LNER aims to fix pain points across customer journeys, but it must make those changes in a sector where legacy systems and processes still proliferate. Gonzalez says some of the technology in use is often more than 30 years old.

"There's still an incredible amount of paper and spreadsheets being used across vast parts of the rail industry," he says.

"Our work is about looking at how things like machine learning, automation and integrated systems can really transform what we do and what customers receive."

Gonzalez says that work involves a focus on the ways technology can be used to improve how the business operates and delivers services to its customers.

This manifests as an in-depth blueprint for digital transformation, which Gonzalez refers to as LNER's North Star: "That gives everyone a focus on the important things to do."

As CDIO, he's created a 38-strong digital directorate of skilled specialists who step out of traditional railway processes and governance and into innovation and the generation of creative solutions to intractable challenges.

"It's quite unusual for a railway company to give more permission for people to try things and fail," he says.

Since 2020, the digital directorate in combination with its ecosystem of enterprise and startup partners has launched more than 60 tools and trialled 15 proof-of-concepts.

One of these concepts is an in-station avatar that has been developed alongside German national railway company Deutsche Bahn AG.

LNER ran a trial in Newcastle that allowed customers to interact in free-flowing conversations with an avatar at a dedicated booth at the station. The avatar plugged into LNER's booking engine, so customers could receive up-to-date information on service availability. Following the successful trial, LNER is now looking to procure a final solution for wider rollout.

The company is also working on what Gonzalez refers to as a "door-to-door" mobility-as-a-service application, which will keep customers up to date on the travel situation and provide hooks into other providers, such as taxi firms or car- and bike-hire specialists.

"It's about making sure the whole journey is seamlessly integrated," he says. "As a customer, you feel in control and you know we're making sure that if anything is going wrong through the process that we're putting it right."

When it comes to behind-the-scenes operational activities, LNER is investing heavily in machine-learning technology. Gonzalez's team has run a couple of impactful concepts that are now moving into production.

One of these is a technology called Quantum, which processes huge amounts of historical data and helps LNER's employees to reroute train services in the event of a disruption and to minimise the impact on customers.

"Quantum uses machine learning to learn the lessons of the past. It looks at the decisions that have been made historically and the impact they have made on the train service," he says.

Gonzalez: "We firmly believe that digital is absolutely crucial."

"It computes hundreds of thousands of potential eventualities of what might happen when certain decisions are made. It's completely transforming the way that our service delivery teams manage trains when there's disruption to services."

To identify and exploit new technologies, Gonzalez's team embraces consultant McKinsey's three-horizon model, delivering transformation across three key areas in a way that allows LNER to assess potential opportunities for growth without neglecting performance in the present.

Horizon one focuses on "big, meaty products" that are essential to everyday operations, such as booking and reservations systems, while horizon two encompasses emerging opportunities that are currently being scoped out by the business.

Gonzalez says a lot of his team's activity is now focused on horizon three, which McKinsey suggests includes creative ideas for long-term profitable growth.

He says that process involves giving teams quite a lot of freedom to get on and try stuff, run proofs of concept, and actually understand where the technology works.

Crucial to this work is an accelerator called FutureLabs, where LNER works with the startup community to see if they can help push digital transformation in new and exciting directions.

"We go out with key problem statements across the business and ask the innovators to come and help us solve our challenges and that's led to some of the most impactful things that we've done as a business," says Gonzalez.

FutureLabs has already produced pioneering results. Both the Quantum machine-learning tool and the "door-to-door" mobility service have been developed alongside startup partners JNCTION and IOMOB respectively.

LNER continues to search for new inspiration and has just run the third cohort of its accelerator. Selected startups receive mentoring and funding opportunities to develop and scale up technology solutions.

Gonzalez says this targeted approach brings structure to LNER's interactions and investments in the startup community, and that structure yields a competitive advantage.

"It's not like where I've seen in other places, where innovation initiatives tend to involve 'spray and pray'," he says. "The startups we work with are clear on the problems they're trying to solve, which leads to a much greater success rate."

Gonzalez advises other professionals to be crystal clear on the problems they're trying to solve through digital transformation.

"Know what the priorities are and bring the business along with you. Its really important the business understands the opportunities digital can bring in terms of how you work as an organisation," he says.

"We're fortunate that we've got a board that understood that rail wasn't where it needed to be in terms of its digital proposition. But we've put a lot of work into creating an understanding of where issues existed and the solutions that we needed if we're going to compete in the future."

More:
How avatars and machine learning are helping this company to fast track digital transformation - ZDNet

Everything You've Ever Wanted to Know About Machine Learning – KDnuggets

Looking for a fun introduction to AI with a sense of humor? Look no further than Making Friends with Machine Learning (MFML), a lovable free YouTube course designed with everyone in mind. Yes, everyone. If you're reading this, the course is for you!

Image by Randall Munroe, xkcd.com, CC.

Short form videos: Most of the videos below are 1-5 minutes long, which means you get to upgrade your knowledge in bite-sized, well, bites. Tasty bites! Dive right in at the beginning or scroll down to find the topic you'd like to learn more about.

Long form videos: For those who prefer to learn in 1-2 hour feasts, the course is also available as 4 longer installments here.

Making Friends with Machine Learning was an internal-only Google course specially created to inspire beginners and amuse experts.* Today, it is available to everyone!

The course is designed to give you the tools you need for effective participation in machine learning for solving business problems and for being a good citizen in an increasingly AI-fueled world. MFML is perfect for all humans; it focuses on conceptual understanding (rather than the mathematical and programming details) and guides you through the ideas that form the basis of successful approaches to machine learning. It has something for everyone!

After completing this course, you will:

"I was simply blown away by the quality of her presentation. This was a 6-hour(!) tour de force; through every minute of it, Cassie was clear, funny, energetic, approachable, insightful and informative." (Hal Abelson, Professor of Computer Science at MIT)

"I cannot emphasize enough how valuable it was that this course was targeted towards a general audience." (Human resources specialist)

"Fantastic class, plus it is hilarious!" (Software engineer)

"I now feel more confident in my understanding of ML. Loved it." (Communications manager)

"More useful than any of the courses I took in university on this stuff." (Reliability engineer)

"I loved how she structured the course, knowing the content, and navigating this full-day course without getting us bored. So I learned two things in this lesson: 1) machine learning, and 2) presentation skills." (Executive)

"Great stuff: I would recommend it." (ML research scientist)

"...always interesting and keeps my attention." (Senior leader, Engineering)

"...well structured, clear, pitched at the right level for people like me and full of useful visuals and stories to help me understand and remember. I learnt a ton." (Senior leader, Sales)

Read more here:
Everything You've Ever Wanted to Know About Machine Learning - KDnuggets

Getting Value Out of An ML with Philip Howes – InfoQ.com

Roland Meertens: Welcome to the new episode of the InfoQ podcast. Today, I, Roland Meertens, am going to interview Philip Howes. In the past, he was a machine learning engineer, and currently he is chief scientist and co-founder at Baseten. He has worked with neural networks for a long time, and there is an interesting story about that at the end of the podcast.

Because of his work at Baseten, Philip and I will talk about how to go from an idea to a deployed model as fast as possible, and how to improve the model afterwards in the most efficient way. We will also discuss what the future of engineering teams looks like and what the role of the data scientist is there. Please enjoy listening to this episode.

Welcome, Philip, to the InfoQ podcast. The first topic we want to discuss is going from zero to one and minimizing time to value. What do you mean by that?

Philip Howes: I guess what I mean is, how do we make sure that machine learning projects actually leave the notebook or your development environment? So much of what I see in my work is these data science projects or machine learning projects that have these aspirations and they fall flat for all sorts of different reasons. And really, what we're trying to do is get the models into the hands of the downstream users or the stakeholders as fast as possible.

Roland Meertens: So, really trying to get your model into deployment. What kind of tools do you like to use for that? Or what kind of tools would you recommend for that?

Philip Howes: I keep saying that we're in the Wild West and I keep having to sort of temperature check: is it still the Wild West? And it turns out from this report I read last week that, yes, it is.

I think at least in enterprise, most people are doing everything sort of in-house. They're sort of building their own tools. I think this is even more the case in startup land, people hiring and building rather than using that many off-the-shelf tools.

I think that there has been this good ecosystem that's starting to form around getting to value as quickly as possible. Obviously, the company I started with my co-founders is operating in this space, but there are other great ones, even in the space of just getting out of these Jupyter notebooks. There's Voilà, and then some more commonly known things like Gradio, Streamlit and Databricks, all the way up to, I guess, the big cloud players like Amazon and others.

Roland Meertens: Do you remember the name of the report? Or can we put it in the show notes somehow?

Philip Howes: I think it's just an S&P global report on MLOps. I'll try and find a link and we can share it.

Roland Meertens: Yes, then I'll share it at the end of that podcast or on the InfoQ website. So, if we're talking about deploying things, what are good practices then around this process? Are there any engineering best practices at the moment?

Philip Howes: I mean, I think this is a really interesting area because engineering is such a well-established field. We really have, through the course of time, iterated on and developed these best practices for how to package applications, how to do separations of concerns.

And, with regards to machine learning, it's kind of like, well, the paradigm is very different. You're going from something which is very deterministic to something that's probabilistic. And you're using models in place of deterministic logic. And so, some of the patterns aren't quite the same. And the actual applications that you're building are typically quite different as well, because you're trying to make predictions around things. And so, the types of applications that make predictions are pretty fundamentally different from applications that serve some sort of very deterministic process.

I think there's certainly some similarities.

I think it's really important to involve all the stakeholders as early as possible. And this is why minimizing time to value is such an important thing to be thinking about as you're doing development in machine learning applications. Because at the end of the day, a machine learning application is just a means to an end. You're building this model because it's going to unlock some value for someone.

And usually, the stakeholder is not the machine learning engineer or the data scientist. It's somebody who's doing some operationally heavy thing, or it might be some consumer-facing toy app doing recommendations. But as long as the stakeholders aren't involved, you're really limiting your ability to close that feedback loop between, what is the value of this thing and how am I producing this thing?

And so, I think this is true in both engineering and machine learning. The best products are the ones that have very clear feedback loops between the development of the product and the actual use of the product.

And then, of course there are other things that we have to think about in the machine learning world around understanding, again, we're training these models on large amounts of data. We don't really have the capacity to look at every data point. We have to look at these things statistically. And because of that, we start to introduce bias. And where are we getting bias from? Where is data coming from? And the models that we're developing to put into these operational flows, are they reinforcing existing structural biases that are inherent in the data? What are the limitations of the models?

And so, thinking about data is also really important.

Roland Meertens: The one thing which always scares me is that, if I have a model and I update it and put it in production again, will it still work? Is everything still the same? Am I still building on the assumptions I had in the past? Do you have some guard rails there? Or are there guard rails necessary when you want to update those machine learning models all the time?

Philip Howes: Absolutely. I mean, there's, of course, best practices around just making sure things stay stable as you are updating. But coming from an engineering background, what is the equivalent of doing unit tests for machine learning models? How do we make sure that the model continues to behave in a way...

At the end of the day, you're optimizing over some metric, whether it be accuracy or something a little bit more exotic. You're optimizing over something. And so you're following that number. You're following the metric. You're not really following sort of, what does that actually mean?

And so it's always good to think about, "Okay, well, how do I think about what this model should be doing as I iterate on it?" And making sure that, "Hey, can I make sure that, if I understand biases in the data or if I understand where I need the model to perform well, and incorporating those understandings as kind of tests that I do, whether or not they're in an automated way or an ad hoc way..."

I think obviously automation is the key to doing things in these really closed tight feedback loops. But if I understand, "Hey, for this customer segment, this model should be saying this kind of thing," and I can build some statistics around making sure that the model is not moving too much, then I think that's the kind of thing that you've got to be thinking about.
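
As a sketch of the "unit tests for models" idea Howes describes, one might freeze a golden dataset and assert that per-segment metrics don't drop beyond a tolerance before promoting a new model; every name and threshold below is illustrative.

```python
# Hypothetical regression gate for model updates on a frozen golden dataset.
from sklearn.metrics import accuracy_score

TOLERANCE = 0.02  # maximum acceptable per-segment accuracy drop

def check_no_regression(old_model, new_model, X_gold, y_gold, segments):
    """segments maps a segment name to a boolean mask over the golden set."""
    for name, mask in segments.items():
        old_acc = accuracy_score(y_gold[mask], old_model.predict(X_gold[mask]))
        new_acc = accuracy_score(y_gold[mask], new_model.predict(X_gold[mask]))
        assert new_acc >= old_acc - TOLERANCE, (
            f"Segment '{name}' regressed: {old_acc:.3f} -> {new_acc:.3f}"
        )
```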

Roland Meertens: I think we've now talked a bit about going from zero, having nothing, to one, where you create some value. And you already mentioned the data a couple of times. So, how would you go about extending your data in a valuable way?

Philip Howes: I guess fundamentally we have to think about, why is data important to machine learning?

Most machine learning models are trained with some sort of supervised learning. Without a sufficient amount of data, you're not going to be able to extract enough signal for your model to perform well.

At the end of the day, that is also changing. The world around you is changing, and the way that your model needs to perform in that world has to adapt to that changing world. So, we've got to think about how to evolve.

Actually, one sort of little tangent, I was reading the Chinchilla paper recently. And what was really interesting is, data is now becoming the bottleneck in improvements to a model. So, this is one of these things that I think, for a very long time, we thought, "Hey, big neural nets. How do we make them better? We add more parameters to the model. We get better performance by creating bigger models."

And it turns out that maybe actually data is now becoming the bottleneck. This paper showed that basically, the model size... Well, I guess the loss associated with the model is linear in the inverses of both the model size and the size of the data that you use to train it. So, there is this trade off that you have to think about, at least in the forefront of machine learning, where we're starting to get this point where data becomes a bottleneck.
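
For reference, the parametric loss fitted in the Chinchilla paper (Hoffmann et al., 2022) has exactly this "inverse in both" shape; the exponents below are the paper's commonly cited fits, quoted from memory.

```latex
% N = number of parameters, D = number of training tokens.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad \alpha \approx 0.34, \quad \beta \approx 0.28
```

Because the loss decays in both N and D, growing the model without growing the data eventually stops paying off, which is the bottleneck being described.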

So, data's obviously very important.

Then the question is, "Okay, how do we get data?"

Obviously, there are open data sets and that usually gives us a great place to start. But how many domain-specific data sets are there? There's not that many. So, we have to think about, how do we actually start collecting and generating data? There are a few different ways.

I think some of the more novel ways are in synthesizing data. I think that's a whole nother topic. But I think for the majority of people, what we end up doing is, getting some unlabeled data and then figuring out, "Okay, how do we start labeling?" And there's this whole ecosystem that exists in the labeling tools and labeling machine learning models. And if we go back to our initial discussion around, "Hey, zero to one, you're trying to build this model," labeling is this process in which you start with the data, but the end product is both labeled data and also the model that is able to score well on your data set, as you are labeling.

Roland Meertens: I think often it's not only the availability of data. Data is relatively cheap to generate. But having high-quality labels for this data and selecting the correct data is, in my opinion, the bigger problem. So, how would you select your data, depending on what your use case is? Would you have some tips for this?

Philip Howes: Yes, absolutely. You're presented with a large data set. And you're trying to think, "Okay, well, what is the most efficient way for me to pull signal out of this data set in such a way that I can give my model meaningful information, so that it can learn something?"

And generally, data is somewhat cheap to find. Labels are expensive. They're expensive because it's usually very time-consuming to label data, particularly since there's this time-quality trade-off. The more time you spend on annotating your data, the higher value it's going to have. But also, because it's time, it's also cost, right? It's certainly something that you want to optimize over.

And so, there are lots of interesting ways to think about, how should I label in my data?

And so, let's just set up a flow.

I have some unlabeled data. And I have some labeling interface. We can talk about, there's a bunch of different labeling tools out there. You can build your own labeling tools. You can use enterprise labeling tools. And you're effectively trying to figure out, "Okay, well, what data should I use such that I can create some signal for my model?"

And then once I have some initial set of data, I can start training a model. And it's obviously going to have relatively low performance, but I can use that model as part of my data labeling loop. And this is where the area of active learning comes in. The question is, "Okay, so how do I select the correct data set to label?"

And so, I guess what we're really doing is querying our data set somewhat intelligently: where are the data points in this data set such that I'm going to get some useful information?

And we can do this. Let's say that we have some initial model. What we can do is start scoring the data on that model and say, "Hey, what data is this model most uncertain about?" We can start sampling from our data set in terms of uncertainty. And so, through sampling there, we're going to be able to give new labels to the next iteration of the model, such that it is now more certain around the areas of uncertainty.
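
A minimal sketch of that uncertainty-sampling step, assuming a scikit-learn-style classifier with predict_proba and a NumPy pool; all names and the batch size are illustrative.

```python
# Hypothetical active-learning query: pick the pool points the current
# model is least sure about (smallest margin between the top-2 classes).
import numpy as np

def select_most_uncertain(model, X_pool, batch_size=10):
    probs = model.predict_proba(X_pool)       # shape: (n_pool, n_classes)
    top2 = np.sort(probs, axis=1)[:, -2:]     # two largest probabilities
    margins = top2[:, 1] - top2[:, 0]         # small margin = high uncertainty
    return np.argsort(margins)[:batch_size]   # indices to send for labeling

# Loop: fit on the labeled seed set, label X_pool[select_most_uncertain(...)],
# add the new labels to the training set, retrain, and repeat.
```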

Another thing which maybe creates robustness in your model is that we have some collection of models that can do some sort of weak classification on our data. And they are going to have some amount of disagreement: one model says A, another model says B. And so, I want to form a committee of my models and say, "Hey, where is there disagreement amongst you?" And then, I can select data that way, as in the sketch below.
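
Here is a sketch of that query-by-committee idea using vote entropy as the disagreement measure; again, the names are purely illustrative.

```python
# Hypothetical query-by-committee: label where weak models disagree most.
import numpy as np

def vote_entropy(committee_preds):
    """committee_preds: (n_models, n_samples) array of predicted class labels."""
    n_models, n_samples = committee_preds.shape
    entropy = np.zeros(n_samples)
    for i in range(n_samples):
        _, counts = np.unique(committee_preds[:, i], return_counts=True)
        p = counts / n_models
        entropy[i] = -(p * np.log(p)).sum()
    return entropy  # higher entropy = more disagreement = more informative
```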

I mean, obviously there are lots of different querying strategies that we could use. We could think about maybe, how do I optimize over error reduction? Or how much it's going to impact my model?

But I guess the takeaway is that there's lots of intelligent ways for different use cases in data selection.

Roland Meertens: And you mentioned initial models. What is your opinion on those large scale, foundational models, which you see nowadays? Or using pre-trained models? So, with foundational models, I mean like GPT-3 or CLIP.

Philip Howes: I think that there's a cohort of people in the world that are going to say that, basically, it's foundational models or nothing. It's kind of like foundational models will eat machine learning, and it's just a matter of time.

Roland Meertens: It's general AI.

Philip Howes: Yes, something like that.

I mean, I think to the labeling example, it's like, "Yeah, these foundational models are incredibly good." Think of something like CLIP that is this model, which is conditioned over text and images. And let's say I have some image classification task. I can use CLIP as a way to bootstrap my labeling process. And then, as I add more and more labels, I can start thinking about, "Okay, I can not just use it to bootstrap my labeling process. I can also use it to bootstrap my model. And I can start fine tuning one of these foundational models on my specific task."
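
For instance, here is a minimal sketch of bootstrapping labels with zero-shot CLIP via the Hugging Face transformers checkpoint; the label prompts, image path, and confidence use are illustrative.

```python
# Hypothetical label bootstrapping with zero-shot CLIP.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a cat", "a photo of a dog"]  # your task's classes
image = Image.open("example.jpg")

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image     # shape: (1, n_labels)
probs = logits.softmax(dim=-1)
# High-confidence predictions can seed the labeled set before fine-tuning.
```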

And I think that there is a lot of value in these foundational models in terms of their ability to generalize and particularly generalize when you are able to do some fine tuning on them.

But I think it raises this very important question because, you mentioned GPT-3, this is a closed-source model. And so, it's kind of worrying to live in this world where a few very large companies control the keys to these foundational models. And that's why I think the open science initiatives that are happening in the machine learning world, like BigScience, matter. As of the time of recording this, I'm not sure when this comes out, but a couple of days ago, the Stable Diffusion model came out, which is super exciting. It's essentially a DALL-E-type model that generates amazing, high-quality images from text.

Certainly, the openness around foundational models is going to be pretty fundamental to making sure that machine learning is a democratized thing.

Roland Meertens: And are you at all concerned about how well models generalize, or what kind of model psychology is going on? Overall, what problems can a model solve? Or what abstractions has it learned?

Philip Howes: Yes. I mean, it's like, just going back to Stable Diffusion.

Of course, the first thing I did when I saw this model get released was pull down a version. And this is great because this is a model that is able to run on consumer hardware. And the classic thing that you do with this model is you say, astronaut riding horse. And then, of course, it produces this beautiful image of an astronaut riding a horse. And if you stop to think about it a little bit and look at the image, it's like, "Oh, it's really learnt so much. There's nothing in reality which actually looks like this, but I can ask for a photograph of an astronaut riding a horse, and it's able to produce one for me."

And it's not just the astronaut riding a horse. It understands the context around, there's space in the background. And it understands that astronauts happen to live in space. And you're like, "Oh, wow, it's really understood my prompt in a way that it's filled in all the gaps that I've left."

And then, of course, you write, "Horse riding astronaut." And you know what the response is from the model? It's an astronaut riding a horse.

And so, clearly that there is some limitation in the model because it's understood the relationship between all these things in the data distributions that it's been trained on. And it's able to fill in the gaps and extrapolate around somewhat plausible things. But when you ask it to do something that seems really implausible, it's so far out of its model of the world that it just defaults back to, "Oh, you must have meant this. You must have met the inverse because there's no such thing as a horse that rides an astronaut."
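
The canonical prompt Howes mentions can be reproduced in a few lines with the open-source diffusers library; the model ID and GPU assumption below reflect the original release and are illustrative, not part of the conversation.

```python
# Hypothetical sketch of the classic "astronaut riding a horse" prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # runs on a single consumer GPU

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut_rides_horse.png")
```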

Roland Meertens: Oh, interesting. I'm always super amazed at how, if you ask the model, for example, to draw an elephant with a snorkel, it actually understands that elephants might not breathe through their mouth. So, it draws the snorkel in a different place than you would expect. It has a really good understanding of where to put things made for humans when you put them on animals.

I'm always very amazed at how it gets more concepts than I could have programmed myself manually.

Philip Howes: I think it's amazing how well these things tend to generalize in directions that kind of make sense. And I feel as though this is where a lot of the open questions exist. It's just like, where are these boundaries around generalization?

And I don't think that the tools really exist today that really give us some systematic way of encapsulating, what is it that this model has learned? And very often, it's upon sort of the trainers of the model, the machine learning experts, to maybe know enough about the distributions of the data and about the architecture of the model to start poking it in the places where maybe these limitations might exist.

And this is where bias in machine learning is really frightening because you just really don't know. How do I understand what's being baked into this model in a way that is transparent to me as the creator of the thing?

Roland Meertens: Yes, the bias is very real. I think yesterday I tried to generate a picture of a really good wolf, like a really friendly wolf meeting the Pope. But all the images generated were of an evil-looking wolf, which I guess is the bias on the internet towards wolves. And you don't realize it until you start generating these images.

Did you see this implicit bias from the training data come through your results in ways you don't expect?

Philip Howes: And I think this is where AI bias matters, not just in the technical sense of data bias, but also in the ethical sense: we have to really start thinking about how these things get used. And obviously, the world's changing very rapidly in this regard. And people are trying to understand these things as best they can, but I think it just underscores the need to involve the stakeholders in the downstream tasks of how you're using these models.

I think data scientists and machine learning engineers, they're very good at understanding and solving technical problems. And they've basically mapped something from the real world into something which is inherently dataset-centric. And there's this translation back to the real world that I think really needs to be done in tandem with people who understand how this model is going to be used and how it's going to impact people.

Roland Meertens: Yes. We've now talked about minimizing the time to value and extending your data in a valuable way. So, who would you have in a team if you're setting this up at a company?

Philip Howes: I think this is a really big question. And I think it depends on how end-to-end you want to talk about this.

I think machine learning projects run from problem definition to problem solution. And problem definition and solution generally operate in the real world. And the job of the data scientist is usually in the data domain. So, everything gets mapped down to this domain, which is very technical and mathematical. And there are all sorts of requirements that you have on the team there in terms of data scientists. Data scientist means so many different things. It's a title that covers everything from doing ETL, to feature engineering, to training models, to deploying models, to monitoring models. It also includes things that happen orthogonally, like business analysis.

But I think on the machine learning side of things, there are a lot of engineering problems that are starting to get specialized. On the data side of things, there's understanding how to operate over large data sets, data engineering. Then you have your data scientist who is maybe doing feature engineering and model architecture design and training these things.

And then it's like, "Okay, well now you have this model. How do I actually operationalize this in a way that is now tapping into the inherent value of the thing?" And so, how do you tap into the value? You basically make it available to be used somewhere.

And so there's traditional DevOps and MLOps engineering that's required. And then, of course, at the end of the day, these things end up in products. So, there's product engineering. There's design. And then surrounding all of this is the domain in which you're operating, so there are the domain experts.

And so, there's all sorts of considerations in terms of the team. And what tends to happen more often than not is, in companies that are smaller than Google and Microsoft and Uber, a lot of people get tasked with wearing a lot of hats. And so, I think when it comes to building a team, you have to think about, how can I do more with less?

And I think it becomes the question around, what am I good at? And what are the battles that I want to pick? Do I want to be an infrastructure engineer or do I want to train models? And so, if I don't want to be an infrastructure engineer and learn Kubernetes and about scalability and reliability, all these kinds of things, what tools exist that are going to be able to support me for the size and the stage of the company that I'm in?

Particularly in smaller companies, there's a huge number of skills that are required to extract value out of a machine learning project. And this is why I love to operate in this space, because I think machine learning has got so much potential for impact in the world. And it's about finding, how do you give people superpowers and allow them to specialize in the things that create the most value where humans need to be involved and how to allow them to extract that value in the real world?

Roland Meertens: If you're just having a smaller company, how would you deal with lacking skills or responsibilities? Can this be filled with tools or education?

Philip Howes: It's a combination of tools and education. I think one of the great things about the machine learning world is it's very exciting. And exciting things tend to attract lots of interest. And with lots of interest, lots of tools proliferate. And so, I think that there's certainly no lack of tools.

I think what's clear to me is that the space is evolving so quickly, the needs are evolving so quickly, and what's possible is evolving so quickly that the tools are always in this feedback loop with research, tooling and people: what are the right tools for the right job at the right time? And I think that it hasn't settled. There's no stable place in this machine learning world. And I think that there are different tools that are really useful for different use cases. And lots of the time, there are different tools for different sizes and stages of your machine learning journey.

And there are fantastic educational resources out there, of course. I particularly like blogs, because I feel as though they're really good at distilling the core concepts, but also doing exposition and some demonstration of things. And they usually end up leading you to the right set of tools.

What becomes really hard is understanding the trade-offs and making sure that you straddle the build versus buy versus hire line effectively. And I don't think that there is a solution to this. I think it's about just staying attuned to what's happening in the world.

Roland Meertens: And if we're coming back to all the new AI technologies, do you think that there will be new disciplines showing up in the near future that extend the data scientist role into something more specialist?

Philip Howes: Yes, absolutely. I mean, I think one of the things that's happened over the last few years is that specializations are really starting to solidify around data engineering, model development, ML engineering, MLOps engineering.

But I think going back to our conversation around some of these foundational models: if you are to say that these things are really going to play a pretty central role in machine learning, then what kind of roles might end up appearing here? Because fine-tuning a foundational model is a very different kind of responsibility, maybe technically lighter but requiring more domain knowledge. And so, it's this kind of hybrid data scientist and domain expert position.

I think tooling will exist to really give people the ability to do fine-tuning on these foundational models. And so, I think maybe there is an opportunity for this model fine-tuner role.

I think going back to Stable Diffusion or DALL-E-type models: astronaut riding horse, you get an astronaut riding a horse. Horse riding astronaut, you get an astronaut riding a horse. But if you prompt the model in the right way, if you say maybe not horse riding astronaut, but rather horse sitting on back of astronaut, and maybe with additional prompting, you might actually be able to get what you need. But that really requires a deep understanding of the model and how the model is thinking about the world.

And so, I think what's really interesting is this idea that these models are pretty opaque. And so, I think you mentioned model psychology earlier. Is there an opportunity for model psychologists? Who is going to be the Sigmund Freud of machine learning and develop this theory about, how do I psychoanalyze the model and understand what it thinks about the world? What are the opinions and abstractions it has learned from the data it was built on?

Roland Meertens: And maybe even know that, if you want specific outputs, you should really go for one model rather than another. I really like your example of the horse sitting on the back of an astronaut, because I just typed it into DALL-E and even the OpenAI website can't create horses on the back of astronauts. So, listeners can send us a message if they manage to create one.

As a last thing, you mentioned that you have extensive experience in neural networks and horses. Can you explain how you started working with neural networks?

Philip Howes: This is a bit of a stretch. But when I grew up, my dad was, let's say, an avid investor at the horse track. And so, one of the things I remember as a child back in the early '90s was, we'd go to the horse track and there'd be a little rating given to each horse, some number. And I learned that N stood for neural network. So, there were these people building MLPs to basically score horses. And this was a very early exposure to neural networks.

And so, I did a little digging as a kid. And obviously, it was over my head. But as I progressed through my teenage years and into university, I was getting exposed to these things again in the context of mathematical modeling. And this is how I entered the world of machine learning: initially with the Netflix Prize and realizing that, "Hey, everyone's just doing SVD to win this million-dollar prize." I'm like, "Hey, maybe mathematics is useful outside of this world."

And yeah, I made this transition into machine learning and haven't looked back. Neural networks.

Roland Meertens: Fantastic. I really love the story. So, yeah, thank you for being on the podcast.

Philip Howes: Thanks for having me, Roland. It's a real pleasure.

Roland Meertens: Thank you very much for listening to this podcast and thank you Philip for joining the podcast.

If you look at the show notes on InfoQ.com, you will find more information about the Chinchilla paper we talked about and the S&P Global Market Intelligence report. Thanks again for listening to the InfoQ podcast.

Read the original:
Getting Value Out of An ML with Philip Howes - InfoQ.com

How technologies such as AI, ML, and deep learning are paving the way for the digitalization of the constructi – Times of India

The construction industry has long been reliant on tedious manual work, obsolete practices, legacy systems and exhausting paperwork. Moreover, the global construction industry has been relatively slow to adopt technology. However, the advent of cutting-edge technologies such as AI, ML and deep learning in the AEC (Architecture, Engineering & Construction) sector is transforming the way things are done in the industry.

AI-led Digital Transformation

By harnessing the power of AI, construction industry stakeholders can boost efficiency, cost savings, agility and profitability by automating tedious manual processes and replacing legacy systems and paperwork with digital workflows. AI-powered autonomous and semi-autonomous functionalities also help streamline and speed up construction project completion. Digitalizing construction projects with AI enables AEC stakeholders to mitigate risk and enforce safety protocols for better work operations. AI makes it easier to identify pre- and post-construction issues and helps find timely, proactive solutions to crises. It can empower real-time decisions through automatic alerts and notifications, and it helps the construction industry overcome challenges such as coordinating offsite and onsite resources, inadequate safety measures, labor shortages, cost overruns and poor schedule management.

Proactive planning and management with ML

In recent times, machine learning has been steadily gaining buzz in the AEC industry.

Machine learning is helping to improve safety and to boost productivity, quality and other vital measures. The technology helps improve construction designs and planning processes with a high level of precision in estimation. ML-led digitalization in the construction industry can enable AEC firms to bolster decision-making, make informed predictions, streamline business and workflow operations, proactively manage clients' expectations and be future-ready.

Streamlined processes with Deep Learning

Leveraging deep learning not only helps proactively streamline construction processes but also supports construction project bidding, site planning and management, resource and asset planning, risk management, cost management, communication with clients, and health and safety management.

We hope you can envision what we see: AI, ML and deep learning hold even more exciting possibilities for the AEC industry. These technologies will influence the future of construction and bring about a positive transformation.

Views expressed above are the author's own.

END OF ARTICLE

Read more:
How technologies such as AI, ML, and deep learning are paving the way for the digitalization of the constructi - Times of India

Canadian company uses machine learning to promote DEI in the hiring process – IT World Canada

Toronto-based software company Knockri has developed an AI-powered interview assessment tool to help companies reduce bias and bolster diversity, equity and inclusion (DEI) in the job hiring process.

Knockri's interview assessment tool uses Natural Language Processing (NLP) to evaluate only the transcript of an interview, ignoring non-verbal cues such as facial expressions, body language and audio tonality. In addition, race, gender, age, ethnicity, accent, appearance, and sexual preference reportedly do not impact the interviewee's score.

To achieve objective scoring, Faisal Ahmed, co-founder and chief technical officer (CTO) of Knockri, says the company takes a holistic and strategic approach to training its model, constantly trying new and different data, training, and tests that cover a wide range of representation in terms of race, ethnicity, gender, and accent, as well as job roles and choices. After training the model, the company conducts quality checks and adverse-impact analyses to examine scoring patterns and ensure quality candidates do not fall through the cracks.
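
One widely used adverse-impact check is the EEOC "four-fifths rule"; the sketch below shows the arithmetic with invented group names, and is not a description of Knockri's actual implementation.

```python
# Hypothetical four-fifths rule check: flag any group whose selection rate
# falls below 80% of the highest group's rate.
def four_fifths_check(selection_rates: dict) -> dict:
    """selection_rates maps a group name to its selection rate (0..1)."""
    best = max(selection_rates.values())
    return {group: rate / best < 0.8 for group, rate in selection_rates.items()}

# Example: 0.35 / 0.50 = 0.70 < 0.80, so group_b is flagged for review.
print(four_fifths_check({"group_a": 0.50, "group_b": 0.35}))
```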

Though Knockri works with high-volume hiring clients such as IBM, Novartis, Deloitte, and the Canadian Department of National Defence, Ahmed says their model is not able to analyze for every job in the world. "Once we have new customers, new geographies, new job roles or even new experience levels that we're working with, we will wait to get an update on that, benchmark, retrain, and then push scores. We're very transparent about this with our customers."

To ensure that the data fed into the AI is not itself biased, Ahmed adds that the company avoids using data from past hiring practices, such as resumes or successful hires from ten years ago, as those may reflect biased or discriminatory recruiting. Instead, Ahmed says, the AI model is driven by Industrial and Organizational (IO) psychology to focus purely on identifying the kinds of behaviors or work activities needed for specific jobs. "For example, if a customer service role requires empathy, the model will identify behaviors from the candidate's past experiences and words that reflect that specific trait," Ahmed says.

He recommends that customers use Knockri at the beginning of the interview process when there is a reasonably high volume of applications, and the same experience, scoring criteria, and opportunities can be deployed for all candidates.

Ahmed says their technology seeks to help businesses lay a foundation for a fair and equitable assessment of candidates, and is not meant to replace a human interviewer. Decisions made by Knockri are reviewed by a human being, and later stages of the interview process will inevitably involve human interviewers.

"We're not going to solve all your problems, but we're going to set you on the right path," concludes Ahmed.

Read more from the original source:
Canadian company uses machine learning to promote DEI in the hiring process - IT World Canada

Heard on the Street 9/12/2022 – insideBIGDATA

Welcome to insideBIGDATA's Heard on the Street round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Eliminating off-label AI. Commentary by Triveni Gandhi, Responsible AI Lead at Dataiku

The healthcare industry is well known for its off-label drug use. We see this all the time, where a drug approved for heart concerns may later be prescribed to improve mental health outcomes even though it was never formally reviewed for that purpose. Off-label use proliferates for many reasons: perhaps there are no suitable approved drugs for a condition or other approved drugs haven't worked. Surprisingly to many, this happens all the time. In AI, many practitioners have taken a similar approach, and it's a grave mistake. Off-label AI is when practitioners take a successful model for a certain situation and re-use it for others. For example, in the legal field judges have used AI-informed sentencing guidelines, which turned out to be heavily biased against people of color. However, the model used was actually taken from a different application intended to identify potential criminal re-offenders and offer support to minimize recidivism. This copy-paste approach to AI embodies the perils of off-label AI even with the best intentions and must be eliminated to build trust in the field.

How MLOps can be something of a silver bullet in the era of digital transformation and complex data problems if used strategically. Commentary by Egido Terra, Senior Data Product Manager, Talend

As data volume and complexity continue to grow, ML is gaining more importance in ensuring data health. The value of mature data management is already immeasurable. However, many professionals fail to understand the requirements of successful automation. In order to unleash the full potential of ML, MLOps must be leveraged to solve complex problems with highly specific, tailored solutions. MLOps, the discipline of deploying and monitoring machine learning models, can be something of a silver bullet in the era of digital transformation and complex data problems if used strategically. Automation is a must when it comes to properly managing data and ML; developing models won't be sufficient unless MLOps is used to quickly identify problems, optimize operations, find issues in the data, and allow smooth and successful execution of ML applications. The alternative is hard-to-manage ad-hoc deployments and longer release cycles, where time-consuming human intervention and error are all too common. The benefits of issue-specific ML applications for data health are endless. A dedicated investment in MLOps to ensure your automation priorities are well-structured will pay off in the short and long term. As a result, harmful data will be kept out of applications, and solutions will come quicker with a significant impact.

How To Level Up Data Storage In The Growing Datasphere. Commentary by Jeff Fochtman, senior vice president of marketing, Seagate Technology

The global datasphere, that is, all the data created, consumed, and stored in the world, doubles in size every 3 years. It's mind-blowing growth. How business leaders treat all this data matters. It matters because data is an immensely valuable, if overlooked, business currency. Organizations that find themselves deluged by more and more data should focus on converting this data into insights, and those insights into business value. Likely, if your organization is handling data sets that are 100TB and more, you already store some of this data in the multicloud. Unfortunately, 73% of business leaders report that they can only save and use a fraction of their data because of growing costs associated with data storage and movement. What can you do about it today? Learn from companies that win at business by taking 5 practical steps: 1) They are a lot more likely to consistently use predictive third-party software tools that help anticipate and measure the costs of cloud resources for every deployment decision. Do that, every time. 2) Make sure to evaluate deployment criteria (performance, API, etc.) prior to deploying applications. 3) Monitor those characteristics once applications are up and running. 4) Invest in tools in addition to training. 5) Automate security and protection. What can you do about it in the near future? Some storage companies offer vendor-agnostic, frictionless data services with transparent, predictable pricing and no egress or API fees. To reclaim control over your data, look for those solutions.

New bill brings US closer to sustainability goals; IoT will help us cross the finish line. Commentary by Syam Madanapalli, Director, IoT Solutions at NTT DATA Services

As the US pushes forward toward its sustainability goals with recent legislation that provides the most climate funding the country has ever seen, cleantech is at the forefront of our economy. Internet of Things (IoT) technology has the potential to play a key role in this sector through the reduction of carbon emissions and adoption of sustainable practices, and to have far-reaching positive impacts both on business operations and for our environment. IoT and digital twin technologies allow for the connection of complex ecosystems, providing real-time data from the large variety of meters, sensors, systems, devices, and more that an organization might use to measure carbon emissions, giving more insight into their carbon footprint than ever before. Once that IoT data is connected to digital twins in the cloud, advanced analytics can be used to identify and predict issues along the value chain and optimize operations. This will be an area of growth as leaders continue to look for ways to improve operations and reduce environmental impact.

Leverage existing AI/ML capabilities right now. Commentary by Don Kaye, CCO, Exasol

In today's data-driven world, there is a definitive need for organizations to use artificial intelligence (AI) and machine learning (ML) to move beyond simple reports and dashboards describing what has happened to predicting with confidence what will happen. Forward-thinking companies are embracing AI and ML in an effort to develop thorough data strategies that link to their overall business objectives. Business processes today are not instantaneous, but business leaders expect data-driven outcomes to be. This often leaves decision-makers and their teams in a pinch, especially as data consumerization continues to increase, and fast. This is where artificial intelligence and machine learning play an integral role. While these capabilities are often integrated within an organization's current technology stack, they are not being leveraged to their fullest potential. Companies must use their existing AI/ML capabilities to improve access flows to data and gain commonality across various points of view at scale, all within a fraction of the time it takes to sift through the typical data sets analysts are tasked with.
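
The gap between describing what happened and predicting what will happen can be reduced to a few lines. This sketch contrasts a dashboard-style summary with a naive trend projection over made-up order counts; a production system would use far richer models, but the shift in the question being answered is the same.

    # Sketch: descriptive (what happened) vs. predictive (what will happen).
    # The order counts are invented for illustration.
    import numpy as np

    monthly_orders = np.array([410, 432, 455, 470, 498, 510])  # last 6 months

    # Descriptive, dashboard view
    print("descriptive: last month =", monthly_orders[-1],
          "| 6-month average =", round(float(monthly_orders.mean()), 1))

    # Predictive view: fit a linear trend and project the next month
    months = np.arange(len(monthly_orders))
    slope, intercept = np.polyfit(months, monthly_orders, 1)
    forecast = slope * len(monthly_orders) + intercept
    print(f"predictive: projected next month = {forecast:.0f}")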

It's Time to Put Credit Scores in Context. Commentary by Steve Lappenbusch, Head of Privacy at People Data Labs

Last week, The Wall Street Journal reported that a coding error at consumer credit reporting agency Equifax led the credit giant to report millions of erroneous credit scores to lenders across a three-week period in April and May of this year, a major challenge for lenders and credit seekers. While a credit score can shed some essential light on the subject's credit history and past interaction with lenders and payees, enriching a record with alternative data like work history, past addresses, and social media profiles can substantially expand a lender's understanding of who the customer is and how legitimate their application may be. A history of social media profiles, email and phone contacts with a long history of use, and a valid work history all help to quickly weed out synthetic identities and other invalid applications, freeing up time to serve legitimate applicants. Credit scores aren't going anywhere. They'll remain a critical tool for lenders looking to understand ability to repay, and the only permissible tool for determining creditworthiness. However, it's easy to imagine a world in which alternative data sources can diminish the impact of inevitable errors like the one reported today. By providing a backstop of additional context and a new layer of identity on top of traditional credit bureau records, lenders no longer need to be tied to a single source of truth.
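
As a rough illustration of how such enrichment might be operationalized, the sketch below scores how established an applicant's identity looks across alternative data signals. The fields, weights, and fast-track threshold are entirely hypothetical; real synthetic-identity detection is far more sophisticated.

    # Sketch: scoring identity "depth" from alternative data signals.
    # Fields, weights, and the threshold are hypothetical illustrations.
    applicant = {
        "email_age_years": 7.5,
        "phone_age_years": 4.0,
        "employment_entries": 3,
        "social_profiles": 2,
        "address_history_entries": 2,
    }

    WEIGHTS = {
        "email_age_years": 2.0,         # long-lived email address
        "phone_age_years": 2.0,         # long-lived phone number
        "employment_entries": 5.0,      # verifiable work history
        "social_profiles": 3.0,
        "address_history_entries": 3.0,
    }

    def identity_depth(a):
        # Cap each signal at 10 so no single field dominates the score.
        return sum(min(a[k], 10) * w for k, w in WEIGHTS.items())

    score = identity_depth(applicant)
    print(f"identity depth {score:.0f}:",
          "fast-track" if score >= 40 else "route to manual review")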

The value of embedded analytics driving product-led growth. Commentary by Sumeet Arora, Chief Development Officer, ThoughtSpot

Nowadays, our whole world is made up of data, and that data presents an opportunity to create personalized, actionable insights that drive the business forward. But far too often, we see products that fail to equip users with data analysis within their natural workflow, without the need to toggle to another application. Today, in-app data exploration, or embedded analytics, is table stakes for product developers, as it has become the new frontier for creating engaging experiences that keep users coming back for more. For example, an app like Fitbit doesn't just count steps and read heart rates. It gives users an overview of health and recommends actions they should take to keep moving, get better sleep, and improve overall well-being. That's what employees and customers want to see in business applications. Insights should not be limited to business intelligence dashboards; they should be seamlessly integrated everywhere. Whether users are creating an HR app for recruiting or a supply chain app for managing suppliers, embedded analytics can provide real-time intelligence in all these applications by putting the end user in the driver's seat and giving them insights that are flexible and personalized.
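
The Fitbit pattern the author describes, pairing a metric with a recommended action inside the app itself, can be sketched in a few lines. The thresholds and messages here are invented for illustration.

    # Sketch: pairing in-app metrics with recommended actions.
    # Thresholds and messages are made up for illustration.
    def in_app_insights(avg_sleep_hours, steps_today, step_goal=8000):
        insights = []
        if avg_sleep_hours < 7:
            insights.append("Sleep is trending low; try winding down 30 minutes earlier.")
        if steps_today < step_goal:
            insights.append(f"{step_goal - steps_today} steps to go to hit today's goal.")
        return insights or ["You're on track today; keep it up."]

    for message in in_app_insights(avg_sleep_hours=6.2, steps_today=5400):
        print(message)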

What's the deal with Docker (containers), and Kleenex (tissues)? Commentary by Don Boxley, CEO and Co-Founder, DH2i

I suppose you can say that Docker is to containers what Kleenex is to tissues. However, the truth is that Docker was just the first to really bring containers into the mainstream. And while Docker did it in a big way, there are other containerization alternatives out there. And that's a good thing, because organizations are starting to adopt containers in production at breakneck speed in this era of big data and digital transformation. In doing so, organizations are enjoying major increases in portability, scalability, and speed of deployment, all checkboxes for organizations looking to embrace a cloud-based future. I am always excited to learn how it is going for customers leveraging containers in production. Many have even arrived at the point of deploying their most business-critical SQL Server workloads in containers. The sky's the limit for deployments of this sort, but only if you do it thoughtfully. Without a doubt, containerization adds another layer of complexity to the high availability (HA) equation, and you certainly can't just jump into it with container orchestration alone. What is necessary is approaching HA in a containerized SQL Server environment with a solution that enables fully automatic failover of SQL Server Availability Groups in Kubernetes, enabling true, bulletproof protection for any containerized SQL Server environment.

Why data collection must be leveraged to personalize customer experiences beyond retail. Commentary by Stanley Huang, co-founder and CTO, Moxo

Today, as more customers prefer to manage their business online, these interactions can feel impersonal. It's common for the retail industry to leverage data collection and spending algorithms to create customer profiles and predict the next best offer, as retail is a highly customer-centric business with buyers requiring on-demand service. Beyond retail, high-touch industries such as legal and financial services are beginning to utilize data collection to serve clients more effectively. By analyzing data collected from previous touchpoints, companies can create a holistic, 360-degree view of each customer and gain a better understanding of how to interact with them based on their individual preferences. Data collected from a user's past is the most relevant source for contextualizing client interactions, and it enables businesses to personalize the entire customer journey moving forward. This historical data from client interactions allows businesses to identify client pain points in the service process and make improvements in real time. In addition, automating processes can enable businesses to analyze collected data more quickly and reduce friction in the customer service process.
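
A minimal sketch of that 360-degree roll-up might look like the following, where touchpoints from separate channels are folded into a single profile with a preferred channel and recurring themes. The event shape and field names are assumptions for illustration.

    # Sketch: folding multi-channel touchpoints into one customer view.
    # The event schema and field names are invented for illustration.
    from collections import Counter

    touchpoints = [
        {"customer": "c-104", "channel": "chat", "topic": "onboarding"},
        {"customer": "c-104", "channel": "chat", "topic": "billing"},
        {"customer": "c-104", "channel": "phone", "topic": "billing"},
    ]

    def build_profile(customer_id, events):
        mine = [e for e in events if e["customer"] == customer_id]
        return {
            "customer": customer_id,
            "preferred_channel": Counter(e["channel"] for e in mine).most_common(1)[0][0],
            "recurring_topics": sorted({e["topic"] for e in mine}),
            "touchpoint_count": len(mine),
        }

    print(build_profile("c-104", touchpoints))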

It's Time To Tap Into the Growing Role of AI, Data and Machine Learning in Our Supply Chain. Commentary by Don Burke, CIO at Transflo

Machine learning, AI, contextual search, natural language processing, neural networks, and other evolving technologies allow for enhanced operational agility within the supply chain like never before. These technologies enable adaptable digital workflows that drive speed, efficiency, and cost savings. Digitizing and automating workflows allows organizations to scale, grow revenue, adapt faster, and deliver a superior customer experience. For example, transportation generates volumes of documents required to complete financial transactions among supply chain partners. The ability to apply deep learning models to classify and extract data from complex, unstructured documents (e.g., emails, PDFs, handwritten memos) not only drives efficient processing but unlocks actionable data, accelerating business processes and decision-making. This equates to real economic impact, whether through customer service excellence, speed of invoicing, or significant cost savings. Above and beyond automating routine tasks and freeing up human resources for higher-value opportunities, the data itself becomes a valuable, harvestable asset. Using these technologies to extract, process, and merge data connects the front and back office, allowing hidden analytical insights and unseen patterns to be discovered and improving organizational decision-making around customer behavior, the profitability of products and facilities, market awareness, and more. In transportation, the sheer number of documents such as BOLs, PODs, rate confirmations, and accessorials holds untapped insight that can be applied to reducing friction and complexity within the supply chain.
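
As a toy stand-in for the deep learning document classifiers described above, the sketch below routes freight documents by type using a simple bag-of-words model. The sample snippets and labels are invented, and a production system would work from scanned images with far larger training sets.

    # Sketch: classifying freight documents by type so downstream extraction
    # can route them. A toy bag-of-words stand-in for the deep models the
    # commentary describes; snippets and labels are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_text = [
        "bill of lading shipper consignee freight description pieces weight",
        "proof of delivery received in good condition consignee signature",
        "rate confirmation agreed linehaul rate fuel surcharge total charges",
        "accessorial charge detention waiting time at dock lumper fee",
    ]
    train_labels = ["BOL", "POD", "rate_confirmation", "accessorial"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(train_text, train_labels)

    print(model.predict(["driver collected signature, freight received in good condition"]))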

Bad Actors Still Want Your Data, But Are Changing Their Tactics to Get It. Commentary by Aaron Cockerill, chief strategy officer, Lookout

Bad actors are zeroing in on endpoints, apps, and data that sit outside the original corporate perimeter. In fact, there's a plethora of threat intelligence reporting on how bad actors have moved from attacking infrastructure to attacking endpoints, apps, and data outside that perimeter. For example, many companies have had to move apps and servers that were behind a firewall into the cloud (IaaS environments) and run them so they are internet-accessible. But many of these apps and servers weren't designed to be internet-accessible, and moving them outside the perimeter introduces vulnerabilities that weren't there when they were inside the corporate perimeter. Many server attacks these days leverage RDP, something that would not have been possible had the servers been behind a corporate perimeter. The same is true of endpoints, although attacks there tend to rely less on gaining access to RDP and more frequently involve phishing and social engineering to gain access and move laterally to critical infrastructure and sensitive data. So the attack surface has changed: instead of looking for vulnerabilities inside the organization's perimeter, we are now looking for vulnerabilities in servers in the cloud and on endpoints that are no longer protected by the perimeter. But what has not changed is what the bad actors are seeking, and that remains very much focused on data. We hear a lot about ransomware, but what is not yet well understood, in the broader sense, is that ransomware typically succeeds only when the bad actor has considerable leverage, and the leverage they obtain is always through the theft of data and then the threat of exposing that data, what we call double extortion.

What is Vertical Intelligence? Commentary by Daren Trousdell, Chairman and CEO of NowVertical Group

Data transformation begins from the inside out. Businesses' greatest challenge is staying hyper-competitive in an overly complicated world. Vertical Intelligence empowers enterprises to uplift existing tech stacks and staff with platform-agnostic solutions that can scale the modern enterprise. Vertical Intelligence is the key to unlocking potential from within and bringing transformation to the forefront. The idea of a purpose-built, top-to-bottom automation solution is antiquated. Yet the future is malleable: we see it as a flexible network of technologies that are platform-agnostic and prioritized for industry-specific use cases and needs. Most AI solutions currently available either require massive multi-year investment or require companies to mold their decision-making automation around a prefabricated solution that was either not built for their business or forces them to conform to specific constructs. We believe that technology should be made to serve the customer, not the other way around, and that's why we've brought together the best industry-specific technologies and thought leaders to shape the experience and prioritize the most critical use cases.

Digital acceleration is just a dream without open solution development acceleration platforms. Commentary by Petteri Vainikka, Cognite

We are in the era of the new, open platform architecture model. Businesses now stand a greater chance of truly transforming by thinking bigger and more broadly across their operations. Businesses that cling to the past and maintain fixed, impenetrable silos of data are doomed to stay in the past. By contrast, businesses that place their bets on open, truly accessible, data-product-centric digital platform architectures will be the ones experiencing the most rapid and rewarding digital acceleration. Because no single data product management platform can meet all the various needs of a data-rich, complex industrial enterprise, open, domain-specialized data platforms are rising to the occasion. Such open platforms meet operational business needs by offering specialized industrial data operations technology packaged with proven, composable reference applications to boost the ROI of data in a faster, more predictable way. With greater openness, domain specialization, and pre-built interoperability at the core, businesses can boost their data platform capabilities and simultaneously realize new data-rich solutions in less than three months. To stay in the lead in the digital transformation race, businesses must think about operationalizing and scaling hundreds of use cases rather than one-offs or single-case proofs of concept. They need open, composable platform architectures that tear down data silos while simultaneously delivering high-value business solutions with instant business impact. This will only happen with the right mix of specialized open data platform services orchestrated to work together like a symphony. Digital acceleration is just a dream without open solution development acceleration platforms.


Excerpt from:
Heard on the Street 9/12/2022 - insideBIGDATA

TEACHER VOICE: How the sad shadow of book banning shuts down conversations and lacerates librarians – The Hechinger Report

At the high school where I last worked, the librarian had what we all understood to be an ironic trinket sitting on her office shelf: an action figure of a librarian that made an amazing shushing action when you pushed a button, providing welcome levity. That's all the action figure could do; today's librarians, who must confront increasing ranks of violent protesters, could use a lot more features to fight back.

With school politics proving a strategic wedge issue for Republicans from Washington State to Virginia to Florida, more and more school boards are glomming onto the convenient optics of book banning. At least 1,586 individual books were banned from July 2021 through March of this year, PEN America reports, citing an alarming spike compared with previous years.

And yes, they are coming after librarians, too, the people who meet you in public spaces, listen to you and share inspiration in bundles that you can take out free of charge.

These underpaid civil servants are being called pedophiles and purveyors of pornography. They are receiving death threats and termination notices and facing lawsuits and criminal charges over what are perceived as obscene materials.

The tome-length stories they curate, of Tuscan gardens or fantastical undersea worlds, are being subsumed by the temporal template of outrage: headlines, tweets and three-minute local news segments.

Librarians are facing actual danger, but we all face harm if we demand that students' reading material be less interesting, challenging and complex than their real-life experiences.

As an educator, I have seen this shadow of book banning shut down conversations, foment distrust among students and parents and put well-meaning school administrators on their heels as they perform lexical jujitsu: Their task is to both sponsor courageous conversations about thought-provoking, topical material and identify books that are perceived to cause undue discomfort. If the broad aim of education is to prepare students to become citizens in a pluralistic, often contentious society, trying to maintain this difficult balance can be stultifying.

Related: OPINION: Why Florida's ban on textbooks is just another scare tactic

I saw these trends play out in real time last year in my hometown, where my daughters go to school. A teacher read a passage from Sherman Alexie's The Absolutely True Diary of a Part-Time Indian to his eighth grade language arts students, saying the full N-word, while offering no trigger warning and little contextualizing before or after.

This upset a student attending the class remotely, and after a few days of muddled conversations among parents, teachers and principals, the superintendent (who has since retired) decided it best to put the book on pause.

Related: COLUMN: A lesson in hypocrisy – what's really behind the parental rights movement

Reactions varied from enraged to eloquent, though I felt the most poignant came from the 25 or so eighth graders who formed the group Students for Free Speech, and whose ranks included the student who was initially vocal about being discomfited.

They met biweekly and co-authored a letter to their administrators: Most of us didn't know about the conditions of life on Native American reservations before conducting research . . . and reading the book. We've managed to go 13 or 14 years, nine years of in-school education, and learn absolutely nothing about this issue. And just after we started learning about it, we stopped.

Had the book not been paused on page 64, they would have discovered that the white character who uttered the racial slur (Roger) to the Native American protagonist (Junior) would have a moral education of his own.


This would slowly lead Roger toward respect for and connection with Junior, his basketball teammate. The weeks of classroom discussions that would follow this developing relationship, by turns and degrees, would also have examined Juniors own racial biases as he moved each day between the rez and his predominantly white high school.

In these discussions, students would invariably confront their own biases and learn that forgiveness, redemption and mercy are integral for any community attempting to move beyond surface judgments into something more sustainable.

But these points about the actual book were never mentioned in the public forum, leaving me to wonder who had actually read the book.

This made the next sentence of that student letter really sting: Exposing us, your students, to new ideas is an instrumental part of learning. Whether you or we agree with them or not, we need to be exposed to more perspectives.

Banning books that openly discuss racism, violence and human pain does not protect students from these realities, and only lessens their capacities to contend with them in nonfictional spaces.

Through my nearly 30 years of teaching high school English, I'm hard-pressed to think of a single worthy book that couldn't somehow be perceived as offensive to someone. So, avoiding offense is not the point.

My concern when selecting reading material is whether the story moves with good character development and a compelling plot – if it's teachable. When they find themselves vicariously at odds with the lives they read in context, students learn how to articulate their own beliefs.

To put devices away for an hour, to drill into a passage or two, to wring out their connections and suggestions, to move beyond binaries into more subtle degrees: This is the work of English class. Have faith in it.

Amid this noisy volley of book banning, we lose the value of these protracted, deliberate, reflective conversations.

Tim Donahue teaches English at the Ethical Culture Fieldston School in New York City.

This story about banning books was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.


See the original post:

TEACHER VOICE: How the sad shadow of book banning shuts down conversations and lacerates librarians - The Hechinger Report

Last Call for 9.6.22 – A prime-time read of what's going down in Florida politics – Florida Politics

Last Call – A prime-time read of what's going down in Florida politics.

First Shot

Chief Financial Officer Jimmy Patronis is out with his first TV ad of the election cycle, taking aim at Big Tech companies and the liberals in California who run them.

The hottest trade on the market today is you. Big Tech is too powerful, Patronis says in the ad. They know where you are, they know what you're reading, they know what you ate for lunch.

Patronis then introduces himself as the CFO and says he wants to stop them.

These tech liberals in California think they can cancel us on social media, they can sell our data to big corporations and get rich off our backs. We can stop them, he says, calling on viewers to tell Big Tech you're not for sale.

The CFO, a Republican, has often railed against tech companies such as Twitter for allegedly censoring or shadow banning conservative users, though he has also made overtures to lure them to the Sunshine State.

Notably, when Elon Musk announced his intention to buy Twitter, Patronis sent him a letter urging him to relocate the company to Florida. There was no formal response and Musk has since tried to back out of the deal.

Patronis has served as CFO since 2017, when he was appointed to the post by then-Gov. Rick Scott. He won election in 2018 by a comfortable 3.5% and heads into the 2022 General Election with a sizable fundraising advantage over his Democratic challenger, former state Rep. Adam Hattersley.

As of Aug. 26, the incumbent had a little over $1 million in hard money and an added $3.46 million in his political committee, Treasure Florida. Hattersley, meanwhile, has less than $5,000 banked between his two accounts despite facing no opposition in the Primary.


___

Agriculture Commissioner Nikki Fried sent a letter to the Department of State's Office of Inspector General demanding an investigation into the recent voter fraud bust.

The statewide dragnet resulted in 20 arrests of convicted felons who registered to vote and cast ballots. Though voters approved an amendment allowing felons to automatically regain voting rights after completing their sentences, those convicted of murder or sex crimes were specifically excluded.

Fried and others have argued that confusion over the amendment, its implementing bill, and the lack of any mention of the carveout on voter registration forms likely led those arrested to unknowingly violate the law. Fried also noted that the Department of State had approved the voter registrations.

In the letter, addressed to Inspector General David Ulewicz, Fried says the new Office of Election Crimes and Security was overzealous in its enforcement of state election laws and that the bust was essentially a publicity stunt at the expense of those arrested.

While under current law, these individuals were not eligible to vote, the persecution of this predominantly Black group of Floridians who broke the law without intent is not only disproportionate punishment but cruel, she wrote.

That cruelty is even more evident as it becomes clear that what should have been investigated was how and why the state provided ineligible voters with registrations, and that these traumatic arrests appear to have been made for pure publicity purposes, stoking fear and discouraging others who are eligible from exercising their right to vote in the future.

Evening Reads

Money talks: Ron DeSantis goes after small-scale voter crimes, is silent on FPL and Matrix via Mary Ellen Klas and Nicholas Nehamas of the Miami Herald

DeSantis targeted LGBTQ Floridians like no previous Governor. Now they're working to defeat him. via Zac Anderson of the USA Today Network-Florida

Val Demings doubles down on dismissing voters' concerns on inflation: What my opponent says are important via Rebecca Downs of Townhall

Budget panel to approve $175M in local projects via Gray Rohrer of Florida Politics

Rick Scott amps up feud with Mitch McConnell about GOP Senate candidates via A.G. Gancarski of Florida Politics

More than 1 in 2 Americans will have an election denier on the ballot this fall via FiveThirtyEight

Florida company pays quick cash to list your home. The catch? A 40-year contract via Rebecca Liebson of the Tampa Bay Times

Rents are starting to come down, but the trend may not hold via Adriana Morga of The Associated Press

Gas prices turn back downward through Labor Day weekend via Scott Powers of Florida Politics

Schools are back and confronting severe learning losses via Scott Calvert of The Wall Street Journal

Number of students in Broward public schools declines for third straight year via Lisa J. Huriash of the South Florida Sun-Sentinel

Quote of the Day

Powerful people rigging elections is far more dangerous than 20 people allegedly voting illegally. But power gets you privileges and exceptions that don't apply to the rest of us. Money talks. Money is power. The people who've been charged with voter fraud have no power. DeSantis is making them into props for his reelection campaign and his bid for President.

Miami Center for Racial Justice President Marvin Dunn, on the recent voter fraud arrests.


Read the original post:

Last Call for 9.6.22 A prime-time read of what's going down in Florida politics - Florida Politics

Will India and China Escape the Thucydides’ Trap? – The Diplomat


About 10 days after the U.S. House of Representatives Speaker Nancy Pelosi's visit to Taiwan, India finally broke its studied silence over both the trip and China's consequent unprecedented military exercises and live-fire air and sea drills that encircled Taiwan, heralding the onset of the Fourth Taiwan Strait Crisis. On August 12, while answering questions on these recent developments in the Taiwan Strait as part of a weekly media briefing at the Ministry of External Affairs, India's Official Spokesperson Arindam Bagchi, without naming any parties, urged exercising restraint and avoiding unilateral actions to change the status quo, so as to de-escalate tensions and maintain peace and stability in the region.

Even as no loud official proclamation was expected, the supposedly nondescript nature of India's statement was in keeping with the prevalent regional provocation-averse ethos vis-à-vis China. Even the Association of Southeast Asian Nations (ASEAN) foreign ministers' statement on the cross-strait development, while warning about the unpredictable consequences of open conflicts and miscalculations between major powers, reiterated each member's support for its respective One China policy. However, India's short, yet stern, statement was marked by its refusal to abide by Beijing's call to reiterate the One China policy, simply because India's relevant policies are well-known and consistent.

The U.S.-led battle of democracies versus autocracies in a bid to coalesce like-minded partners, through U.S. bilateral military alliances (e.g. in Northeast Asia) and minilateral security frameworks aimed at containing China, such as the Quadrilateral Security Dialogue (Quad, comprising Australia, India, Japan, and the U.S.) and the Australia-U.K.-U.S. (AUKUS) defense pact, has further precipitated the steep incline into the Thucydides' Trap. The uneven rhetoric and ambiguous policy on Taiwan are only compounding the stress. These exigent circumstances have naturally put the Indo-Pacific states on high alert, especially India due to its ongoing border dispute with China in the Himalayan region since 2020. However, the West's well-founded fear of Xi Jinping's road to rejuvenation and the ensuing power play, including the recent U.S. offensive in the Indo-Pacific, are also a boon for India's geopolitical ambitions.

Hence, India's cautiously bold stance has to be taken in concert with the region's fragile peace, which depends on not provoking China while asserting New Delhi's newly ascendant power-parity equation with Beijing, an equation strengthened by India's proactive foreign policy outlook of multi- and pointed alignment geared to strategic autonomy goals. Against such a non-linear equation, how far is the Thucydides' Trap linked to India's China calculus? Could India be pushed to a war-like precipice by its status and security dilemmas?


Deterring the New Normal in the Himalayas


China has in the past adapted its successful maneuvers in one disputed territory to another: its South China Sea salami tactics were employed in Ladakh. In the current Taiwan case, besides military tactics, China has intensified its economic, diplomatic, and disinformation maneuvers by banning trade in specific products with Taiwan, crusading for international support for its One China principle, and exaggerating the extent of People's Liberation Army (PLA) capabilities. In that vein, India would be worried about this so-called new normal in the Taiwan Strait being replicated along its borders, too, as it is consistent with China's policy of using military, political, and economic means to achieve its national interest goals. For example, China's call to compartmentalize cooperation and incompatibilities in the bilateral relationship is a tool for such an action.

The recent (post-2020) era spans multiple ongoing crises, including the COVID-19 pandemic, the Galwan dispute, the Russia-Ukraine war, and the Taiwan Strait new normal. In all of these, a common link is the centrality of China as a revolutionary revisionist power on its way to upending the global world order (by 2049, to be precise), be it via collusion with Russia and other politically weak authoritarian states like Afghanistan under the Taliban, or collision with the U.S. and its partners. This makes it imperative to explore the increasingly strategic, adversarial equation between India and China through the lens of the Thucydidean dynamic, which is primarily reserved for describing the only great power rivalry of our times (namely, the United States and China).

As a corollary to the China-U.S. hegemonic contest, however, the concept naturally extends to China (a dominant regional power with notions of global supremacy) and India (an emerging regional rival power with global ambitions), especially amid the Russia-Ukraine war, when India's global prospects are on the rise thanks to its central role as a regional security provider and as a key strategic partner to manage, if not contain, China.

Moreover, India's diplomatic courting by the West has become important enough that even China is reining in its otherwise fierce hostility (in Chinese media and official rhetoric) and advertising positive signals for reconciliation with India, even as China docks a research ship (also labelled a dual-use spy ship by Indian media) in Sri Lanka's Hambantota port despite India's concerns. Thus, the fragile and certainly temporary thaw only complicates the existing situation, but does not influence India's core view of China, especially post-Galwan, as a clear and abiding adversary.

To that effect, India has continued to heighten its strategic deterrence measures against China since 2020, and is unimpressed by China's present overtures, showing little inclination to compromise. This is evidenced by India's External Affairs Minister S. Jaishankar's continued stress on border relations casting a shadow over wider cooperation goals, and his calling the China-India relationship a one-way street.

Leveraging the Current Trigger: Avoiding or Inducing War?

Taiwan's growing importance in India's foreign policy framework, not just as an economic partner but as security leverage, is increasingly evident. For example, India's refusal to reiterate the One China policy in official rhetoric (including joint statements) since 2010 is not just a reaction to stapled visas but also leverage against China. It seeks to remind Beijing that its lack of an independent policy on India and its refusal to acknowledge the unofficial yet politically significant One India outlook (e.g., Chinese Foreign Minister Wang Yi echoing the Organization of Islamic Cooperation's stance on Kashmir's right to self-determination in March this year) will come at a cost.

So far, India's sovereignty disputes with China have constrained New Delhi's ambiguous approach to Taiwan (restricted to economic engagement), but the heightened global reaction to Pelosi's visit and China's own fear-inducing military maneuvers might propel the strategic discourse on the changing trajectory of India's One China policy, which was already underway post-Galwan.

Xi's potential reunification (and rejuvenation) plans amid attempts to change the status quo around the Taiwan Strait have received a fillip with the announcement of China's latest white paper on Taiwan, wherein Taiwan's status (a special administrative region) post-reunification would also be conditional under the One China principle: Two Systems is subordinate to and derives from One Country. Whether peaceful or forceful, China's potential occupation of Taiwan would bring a complete breakdown of the already low trust level between India and China, and could trigger a military confrontation with India, exposing both its status and security dilemmas.

In short, for India, the Fourth Taiwan Strait Crisis might become an extraneous vulnerability that could set up a catastrophic spiral toward a limited war in the Himalayas, especially because it is an intended consequence of China's military invasion tactics. Obviously, India's complicated China dilemma, which spans long-standing mutual mistrust versus the veritable necessity (and viability) of economic and regional cooperation, in concert with Asia's fragile, explosive security landscape, posits the inevitability of such an event. Moreover, India will note that Taiwan's economic cooperation with China has only made coercion more potent; thus, the balancing of economic and strategic goals will have to be sharpened.

Go here to read the rest:

Will India and China Escape the Thucydides' Trap? - The Diplomat