Daily Archives: September 15, 2022

By Reading Brainwaves, an A.I. Aims to Predict What Words People Listened to – Smithsonian Magazine

Posted: September 15, 2022 at 10:08 pm

The artificial intelligence has looked for patterns between audio recordings and the brain activity of people listening to those recordings. John M Lund Photography Inc / Getty Images

Scientists are trying to use artificial intelligence to translate brain activity into language.

An A.I. program analyzed snippets of brain activity from people who were listening to recorded speech. It tried to match these brainwaves to a long list of possible speech segments that the person may have heard, writes Science News' Jonathan Moens. The algorithm produced its prediction of the ten most likely possibilities, and over 70 percent of the time, its top-ten lists contained the correct answer.

The study, conducted by a team at Facebook's parent company, Meta, was posted in August to the preprint server arXiv and has not been peer reviewed yet.

In the past, much of the work to decode speech from brain activity has relied on invasive methods that require surgery, writes Jean-Rémi King, a Meta A.I. researcher and a neuroscientist at the École Normale Supérieure in France, in a blog post. In the new research, scientists used brain activity measured with non-invasive technology.

The findings currently have limited practical implications, per New Scientist's Matthew Sparkes. But the researchers hope to one day help people who can't communicate by talking, typing or gesturing, such as patients who have suffered severe brain injuries, King writes in the blog post. Most existing techniques to help these people communicate involve risky brain surgeries, per Science News.

In the experiment, the A.I. studied a pre-existing database of 169 people's brain activity, collected as they listened to recordings of others reading aloud. The brain waves were recorded using magnetoencephalography (MEG) or electroencephalography (EEG), which non-invasively measure the magnetic or electric component of brain signals, according to Science News.

The researchers gave the A.I. three-second segments of brain activity. Then, given a list of more than 1,000 possibilities, they asked the algorithm to pull the ten sound recordings it thought the person had most likely heard, per Science News. The A.I. wasn't very successful with the activity from EEG readings, but for the MEG data, its list contained the correct sound recording 73 percent of the time, according to Science News.
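The matching task described above can be illustrated with a minimal sketch: embed both the brain-activity segment and each candidate sound clip into a shared vector space, rank candidates by cosine similarity, and keep the top ten. This is only an illustration of top-k retrieval, not Meta's actual model; the embeddings below are random stand-ins.

```python
import numpy as np

def top_k_matches(brain_embedding, candidate_embeddings, k=10):
    """Rank candidate clips by cosine similarity to a brain-activity embedding."""
    # Normalize so that dot products equal cosine similarities
    b = brain_embedding / np.linalg.norm(brain_embedding)
    c = candidate_embeddings / np.linalg.norm(candidate_embeddings, axis=1, keepdims=True)
    scores = c @ b
    # Indices of the k highest-scoring candidates, best first
    return np.argsort(scores)[::-1][:k]

# Toy example: 1,000 candidate clips with 64-dimensional embeddings
rng = np.random.default_rng(0)
candidates = rng.normal(size=(1000, 64))
true_clip = 42
# Pretend the brain embedding is a noisy version of the true clip's embedding
brain = candidates[true_clip] + 0.1 * rng.normal(size=64)

top10 = top_k_matches(brain, candidates, k=10)
print(true_clip in top10)
```

With noise this small the true clip ranks first; the study's harder version of this problem reached 73 percent top-ten accuracy on real MEG data.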

The AI's performance was "above what many people thought was possible at this stage," Giovanni Di Liberto, a computer scientist at Trinity College Dublin in Ireland who was not involved in the study, tells Science News. Of its practical use, though, he says: "What can we do with it? Nothing. Absolutely nothing."

That's because MEG machines are too costly and impractical for widespread use, he tells Science News. Plus, MEG scans might not ever be able to capture enough detail of the brain to improve upon the findings, says Thomas Knöpfel, a neuroscientist at Imperial College London in England who didn't contribute to the research, to New Scientist. "It's like trying to stream an HD movie over old-fashioned analogue telephone modems," he tells the publication.

Another drawback, experts say, is that the A.I. required a finite list of possible sound snippets to choose from, rather than coming up with the correct answer from scratch. "With language, that's not going to cut it if we want to scale it to practical use, because language is infinite," says Jonathan Brennan, a linguist at the University of Michigan who didn't contribute to the research, to Science News.

King notes to Time's Megan McCluskey that the study has only examined speech perception, not production. In order to help people, future technology would need to figure out what people are trying to communicate, which King says will be "extremely challenging." "We don't have any clue whether [decoding thought] is possible or not," he tells New Scientist.

Currently, the research, which is conducted by the Facebook Artificial Intelligence Research Lab and not directed top-down by Meta, is not designed for a commercial purpose, King tells Time.

To the critics, he says there is still value in this research. "I take this more as a proof of principle," he tells Time. "There may be pretty rich representations in these [brain] signals, more than perhaps we would have thought."


6 tactics to make artificial intelligence work on the frontlines – STAT

Posted: at 10:06 pm

Artificial intelligence is a transformative tool in the workplace, except when it isn't.

For top managers, state-of-the-art AI tools are a no-brainer: in theory, they increase revenues, decrease costs, and improve the quality of products and services. But in the wild, it's often just the opposite for frontline employees who actually need to integrate these tools into their daily work. Not only can AI tools yield few benefits, but they can also introduce additional work and decrease autonomy.

Our research on the introduction of 15 AI clinical decision support tools over the past five years at Duke Health has shown that the key to successfully integrating them is recognizing that increasing the value for frontline employees is as important as making sure the tools work in the first place. The tactics we identified are useful not only in biopharma, medicine, and health care, but across a range of other industries as well.


Here are six tactics for making artificial intelligence-based tools work on industry frontlines.

AI project leaders need to increase benefits for the frontline employees who will be the actual end users of a new tool, though this is often not the group that initially approaches them to build it.


Cardiologists in Duke's intensive care unit asked AI project team leaders to build a tool to identify heart attack patients who did not need ICU care. Cardiologists said the tool would allow frontline emergency physicians to more easily identify these patients and triage them to noncritical care, increasing the quality of care, lowering costs, and preventing unnecessary overcrowding in the ICU.

The team developed a highly accurate tool that helped ER doctors identify low-risk patients. But within weeks of launching the tool, it was scrapped. Frontline emergency physicians complained that they "didn't need a tool to tell us how to do our job." Incorporating the tool meant extra work, and they resented the outsider intrusion.

The artificial intelligence team had been so focused on the needs of the group that initially approached them, cardiologists, that they neglected those who would actually use the tool: emergency physicians.

The next time cardiologists approached the developers, the latter were savvier. This time, the cardiologists wanted an AI tool to help identify patients with low-risk pulmonary embolism (one or more blood clots in the lungs), so they could be sent home instead of hospitalized. The developers immediately reached out to emergency physicians, who would ultimately use the tool, to understand their pain points around the treatment of patients with pulmonary embolism. The developers learned that emergency physicians would use the tool only if they could be sure that patients would get the appropriate follow-up care. Cardiologists agreed to staff a special outpatient clinic for these patients.

This time, the emergency doctors accepted the tool, and it was successfully integrated into the emergency department workflow.

The key lesson here is that project leaders need to identify the frontline employees who will be the true end users of a new tool based on artificial intelligence. Otherwise, they will resist adopting it. When employees are included in the development process, they will make the tool more useful in daily work.

Successful AI project team leaders measure and reward frontline employees for accomplishing the outcomes the tool is designed to improve.

In the pulmonary embolism project described earlier, project leaders learned that emergency physicians might not use the tool because they were evaluated on how well they recognized and handled acute, common issues rather than how well they recognized and handled uncommon issues like low-risk pulmonary embolism. So the leaders worked with hospital management to change the reward system so that emergency physicians are now also evaluated based on how successfully they recognize and triage low-risk pulmonary embolism patients.

It may seem obvious that it is necessary to reward employees for accomplishing the outcomes a tool is designed to improve. But this is easier said than done, because AI project team leaders usually don't control compensation decisions for these employees. Project leaders need to gain top managers' support to help change incentives for end users.

Data used to train a tool based on artificial intelligence must be representative of the target population in which it will be used. This requires a lot of training data, and identifying and cleaning data during AI tool design requires a lot of data work. AI project team leaders need to reduce the amount of this work that falls on frontline employees.

For example, kidney specialists asked the Duke AI team for a tool to increase early detection of people at high risk of chronic kidney disease. It would help frontline primary care physicians both detect patients who needed to be referred to nephrologists, and reduce the number of low-risk patients who were needlessly referred to nephrologists.

To build the tool, developers initially wanted to engage primary care practitioners in time-consuming work to spot and resolve data discrepancies between different data sources. But because it was the nephrologists, not the primary care practitioners, who would primarily benefit from the tool, PCPs were not enthusiastic about taking on additional work to build a tool they didn't ask for. So the developers enlisted nephrologists rather than PCPs to do the work on data label generation, data curation, and data quality assurance.

Reducing data work for frontline employees makes perfect sense, so why do some AI project leaders fail to do it? Because these employees know data idiosyncrasies and the best outcome measures. The solution is to involve them, but use their labor judiciously.

Developing AI tools requires frontline employees to engage in integration work to incorporate the tool into their daily workflows. Developers can increase implementation by reducing this integration work.

Developers working on the kidney disease tool avoided requesting information they could retrieve automatically. They also made the tool easier to use by color coding high-risk patients in red, and medium-risk patients in yellow.

With integration work, AI developers often want to involve frontline employees for two reasons: because they know best how a new tool will fit into workflows and because those who are involved in development are more likely to help persuade their peers to use the tool. Instead of avoiding enlisting frontline employees altogether, developers need to assess which aspects of AI tool development will benefit most from their labor.

Most jobs include valued tasks as well as necessary scut work. One important tactic for AI developers is not infringing on the work that frontline employees value.

What emergency physicians value is diagnosing problems and efficiently triaging patients. So when Duke's artificial intelligence team began developing a tool to better detect and manage the potentially deadly bloodstream infection known as sepsis, they tried to configure it to avoid infringing on emergency physicians' valued tasks. They built it instead to help with what these doctors valued less: blood test analysis, medication administration, and physical exam assessments.

AI project team leaders often fail to protect the core work of frontline employees because intervening around these important tasks often promises to yield greater gains. Smart AI leaders have discovered, however, that employees are much more likely to use the technology that helps them with their scut work rather than one that infringes on the work they love to do.

Introducing a new AI decision support tool can threaten to curtail employee autonomy. For example, because the AI sepsis tool flagged patients at high risk of this condition, it threatened clinicians' autonomy around diagnosing patients. So the project team invited key frontline workers to choose the best ways to test the tool's effectiveness.

AI project team leaders often fail to include frontline employees in the evaluation process because they can make it harder in the short term. When frontline employees are asked to select what will be tested, they often select the most challenging options. We have found, however, that developers cannot bypass this phase, because employees will balk at using the tools if they don't have confidence in them.

Behind the bold promise of AI lies a stark reality: AI solutions often make employees lives harder. Managers need to increase value for those working on the front lines to allow AI to function in the real world.

Katherine C. Kellogg is a professor of management and innovation and head of the Work and Organization Studies department at the MIT Sloan School of Management. Mark P. Sendak is the population health and data science lead at the Duke Institute for Health Innovation. Suresh Balu is the associate dean for innovation and partnership for the Duke University School of Medicine and director of the Duke Institute for Health Innovation.


Perceptron: AI that lights up the moon, improvises grammar and teaches robots to walk like humans – TechCrunch

Posted: at 10:06 pm

Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to collect some of the most relevant recent discoveries and papers, particularly in but not limited to artificial intelligence, and explain why they matter.

Over the past few weeks, scientists developed an algorithm to uncover fascinating details about the moon's dimly lit, and in some cases pitch-black, asteroid craters. Elsewhere, MIT researchers trained an AI model on textbooks to see whether it could independently figure out the rules of a specific language. And teams at DeepMind and Microsoft investigated whether motion capture data could be used to teach robots how to perform specific tasks, like walking.

With the pending (and predictably delayed) launch of Artemis I, lunar science is again in the spotlight. Ironically, however, it is the darkest regions of the moon that are potentially the most interesting, since they may house water ice that can be used for countless purposes. It's easy to spot the darkness, but what's in there? An international team of image experts has applied ML to the problem with some success.

Though the craters lie in deepest darkness, the Lunar Reconnaissance Orbiter still captures the occasional photon from within, and the team put together years of these underexposed (but not totally black) exposures with a physics-based, deep learning-driven post-processing tool described in Geophysical Research Letters. The result is that "visible routes into the permanently shadowed regions can now be designed, greatly reducing risks to Artemis astronauts and robotic explorers," according to David Kring of the Lunar and Planetary Institute.

Let there be light! The interior of the crater is reconstructed from stray photons. Image Credits: V. T. Bickel, B. Moseley, E. Hauber, M. Shirley, J.-P. Williams and D. A. Kring

They'll have flashlights, we imagine, but it's good to have a general idea of where to go beforehand, and of course it could affect where robotic exploration or landers focus their efforts.

However useful, there's nothing mysterious about turning sparse data into an image. But in the world of linguistics, AI is making fascinating inroads into how and whether language models really know what they know. In the case of learning a language's grammar, an MIT experiment found that a model trained on multiple textbooks was able to build its own model of how a given language worked, to the point where its grammar for Polish, say, could successfully answer textbook problems about it.

"Linguists have thought that in order to really understand the rules of a human language, to empathize with what it is that makes the system tick, you have to be human. We wanted to see if we can emulate the kinds of knowledge and reasoning that humans (linguists) bring to the task," said MIT's Adam Albright in a news release. It's very early research on this front but promising in that it shows that subtle or hidden rules can be understood by AI models without explicit instruction in them.

But the experiment didn't directly address a key, open question in AI research: how to prevent language models from outputting toxic, discriminatory or misleading language. New work out of DeepMind does tackle this, taking a philosophical approach to the problem of aligning language models with human values.

Researchers at the lab posit that there's no one-size-fits-all path to better language models, because the models need to embody different traits depending on the contexts in which they're deployed. For example, a model designed to assist in scientific study would ideally only make true statements, while an agent playing the role of a moderator in a public debate would exercise values like toleration, civility and respect.

So how can these values be instilled in a language model? The DeepMind co-authors don't suggest one specific way. Instead, they imply models can cultivate more robust and respectful conversations over time via processes they call "context construction" and "elucidation." As the co-authors explain: "Even when a person is not aware of the values that govern a given conversational practice, the agent may still help the human understand these values by prefiguring them in conversation, making the course of communication deeper and more fruitful for the human speaker."

Google's LaMDA language model responding to a question. Image Credits: Google

Sussing out the most promising methods to align language models takes immense time and resources, financial and otherwise. But in domains beyond language, particularly scientific domains, that might not be the case for much longer, thanks to a $3.5 million grant from the National Science Foundation (NSF) awarded to a team of scientists from the University of Chicago, Argonne National Laboratory and MIT.

With the NSF grant, the recipients plan to build what they describe as model gardens, or repositories of AI models designed to solve problems in areas like physics, mathematics and chemistry. The repositories will link the models with data and computing resources as well as automated tests and screens to validate their accuracy, ideally making it simpler for scientific researchers to test and deploy the tools in their own studies.

"A user can come to the [model] garden and see all that information at a glance," Ben Blaiszik, a data science researcher at Globus Labs involved with the project, said in a press release. "They can cite the model, they can learn about the model, they can contact the authors, and they can invoke the model themselves in a web environment on leadership computing facilities or on their own computer."
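The core idea of coupling each model with metadata and an automated validation screen can be sketched in a few lines. This is a hypothetical illustration of a model registry, not the project's actual software; the function names and the accuracy threshold are invented for the example.

```python
# Hypothetical sketch of a "model garden": models are registered with
# metadata and must pass an automated accuracy screen before being listed.
model_garden = {}

def register_model(name, predict_fn, authors, test_inputs, expected_outputs, min_accuracy=0.9):
    """Add a model to the garden only if it passes its validation screen."""
    correct = sum(predict_fn(x) == y for x, y in zip(test_inputs, expected_outputs))
    accuracy = correct / len(test_inputs)
    if accuracy < min_accuracy:
        raise ValueError(f"{name} failed validation: accuracy {accuracy:.2f}")
    model_garden[name] = {"predict": predict_fn, "authors": authors, "accuracy": accuracy}

# Toy model: classify numbers as even (0) or odd (1)
register_model(
    "parity-demo",
    predict_fn=lambda x: x % 2,
    authors=["example@university.edu"],
    test_inputs=[1, 2, 3, 4],
    expected_outputs=[1, 0, 1, 0],
)
print(model_garden["parity-demo"]["accuracy"])
```

A researcher browsing the garden could then look up the entry, see its validated accuracy and authors, and call its stored predict function directly.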

Meanwhile, over in the robotics domain, researchers are building a platform for AI models not with software but with hardware: neuromorphic hardware, to be exact. Intel claims the latest generation of its experimental Loihi chip can enable an object recognition model to learn to identify an object it's never seen before using up to 175 times less power than if the model were running on a CPU.

A humanoid robot equipped with one of Intel's experimental neuromorphic chips. Image Credits: Intel

Neuromorphic systems attempt to mimic the biological structures in the nervous system. While traditional machine learning systems are either fast or power efficient, neuromorphic systems achieve both speed and efficiency by using nodes to process information and connections between the nodes to transfer electrical signals using analog circuitry. The systems can modulate the amount of power flowing between the nodes, allowing each node to perform processing but only when required.

Intel and others believe that neuromorphic computing has applications in logistics, for example powering a robot built to help with manufacturing processes. It's theoretical at this point (neuromorphic computing has its downsides), but perhaps one day that vision will come to pass.

Image Credits: DeepMind

Closer to reality is DeepMind's recent work in embodied intelligence, or using human and animal motions to teach robots to dribble a ball, carry boxes and even play football. Researchers at the lab devised a setup to record data from motion trackers worn by humans and animals, from which an AI system learned to infer how to complete new actions, like how to walk in a circular motion. The researchers claim that this approach translated well to real-world robots, for example allowing a four-legged robot to walk like a dog while simultaneously dribbling a ball.

Coincidentally, Microsoft earlier this summer released a library of motion capture data intended to spur research into robots that can walk like humans. Called MoCapAct, the library contains motion capture clips that, when used with other data, can be used to create agile bipedal robots, at least in simulation.

"[Creating this dataset] has taken the equivalent of 50 years over many GPU-equipped [servers], a testament to the computational hurdle MoCapAct removes for other researchers," the co-authors of the work wrote in a blog post. "We hope the community can build off of our dataset and work to do incredible research in the control of humanoid robots."

Peer review of scientific papers is invaluable human work, and it's unlikely AI will take over there, but it may help make sure that peer reviews are actually helpful. A Swiss research group has been looking at model-based evaluation of peer reviews, and their early results are mixed, in a good way. There wasn't some obvious good or bad method or trend, and publication impact rating didn't seem to predict whether a review was thorough or helpful. That's okay, though, because although the quality of reviews differs, you wouldn't want there to be a systematic lack of good review everywhere but major journals, for instance. Their work is ongoing.

Last, for anyone concerned about creativity in this domain, here's a personal project by Karen X. Cheng that shows how a bit of ingenuity and hard work can be combined with AI to produce something truly original.


The Download: The Merge arrives, and Chinas AI image censorship – MIT Technology Review

Posted: at 10:06 pm

The must-reads

I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 Social media's biggest companies appeared before the US Senate
Past and present Meta, Twitter, TikTok and YouTube employees answered questions on social media's impact on homeland security. (TechCrunch)
+ Retaining user attention is their algorithms' primary purpose. (Protocol)
+ TikTok's representative avoided committing to cutting off China's access to US data. (Bloomberg $)

2 China wants to reduce its reliance on Western tech
Investing heavily in native firms is just one part of its multi-year plan. (FT $)
+ Cybercriminals are increasingly interested in Chinese citizens' personal data. (Bloomberg $)
+ The FBI accused him of spying for China. It ruined his life. (MIT Technology Review)

3 California is suing Amazon
Accusing it of triggering price rises across the state. (WSJ $)
+ The two-year fight to stop Amazon from selling face recognition to the police. (MIT Technology Review)

4 Russia is waging a surveillance war on its own citizens
Its authorities are increasingly targeting ordinary people, not known dissidents or journalists. (Slate $)
+ Russian troops are still fleeing northern Ukraine. (The Guardian)

5 Dozens of AIs debated 100 years of climate negotiations in seconds
They're evaluating which policies are most likely to be well-received globally. (New Scientist $)
+ Patagonia's owner has given the company away to fight climate change. (The Guardian)

6 Iranian hackers hijacked their victims' printers to deliver ransom notes
The three men have been accused of targeting people in the US, UK and Iran. (Motherboard)

7 DARPA's tiny plane could spy from almost anywhere
The unmanned vehicle could also carry small bombs. (WP $)
+ The Taliban have crashed a helicopter left behind by the US military. (Motherboard)

8 Listening to stars helps astronomers to assess what's inside them
The spooky-sounding acoustic waves transmit a lot of data. (Economist $)
+ The James Webb Space Telescope has spotted newborn stars. (Space)
+ The next Space Force chief thinks the US needs a satellite constellation to combat China. (Nikkei Asia)

9 We'll never be able to flip and turn like a cat
But the best divers and gymnasts are the closest we can get. (The Atlantic $)
+ The best robotic jumpers are inspired by nature. (Quanta)

10 This robot is having a laugh
Even if it's not terribly convincing. (The Guardian)

Quote of the day

"Tesla has yet to produce anything even remotely approaching a fully self-driving car."

Briggs Matsko, a Tesla owner, explains his rationale for suing the company over the deceptive way it marketed its driver-assistance systems, according to Reuters.


Can Conversational AI Improve the Online Retail Experience? – CMSWire

Posted: at 10:06 pm

The pandemic, which largely restricted physical interaction, meant that both retailers and consumers had to learn and adapt to digital communication tools.

Advancements in the retail and ecommerce sector have helped provide consumers with more tailor-made product recommendations and sophisticated guidance to eliminate friction throughout the shopping experience.

While having limited face-to-face interaction with customers and potential buyers, retailers have looked to the advanced capabilities embedded within conversational artificial intelligence (AI).

The last few years of the pandemic, which largely restricted physical interaction, meant that both retailers and consumers had to learn and adapt to digital communication tools. Conversational AI not only assists shoppers as they browse through the website, but it puts them in direct contact with the products and services they are looking for right from the start.

Instead of having to rely on more conventional chatbots, which saw a sharp rise during the early months of the pandemic, businesses can use deep machine learning and natural language processing to minimize mundane tasks while at the same time improving the shopping experience and saving shoppers time.

Researchers in the field of conversational AI predict that by 2023, around 70% of chatbot conversations will be related to the retail sector.

As more brands look to transition online and competition in the market accelerates, the online customer experience will become a smoother, more refined process that could ultimately minimize the need for real-time human engagement.

Conversational AI has moved beyond traditional chatbots, such as those found at the bottom-right of some websites. Developments in the field of conversational AI, deep machine learning (DML) and language processing algorithms (LPA) have improved immensely within the last decade. Consumers have already become accustomed to the likes of Siri on iPhones and Amazon Alexa, which shows both the progress and the difference conversational AI has made in our everyday lives.

With a whole host of innovative opportunities, ecommerce retailers and ecommerce technology will be able to enhance and improve the relationship between brands and consumers without encountering friction throughout most of the communication process.

To better understand these opportunities and what ecommerce retailers have done to improve the online shopping experience for consumers, shoppers and potential buyers, let's take a look at some of the challenges and benefits that conversational AI can bring to the table.

Consumer trends are ever-changing, and in a dynamic landscape, this requires brands to find more digitally engaging methods that will help continuously improve the online shopping experience, highlight key offerings and remain a competitive player.

Globally, the number of digital buyers surpassed 2.14 billion at the end of 2021, which is up from the more than 1.66 billion recorded in 2016. The surge in digital shoppers alongside a growing tech-savvy population has meant that market competition has only become more challenging.

To face and overcome these challenges, online brands will need to appeal to the digital community through more personalized practices and efforts that could drive brand loyalty.

Instead of looking toward traditional solutions, which for some time included FAQ pages, chatbots, voicebots or AI assistants that were programmed using language processing methods to resolve client issues, brands can tap into the opportunities that lie within algorithmic data and information collection.

Conversational AI should be able to understand consumer questions, retrieve answers and deliver results adequately. This would mean that AI algorithms will be able to read shopper trends faster, pick up when a customer shops for specific items and help recommend shopper-specific products. Online brands and ecommerce retailers will also be able to set up shopper profiles to create measurable key data points.

With access to previous conversations and interactions, brands will be able to build a clear picture of who their shoppers are. This would include the use of specific traits such as age, gender and location, among others. Ultimately, this would mean online retailers can build a more digitally fluid online interaction.
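A shopper profile built from such traits and interactions can be sketched minimally; the profile fields, category names and recommendation mapping below are all hypothetical, invented for illustration rather than taken from any real retail system.

```python
# Hypothetical sketch of a shopper profile assembled from traits and
# past interactions, used for a simple category-based recommendation.
from dataclasses import dataclass, field

@dataclass
class ShopperProfile:
    age: int
    location: str
    viewed_categories: list = field(default_factory=list)

# Assumed mapping from a frequently viewed category to related products
RECOMMENDATIONS = {
    "running shoes": ["sports socks", "fitness tracker"],
    "coffee makers": ["coffee beans", "milk frother"],
}

def recommend(profile: ShopperProfile, k=2):
    """Recommend items tied to the shopper's most recently viewed category."""
    if not profile.viewed_categories:
        return []
    latest = profile.viewed_categories[-1]
    return RECOMMENDATIONS.get(latest, [])[:k]

shopper = ShopperProfile(age=34, location="Berlin", viewed_categories=["running shoes"])
print(recommend(shopper))
```

A production system would of course derive the mapping from behavioral data rather than hard-code it, but the shape of the data, traits plus interaction history feeding a recommendation step, is the same.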

Having more digital natives and tech-savvy consumers while trading in a highly competitive market means that the focus for online retailers is not on how they can attract shoppers but rather on how they can retain them more effectively.

To better appeal to and retain shoppers, brands will need to focus on three key components:

The understanding here is to turn interested shoppers into paying shoppers while at the same time properly imprinting brand loyalty and ensuring a convenient shopping experience without the need for physical human interaction.

Related Article: How Will Conversational AI Transform Customer Experience?

It's already possible for AI and deep machine learning to pick up on consumer trends and behavior through the type of websites they visit, social media content they like and share, online profiles they interact with and even the keywords they search for.

As our software becomes increasingly good at spotting patterns, these digital protocols will be able to give online retailers insights based on consumer behavior.

These insights will not always be completely accurate. They do, however, lend themselves to building predictive models, which could help to further advance the online retail experience.

Building predictive models can help to:

With the help of artificial intelligence, ecommerce brands can build predictive models that can closely relate to changing consumer behavior. As online users start to follow new trends based on social media platforms or other digital native communication channels, retailers can adjust their customer experience to focus on them.

While this is a constantly changing process, with predictive models that deliver accurate results time and again, retailers will be able to leverage opportunities to fill customer-related needs without falling behind on overhyped trends outside their scope of interest. This is one of the many reasons why conversational AI and real-time feedback from users are crucial to creating customer-tailored recommendations.

In a nutshell, we see how these practices can help improve cross-selling and up-selling as they analyze consumer trends in the broader digital sphere, track a customer's previous spending habits and preferences and monitor queries or issues raised with customer support.
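As a rough illustration of the cross-selling idea, a minimal co-purchase recommender can be sketched in a few lines. This is a toy example, not any retailer's actual system; the order data is invented, and real systems layer on recency, weighting and collaborative filtering:

```python
from collections import Counter
from itertools import combinations

# Toy order history: each order is the set of items one shopper bought.
orders = [
    {"running shoes", "socks"},
    {"running shoes", "socks"},
    {"running shoes", "water bottle"},
    {"yoga mat", "water bottle"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        pair_counts[(a, b)] += 1

def recommend(item: str) -> str:
    """Suggest the item most often co-purchased with `item`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return scores.most_common(1)[0][0]

print(recommend("running shoes"))  # socks
```

The same co-occurrence counts, fed by real purchase histories instead of a hard-coded list, are the simplest form of the "customers who bought X also bought Y" suggestions described above.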

Although building predictive models is harder and more complex than simply implementing conversational AI within the online shopping experience, it remains a crucial factor that can help keep brands ahead in a competitive marketplace.

While we are well aware of the technological benefits housed within conversational AI, there are numerous challenges ecommerce retailers will still need to face. Difficulties can range across platforms and retailers, as they largely depend on the level of AI software used.

Already we see a tremendous amount of backlash forming around the use of AI that looks to capture consumer information to help build more user-centric algorithms. We see this in things such as social media feeds that are constantly changing as soon as we start interacting with a specific type of profile, brand or online personality.

This resonates with the larger picture that represents difficulties for many ecommerce retailers looking to gain more online exposure and build hyper-personalized customer experiences.

Some of the limitations within conversational AI include:

Given these challenges and limitations, it becomes clear that conversational artificial intelligence still requires further improvement to become more centered on the human experience.

Many customers tend to feel disconnected from the brand or online store when interacting with chatbots or voice assistants, and greater customer dissatisfaction would lead some brands and online retailers to step in and resolve issues themselves rather than relying on artificial intelligence.

Related Article: Top Conversational AI Metrics for CX Professionals

Will artificial intelligence give ecommerce retailers tremendous benefits? Yes, but it's still not able to replace the human element that helped it develop and expand into what it is today.

There are several ways in which artificial intelligence software, deep machine learning and natural language processing have helped shape a more profound understanding of the online shopping experience. Through various capabilities and complex algorithms, these systems can build and deliver customer-focused insights that can further initiate a more personalized shopping experience.

Despite conversational AI's dominant online presence and robust benefits, brands and online retailers will need to consider its long-term potential rather than focusing on near-term results. Regardless of which side of the aisle you find yourself on, it's clear that this software has permanently revolutionized the way we work, communicate and shop online.

More:

Can Conversational AI Improve the Online Retail Experience? - CMSWire


There's no Tiananmen Square in the new Chinese image-making AI – MIT Technology Review

Posted: at 10:06 pm

When a demo of the software was released in late August, users quickly found that certain words (both explicit mentions of political leaders' names and words that are potentially controversial only in political contexts) were labeled as sensitive and blocked from generating any result. China's sophisticated system of online censorship, it seems, has extended to the latest trend in AI.

It's not rare for similar AIs to limit users from generating certain types of content. DALL-E 2 prohibits sexual content, faces of public figures, or medical treatment images. But the case of ERNIE-ViLG underlines the question of where exactly the line between moderation and political censorship lies.

The ERNIE-ViLG model is part of Wenxin, a large-scale project in natural-language processing from China's leading AI company, Baidu. It was trained on a data set of 145 million image-text pairs and contains 10 billion parameters, the values that a neural network adjusts as it learns, which the AI uses to discern the subtle differences between concepts and art styles.

That means ERNIE-ViLG has a smaller training data set than DALL-E 2 (650 million pairs) and Stable Diffusion (2.3 billion pairs) but more parameters than either one (DALL-E 2 has 3.5 billion parameters and Stable Diffusion has 890 million). Baidu released a demo version on its own platform in late August and then later on Hugging Face, the popular international AI community.

The main difference between ERNIE-ViLG and Western models is that the Baidu-developed one understands prompts written in Chinese and is less likely to make mistakes when it comes to culturally specific words.

For example, a Chinese video creator compared the results from different models for prompts that included Chinese historical figures, pop culture celebrities, and food. He found that ERNIE-ViLG produced more accurate images than DALL-E 2 or Stable Diffusion. Following its release, ERNIE-ViLG has also been embraced by those in the Japanese anime community, who found that the model can generate more satisfying anime art than other models, likely because it included more anime in its training data.

But ERNIE-ViLG will be defined, as the other models are, by what it allows. Unlike DALL-E 2 or Stable Diffusion, ERNIE-ViLG does not have a published explanation of its content moderation policy, and Baidu declined to comment for this story.

When the ERNIE-ViLG demo was first released on Hugging Face, users inputting certain words would receive the message "Sensitive words found. Please enter again," which was a surprisingly honest admission about the filtering mechanism. However, since at least September 12, the message has read "The content entered doesn't meet relevant rules. Please try again after adjusting it."

Go here to see the original:

There's no Tiananmen Square in the new Chinese image-making AI - MIT Technology Review


Artificial intelligence is playing a bigger role in cybersecurity, but the bad guys may benefit the most – CNBC

Posted: at 10:06 pm

Security officers keep watch in front of an AI (Artificial Intelligence) sign at the annual Huawei Connect event in Shanghai, China, September 18, 2019.

Aly Song | Reuters

Artificial intelligence is playing an increasingly important role in cybersecurity for both good and bad. Organizations can leverage the latest AI-based tools to better detect threats and protect their systems and data resources. But cyber criminals can also use the technology to launch more sophisticated attacks.

The rise in cyberattacks is helping to fuel growth in the market for AI-based security products. A July 2022 report by Acumen Research and Consulting says the global market was $14.9 billion in 2021 and is estimated to reach $133.8 billion by 2030.

An increasing number of attacks such as distributed denial-of-service (DDoS) and data breaches, many of them extremely costly for the impacted organizations, are generating a need for more sophisticated solutions.

Another driver of market growth was the Covid-19 pandemic and shift to remote work, according to the report. This forced many companies to put an increased focus on cybersecurity and the use of tools powered with AI to more effectively find and stop attacks.

Looking ahead, trends such as the growing adoption of the Internet of Things (IoT) and the rising number of connected devices are expected to fuel market growth, the Acumen report says. The growing use of cloud-based security services could also provide opportunities for new uses of AI for cybersecurity.

Among the types of products that use AI are antivirus/antimalware, data loss prevention, fraud detection/anti-fraud, identity and access management, intrusion detection/prevention system, and risk and compliance management.

Up to now, the use of AI for cybersecurity has been somewhat limited. "Companies thus far aren't going out and turning over their cybersecurity programs to AI," said Brian Finch, co-leader of the cybersecurity, data protection & privacy practice at law firm Pillsbury Law. "That doesn't mean AI isn't being used. We are seeing companies utilize AI but in a limited fashion," mostly within the context of products such as email filters and malware identification tools that have AI powering them in some way.

"Most interestingly, we see behavioral analysis tools increasingly using AI," Finch said. "By that I mean tools analyzing data to determine behavior of hackers to see if there is a pattern to their attacks: timing, method of attack, and how the hackers move when inside systems. Gathering such intelligence can be highly valuable to defenders."

In a recent study, research firm Gartner interviewed nearly 50 security vendors and found a few patterns for AI use among them, says research vice president Mark Driver.

"Overwhelmingly, they reported that the first goal of AI was to 'remove false positives,' insofar as one major challenge among security analysts is filtering the signal from the noise in very large data sets," Driver said. "AI can trim this down to a reasonable size, which is much more accurate. Analysts are able to work smarter and faster to resolve cyber attacks as a result."

In general, AI is used to help detect attacks more accurately and then prioritize responses based on real world risk, Driver said. And it allows automated or semi-automated responses to attacks, and finally provides more accurate modelling to predict future attacks. "All of this doesn't necessarily remove the analysts from the loop, but it does make the analysts' job more agile and more accurate when facing cyber threats," Driver said.

On the other hand, bad actors can also take advantage of AI in several ways. "For instance, AI can be used to identify patterns in computer systems that reveal weaknesses in software or security programs, thus allowing hackers to exploit those newly discovered weaknesses," Finch said.

When combined with stolen personal information or collected open source data such as social media posts, cyber criminals can use AI to create large numbers of phishing emails to spread malware or collect valuable information.

"Security experts have noted that AI-generated phishing emails actually have higher rates of being opened [for example, tricking possible victims into clicking on them and thus generating attacks] than manually crafted phishing emails," Finch said. "AI can also be used to design malware that is constantly changing, to avoid detection by automated defensive tools."

Constantly changing malware signatures can help attackers evade static defenses such as firewalls and perimeter detection systems. Similarly, AI-powered malware can sit inside a system, collecting data and observing user behavior up until it's ready to launch another phase of an attack or send out information it has collected with relatively low risk of detection. This is partly why companies are moving towards a "zero trust" model, where defenses are set up to constantly challenge and inspect network traffic and applications in order to verify that they are not harmful.

But Finch said, "Given the economics of cyberattacks (it's generally easier and cheaper to launch attacks than to build effective defenses), I'd say AI will be on balance more hurtful than helpful. Caveat that, however, with the fact that really good AI is difficult to build and requires a lot of specially trained people to make it work well. Run-of-the-mill criminals are not going to have access to the greatest AI minds in the world."

Cybersecurity programs might have access to "vast resources from Silicon Valley and the like [to] build some very good defenses against low-grade AI cyber attacks," Finch said. "When we get into AI developed by hacker nation states [such as Russia and China], their AI hack systems are likely to be quite sophisticated, and so the defenders will generally be playing catch up to AI-powered attacks."

Read the original here:

Artificial intelligence is playing a bigger role in cybersecurity, but the bad guys may benefit the most - CNBC


A terrifying AI-generated woman is lurking in the abyss of latent space – TechCrunch

Posted: at 10:06 pm

There's a ghost in the machine. Machine learning, that is.

We are all regularly amazed by AI's capabilities in writing and creation, but who knew it had such a capacity for instilling horror? A chilling discovery by an AI researcher finds that the latent space comprising a deep learning model's memory is haunted by at least one horrifying figure: a bloody-faced woman now known as Loab.

(Warning: Disturbing imagery ahead.)

But is this AI model truly haunted, or is Loab just a random confluence of images that happens to come up in various strange technical circumstances? Surely it must be the latter (unless you believe spirits can inhabit data structures), but it's more than a simple creepy image; it's an indication that what passes for a brain in an AI is deeper and creepier than we might otherwise have imagined.

Loab was discovered (encountered? summoned?) by a musician and artist who goes by Supercomposite on Twitter (this article originally used her name, but she said she preferred to use her handle for personal reasons, so it has been substituted throughout). She explained the Loab phenomenon in a thread that achieved a large amount of attention for "a random creepy AI thing," something there is no shortage of on the platform, suggesting it struck a chord (minor key, no doubt).

Supercomposite was playing around with a custom AI text-to-image model (similar to, but not, DALL-E or Stable Diffusion), and specifically experimenting with negative prompts.

Ordinarily, you give the model a prompt, and it works its way toward creating an image that matches it. If you have one prompt, that prompt has a weight of one, meaning that's the only thing the model is working toward.

You can also split prompts, saying things like "hot air balloon::0.5, thunderstorm::0.5", and it will work toward both of those things equally. This isn't really necessary, since the language part of the model would also accept "hot air balloon in a thunderstorm", and you might even get better results.

But the interesting thing is that you can also have negative prompts, which causes the model to work away from that concept as actively as it can.
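The prompt-weighting syntax can be pictured as simple vector arithmetic over prompt embeddings. The sketch below is purely illustrative and not the actual model's code; `embed` here is a seeded-random stand-in for a learned text encoder, and `guidance_direction` is a hypothetical name for the combining step:

```python
import math
import random

# Toy stand-in for a text encoder: maps a prompt to a unit vector.
# (A real model would use a learned encoder; this seeded-random
# embedding is purely illustrative.)
def embed(prompt, dim=8):
    rng = random.Random(prompt)  # deterministic per prompt
    v = [rng.gauss(0, 1) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# Combine weighted prompts into a single guidance direction.
# A negative weight (e.g. "Brando::-1") pushes *away* from that concept.
def guidance_direction(weighted_prompts):
    dim = 8
    total = [0.0] * dim
    for prompt, w in weighted_prompts.items():
        for i, x in enumerate(embed(prompt, dim)):
            total[i] += w * x
    norm = math.sqrt(sum(x * x for x in total))
    return [x / norm for x in total]

# "hot air balloon::0.5, thunderstorm::0.5": equal pull toward both
both = guidance_direction({"hot air balloon": 0.5, "thunderstorm": 0.5})

# "Brando::-1": run as far from the concept as possible
away = guidance_direction({"Brando": -1.0})
toward = guidance_direction({"Brando": 1.0})

# The negative direction is exactly the opposite of the positive one.
print(all(abs(a + t) < 1e-9 for a, t in zip(away, toward)))  # True
```

In a real diffusion model the "direction" conditions every denoising step rather than being a single vector, but the intuition is the same: weights scale how hard the model is pulled toward, or pushed away from, each concept.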

This process is far less predictable, because no one knows how the data is actually organized in what one might anthropomorphize as the "mind" or "memory" of the AI, known as latent space.

"The latent space is kind of like you're exploring a map of different concepts in the AI. A prompt is like an arrow that tells you how far to walk in this concept map and in which direction," Supercomposite told me.

Here's a helpful rendering of a much, much simpler latent space in an old Google translation model working on a single sentence in multiple languages:

The latent space of a system like DALL-E is orders of magnitude larger and more complex, but you get the general idea. (If each dot here were a million spaces like this one, it would probably be a bit more accurate.) Image Credits: Google

"So if you prompt the AI for an image of a face, you'll end up somewhere in the middle of the region that has all of the images of faces and get an image of a kind of unremarkable average face," she said. With a more specific prompt, you'll find yourself among the frowning faces, or faces in profile, and so on. But with a negatively weighted prompt, you do the opposite: You run as far away from that concept as possible.

But what's the opposite of "face"? Is it the feet? Is it the back of the head? Something faceless, like a pencil? While we can argue it amongst ourselves, in a machine learning model the answer was decided during the process of training: however visual and linguistic concepts got encoded into its memory, they can be navigated consistently, even if the arrangement may be somewhat arbitrary.

Image Credits: Supercomposite

We saw a related concept in a recent AI phenomenon that went viral because one model seemed to reliably associate some nonsense words with birds and insects. But it wasn't that DALL-E had a secret language in which "Apoploe vesrreaitais" means birds; it's just that the nonsense prompt basically had it throwing a dart at a map of its mind and drawing whatever lands nearby, in this case birds, because the first word is kind of similar to some scientific names. So the arrow just pointed generally in that direction on the map.

Supercomposite was playing with this idea of navigating the latent space, having given the prompt "Brando::-1", which would have the model produce whatever it thinks is the very opposite of Brando. It produced a weird skyline logo with nonsense but somewhat readable text: "DIGITA PNTICS."

Weird, right? But again, the model's organization of concepts wouldn't necessarily make sense to us. Curious, Supercomposite wondered if she could reverse the process. So she put in the prompt "DIGITA PNTICS skyline logo::-1". If this image was the opposite of Brando, perhaps the reverse was true too, and it would find its way to, perhaps, Marlon Brando?

Instead, she got this:

Image Credits: Supercomposite

Over and over she submitted this negative prompt, and over and over the model produced this woman, with bloody, cut or unhealthily red cheeks and a haunting, otherworldly look. Somehow, this woman (whom Supercomposite named Loab, for the text that appears in the top-right image there) is reliably the AI model's best guess for the most distant possible concept from a logo featuring nonsense words.

What happened? Supercomposite explained how the model might think when given a negative prompt for a particular logo, continuing her metaphor from before.

"You start running as fast as you can away from the area with logos," she said. "You maybe end up in the area with realistic faces, since that is conceptually really far away from logos. You keep running, because you don't actually care about faces, you just want to run as far away as possible from logos. So no matter what, you are going to end up at the edge of the map. And Loab is the last face you see before you fall off the edge."

Image Credits: Supercomposite
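Supercomposite's running-away metaphor can be sketched as a farthest-point lookup on a toy concept map. Everything here is invented for illustration: the 2-D coordinates, the labels, and the idea that "loab" sits at the far edge; a real latent space has thousands of unlabeled dimensions and no such landmarks:

```python
# Toy "latent map": a handful of concept coordinates with labels.
concepts = {
    "logo":      (1.0, 0.0),
    "face":      (-0.5, 0.5),
    "landscape": (0.0, 1.0),
    "loab":      (-1.0, -0.8),  # hypothetical far edge of the map
}

def distance(a, b):
    """Euclidean distance between two points on the map."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def negative_prompt(target):
    """Where you end up if you 'run as far away as possible' from target."""
    t = concepts[target]
    return max(concepts, key=lambda name: distance(concepts[name], t))

print(negative_prompt("logo"))  # loab
```

The point of the sketch: a negative prompt does not pick a meaningful "opposite," it just maximizes distance, so whatever happens to sit at the far rim of the map keeps winning, which is why the same figure comes back again and again.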

Negative prompts don't always produce horrors, let alone so reliably. Anyone who has played with these image models will tell you it can actually be quite difficult to get consistent results for even very straightforward prompts.

Put in "a robot standing in a field" four or 40 times and you may get as many different takes on the concept, some hardly recognizable as robots or fields. But Loab appears consistently with this specific negative prompt, to the point where it feels like an incantation out of an old urban legend.

You know the type: Stand in a dark bathroom looking at the mirror and say "Bloody Mary" three times. Or even earlier folk instructions for how to reach a witch's abode or the entrance to the underworld: Holding a sprig of holly, walk backward 100 steps from a dead tree with your eyes closed.

"DIGITA PNTICS skyline logo::-1" isn't quite as catchy, but as magic words go, the phrase is at least suitably arcane. And it has the benefit of working. Only on this particular model, of course: every AI platform's latent space is different, though who knows if Loab may be lurking in DALL-E or Stable Diffusion too, waiting to be summoned.

Loab as an ancient statue, but it's unmistakably her. Image Credits: Supercomposite

In fact, the incantation is strong enough that Loab seems to infect even split prompts and combinations with other images.

"Some AIs can take other images as prompts; they basically can interpret the image, turning it into a directional arrow on the map just like they treat text prompts," explained Supercomposite. "I used Loab's image and one or more other images together as a prompt; she almost always persists in the resulting picture."

Sometimes more complex or combination prompts treat one part as more of a loose suggestion. But ones that include Loab seem not just to veer toward the grotesque and horrifying, but to include her in a very recognizable fashion. Whether she's being combined with bees, video game characters, film styles or abstractions, Loab is front and center, dominating the composition with her damaged face, neutral expression and long dark hair.

It's unusual for any prompt or imagery to be so consistent as to haunt other prompts the way she does. Supercomposite speculated on why this might be.

"I guess because she is very far away from a lot of concepts, and so it's hard to get out of her little spooky area in latent space. The cultural question, of why the data put this woman way out there at the edge of the latent space, near gory horror imagery, is another thing to think about," she said.

Although it's an oversimplification, latent space really is like a map, and the prompts are like directions for navigating it; the system draws whatever ends up being around where it's asked to go, whether it's well-trodden ground like "still life by a Dutch master" or a synthesis of obscure or disconnected concepts: "robots battle aliens in a cubist etching by Doré." As you can see:

Image Credits: TechCrunch / DALL-E

A purely speculative explanation of why Loab exists has to do with how that map is laid out. As Supercomposite suggested, it's likely that she emerges simply because company logos and horrific, scary imagery are very far from one another conceptually.

A negative prompt doesn't mean "take 10 data steps in the other direction"; it means "keep going as far as you can," and it's more than possible that images at the farthest reaches of an AI's latent space have more extreme or uncommon values. Wouldn't you organize it that way, with stuff that has lots of commonalities or cross-references in the center (however you define that) and weird, wild stuff that's rarely relevant out at the edge?

Therefore negative prompts may act as a way to explore the frontier of the AI's mind map, skimming the concepts it deems too outlandish to store among prosaic concepts like happy faces, beautiful landscapes or frolicking pets.

Image Credits: Devin Coldewey

The unnerving fact is no one really understands how latent spaces are structured or why. There is of course a great deal of research on the subject, and some indications that they are organized in some ways like how our own minds are (which makes sense, since they were more or less built in imitation of them). But in other ways they have totally unique structures connecting across vast conceptual distances.

To be clear, it's not as if there is some clutch of images specifically of Loab waiting to be found; they're definitely being created on the fly, and Supercomposite told me there's no indication the digital cryptid is based on any particular artist or work. That's why latent space is latent! These images emerged from a combination of strange and terrible concepts that all happen to occupy the same area in the model's memory, much like how, in the Google visualization earlier, languages were clustered based on their similarity.

From what dark corner or unconscious associations sprang Loab, fully formed and coherent? We can't yet trace the path the model took to reach her location; a trained model's latent space is vast and impenetrably complex.

The only way we can reach the spot again is through the magic words, spoken while we step backward through that space with our eyes closed, until we reach the witch's hut that can't be approached by ordinary means. Loab isn't a ghost, but she is an anomaly; yet paradoxically she may be one of an effectively infinite number of anomalies waiting to be summoned from the farthest, unlit reaches of any AI model's latent space.

It may not be supernatural, but it sure as hell ain't natural.

Read the rest here:

A terrifying AI-generated woman is lurking in the abyss of latent space - TechCrunch


PyTorch Takes AI/ML Back to Its Research, Open Source Roots – thenewstack.io

Posted: at 10:06 pm

Meta's decision to launch the PyTorch Foundation and contribute the PyTorch machine learning framework to the Linux Foundation indicates the maturity of the technology and a move closer to its open source roots.

"AI adoption seems to be stuck and sinking, outside of some applications of AI in text and image generation, natural language processing, computer vision and some around pattern detection and predictive analytics," said Ronald Schmelzer, principal analyst at Cognilytica, a firm that focuses on AI research.

"Open source has been gaining much faster adoption than vendor solutions in the market, as can be seen by the difficulties encountered by many fast-moving startups, unicorns, and IPO'd companies," Schmelzer told The New Stack. "With open source technology and data leading the way in AI, it's no surprise that Meta is loosening its hold on PyTorch and letting the community guide its development."

PyTorch moves to the new, independent PyTorch Foundation, under the Linux Foundation umbrella, with a governing board composed of representatives from AMD, AWS, Google Cloud, Meta, Microsoft Azure, and Nvidia, with the intention to expand over time. The PyTorch Foundation will serve as the steward for the technology and will support PyTorch through conferences, training courses, and other initiatives, Meta AI said in a blog post.

Since the release of PyTorch 1.0 in 2018, PyTorch has grown into the lingua franca of AI research, Meta AI said. The framework will continue to be a part of Metas AI research and engineering work, the team said in its post. PyTorch is also a foundation of the AI research and products built by Amazon Web Services, Microsoft Azure, OpenAI, and many other companies and research institutions.

Most of those organizations are founding members of the PyTorch Foundation.

"This is a Facebook + Google + AWS vs. Microsoft story," said Lawrence E. Hecht, an analyst for The New Stack. The Stack Overflow survey collected data on almost 4,000 users. They were significantly more likely to have used Google Cloud recently as compared to the study average (35% vs. 20%), a 75% difference. It also catapults Google past Microsoft Azure (25% of PyTorch users vs. 23% overall), to be closer to the leader, AWS (44% vs. 41%).

"In many ways, AI is retreating back to some research and open source roots, and the wave of hype and interest in AI by investors and for-profit companies seems to be waning," Schmelzer said. "We're past peak on AI hype and winding down. Yeah, we're past irrational exuberance on AI and into some sober reality. Companies like C3 and DataRobot and others are really struggling now that AI is not top of the list for many organizations."

Holger Mueller, an analyst at Constellation Research, noted that, in general, it is better to have an open source framework at an independent organization. "We can also assume that Meta thinks that PyTorch is no longer where it wants to invest solely, and maybe it is not that relevant for metaverse use cases," he said.

According to Jim Zemlin, executive director of the Linux Foundation, AI/ML is a truly open source-first ecosystem. "The majority of popular AI and ML tools and frameworks are open source. The community clearly values transparency and the ethos of open source," Zemlin said in a blog post, noting that the Linux Foundation will provide a neutral home for PyTorch.

Moreover, the PyTorch Foundation's mission is to drive the adoption of AI tooling by fostering and sustaining an ecosystem of open source, vendor-neutral projects with PyTorch. It will democratize state-of-the-art tools, libraries, and other components to make these innovations accessible to everyone. It also will focus on the business and product marketing of PyTorch and the related ecosystem, Meta said. The transition will not entail any changes to PyTorch's code and core project, including its separate technical governance structure.

As of August 2022, PyTorch was one of the five fastest-growing open source software communities in the world alongside the Linux kernel and Kubernetes, Zemlin said. From August 2021 through August 2022, PyTorch counted over 65,000 commits. Over 2,400 contributors participated in the effort, filing issues or PRs or writing documentation. These numbers place PyTorch among the most successful open source projects in history.

In January, PyTorch celebrated its five-year anniversary since its inception in Meta's AI labs. Now, all releases, features, and technical direction will continue to be driven by PyTorch's community: from individual code contributors, to those who review and commit changes, to the module maintainers.

"The creation of the PyTorch Foundation will ensure business decisions are being made in a transparent and open manner by a diverse group of members for years to come," said Soumith Chintala, PyTorch lead maintainer and AI researcher at Meta, in a blog post.

However, the technical decisions remain in the control of individual maintainers, he said.

While, up to now, the business governance of PyTorch was unstructured and like a scrappy startup, the next stage is to support the interests of multiple stakeholders.

"We chose the Linux Foundation as it has vast organization experience hosting large multi-stakeholder open source projects with the right balance of organizational structure and finding specific solutions for these projects," Chintala said. Such projects include Linux, Kubernetes, Node.js, Hyperledger and RISC-V.

More:

PyTorch Takes AI/ML Back to Its Research, Open Source Roots - thenewstack.io


How Can Dentistry Benefit from AI? It's All in the Data – insideBIGDATA

Posted: at 10:06 pm

In this special guest feature, Florian Hillen, founder and CEO of VideaHealth, points out that, like many other industries within the healthcare ecosystem, dentistry is beginning to adopt artificial intelligence (AI) solutions to improve patient care, lower costs, and streamline workflows and care delivery.

Like many other industries within the healthcare ecosystem, dentistry is beginning to adopt artificial intelligence (AI) solutions to improve patient care, lower costs, and streamline workflows and care delivery. While the dental profession is no stranger to cutting-edge technology, AI represents such a revolutionary change that few organizations have the knowledge and skill sets to implement an effective strategy.

This is particularly important when applying AI to diagnose and treat patients. Ideally, AI should exceed human-level performance in speed, efficiency and accuracy. But unlike traditional technologies that are simply powered up and put to work, AI must be trained, conditioned and trusted to perform as expected even under difficult or unusual circumstances.

This requires dental providers to implement an AI training engine, what we call an "AI factory," that incorporates key elements in the creation and conditioning of AI models. These include things like the data pipeline, labeling operations and software infrastructure, as well as the machine learning programs themselves, all of which are designed to detect a wide range of dental pathologies and provide highly tailored courses of treatment based on patients' needs.
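The stages named above (data pipeline, labeling operations, model training) can be sketched as a minimal, hypothetical workflow. All class and function names here are illustrative assumptions for the sake of the sketch, not VideaHealth's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Radiograph:
    """One training example: an image plus its pathology labels."""
    patient_id: str
    pixels: list                              # placeholder for image data
    labels: list = field(default_factory=list)

def data_pipeline(raw_records: list) -> list:
    """Ingest and normalize raw records from providers, insurers, etc."""
    return [Radiograph(r["patient_id"], r["pixels"]) for r in raw_records]

def label(images: list, annotator) -> list:
    """Labeling operations: attach pathology annotations to each image."""
    for img in images:
        img.labels = annotator(img.pixels)
    return images

def train(labeled: list) -> dict:
    """Stand-in for the ML step; here it just tallies label frequencies."""
    counts = {}
    for img in labeled:
        for lbl in img.labels:
            counts[lbl] = counts.get(lbl, 0) + 1
    return counts

raw = [{"patient_id": "p1", "pixels": [0, 1]},
       {"patient_id": "p2", "pixels": [1, 1]}]
# A toy annotator standing in for human labelers or a pretrained detector.
model = train(label(data_pipeline(raw),
                    annotator=lambda px: ["caries"] if sum(px) > 1 else []))
print(model)  # {'caries': 1}
```

The point of the factory framing is that each stage is a repeatable, automatable step rather than an ad hoc process, so new data sources can be plugged into the front of the pipeline without redesigning the rest.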

Turning Data into Knowledge

Training AI models is no easy job. It requires enormous amounts of data and strict guidance as to how that data is presented so as not to bias the algorithm, which can skew results and lead to health inequities. With the ability to support immense computing power to process calculations very quickly, coupled with access to aggregated and centralized data stores, today's platforms can comb through hundreds of millions of data points from service providers, insurance companies, universities and other sources to ensure that results are not just accurate but impartial as well.
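One concrete form the bias guard above can take is checking that no single data source dominates the training set before it reaches the model. This is a hedged sketch; the field names and the 50% threshold are illustrative assumptions:

```python
from collections import Counter

def source_shares(records: list) -> dict:
    """Fraction of the dataset contributed by each data source."""
    counts = Counter(r["source"] for r in records)
    total = sum(counts.values())
    return {src: n / total for src, n in counts.items()}

def is_balanced(records: list, max_share: float = 0.5) -> bool:
    """Flag datasets where any single source exceeds max_share of records."""
    return all(share <= max_share for share in source_shares(records).values())

# Eight records drawn from three kinds of sources named in the article:
# insurers, universities, and dental service organizations.
data = ([{"source": "insurer_a"}] * 3
        + [{"source": "university_b"}] * 3
        + [{"source": "dso_c"}] * 2)
print(is_balanced(data))  # True: the largest share is 3/8
```

Checks like this are a small part of a real fairness audit (which would also look at patient demographics, equipment types, and label quality), but they illustrate why centralized, multi-source data stores matter.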

This is what gives AI-driven processes the ability to enhance the clinical experience. By eliminating human error and bias, AI delivers more accurate diagnoses, better treatment options and fewer mistakes that must be corrected, usually at great expense or pain, at a later date.

It is important to note the data used to inform these models is not merely textual or numeric in nature, but pictographic as well. AI scans X-rays, MRIs and other visual elements to detect decay, abscesses and even cancers, sometimes long before they become apparent to the naked eye. This technology can also be used to customize crowns, bridges and implants much more quickly and more accurately than traditional procedures.

A key problem in the dental industry is the fractured nature of most practices. The vast majority of dental practices are independently owned and operated, which makes data collection and analysis difficult at best, particularly at the scale needed to draw accurate conclusions. While this has started to change in recent years with the rise of dental service organizations (DSOs) and increased consolidation within the insurance industry, to date there has been very little progress in capturing broad data sets, which are largely subject to data privacy and protection laws.

The AI Factory Approach

New companies are looking to change this with the development of factory-style data preparation modeled on the analytics engines of Netflix and other data-driven organizations. Using highly automated processes that can be quickly scaled to accommodate massive data sets from a multitude of sources, a properly designed AI factory can streamline the analytics process to ensure high-quality data is being fed into AI models.

This, in turn, produces high-quality results much the same way that automation has improved the manufacturing of cars, food and other physical products.

Perhaps one of the most basic improvements this factory approach to AI has achieved is cavity detection. A recent FDA trial demonstrated how AI-driven software trained on a factory model can reduce the number of missed cavities by 43% and cut the number of false positives by 15%. All dentists involved in the trial, regardless of training and experience, reported a distinct improvement in the ability to make accurate diagnoses.
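The arithmetic behind those trial figures is straightforward. This sketch simply applies the reported reductions (43% fewer missed cavities, 15% fewer false positives) to a hypothetical baseline of 100 errors of each kind; the baseline count is an assumption for illustration:

```python
def apply_reduction(baseline_errors: float, reduction_pct: float) -> float:
    """Return the error count remaining after a percentage reduction."""
    return baseline_errors * (1 - reduction_pct / 100)

# Figures reported from the FDA trial cited above.
missed_remaining = apply_reduction(100, 43)      # ~57 missed cavities remain
false_pos_remaining = apply_reduction(100, 15)   # ~85 false positives remain
print(missed_remaining, false_pos_remaining)
```

Note that the two percentages are relative reductions against each error type's own baseline, not absolute rates, so they can't be summed into a single accuracy number.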

Dentistry is a highly specialized sector of the broader healthcare industry, and as such it relies on unique data points in order to provide effective service to patients. At the same time, experience levels, equipment and diagnostic capabilities vary greatly, so much so that in most cases ten different dentists will provide ten different diagnoses.

By bringing order to this environment, an AI factory not only streamlines dental care and reduces costs but greatly increases accuracy in both the assessment and treatment of patients. Professional discrepancies will remain, of course, but disagreements over data and how it should be treated will diminish. The end result should be better health outcomes and less burden, financial and otherwise, on today's bloated, largely redundant healthcare system.

The knowledge to accomplish this feat is already out there. All that is needed is an efficient, effective means of utilizing it.

Sign up for the free insideBIGDATA newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideBIGDATANOW

The rest is here:

How Can Dentistry Benefit from AI? Its All in the Data - insideBIGDATA
