Worldwide Artificial Intelligence (AI) in Drug Discovery Market to reach $4.0 billion by 2027 at a CAGR of 45.7% – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence (AI) in Drug Discovery Market by Component (Software, Service), Technology (ML, DL), Application (Neurodegenerative Diseases, Immuno-Oncology, CVD), End User (Pharmaceutical & Biotechnology, CRO), Region - Global forecast to 2024" report has been added to ResearchAndMarkets.com's offering.

The artificial intelligence (AI) in drug discovery market is projected to reach USD 4.0 billion by 2027 from USD 0.6 billion in 2022, at a CAGR of 45.7% during the forecast period. The growth of this market is primarily driven by the need to control drug discovery and development costs, the pressure to reduce the overall time this process takes, and the rising adoption of cloud-based applications and services. On the other hand, the inadequate availability of skilled labor is a key factor restraining market growth to some extent over the forecast period.

The services segment is estimated to hold the major share in 2022 and is also expected to grow at the highest CAGR over the forecast period

On the basis of offering, the AI in drug discovery market is bifurcated into software and services. The services segment is expected to account for the largest share of the global market in 2022 and to grow at the fastest CAGR during the forecast period. The benefits associated with these services and the strong demand for AI services among end users are the key factors driving this segment's growth.

The machine learning technology segment accounted for the largest share of the global AI in drug discovery market

On the basis of technology, the AI in drug discovery market is segmented into machine learning and other technologies. The machine learning segment accounted for the largest share of the global market in 2021 and is expected to grow at the highest CAGR during the forecast period. High adoption of machine learning among CROs and pharmaceutical and biotechnology companies, along with the ability of these technologies to extract insights from large data sets and thereby accelerate the drug discovery process, supports this segment's growth.

The pharmaceutical & biotechnology companies segment is expected to hold the largest share of the market in 2022

On the basis of end user, the AI in drug discovery market is divided into pharmaceutical & biotechnology companies, CROs, and research centers and academic & government institutes. In 2021, the pharmaceutical & biotechnology companies segment accounted for the largest share of the AI in drug discovery market, while research centers and academic & government institutes are expected to witness the highest CAGR during the forecast period. Strong demand for AI-based tools that make the entire drug discovery process more time- and cost-efficient is the key growth driver for the pharmaceutical & biotechnology end-user segment.

Key Topics Covered:

1 Introduction

2 Research Methodology

3 Executive Summary

4 Premium Insights

4.1 Growing Need to Control Drug Discovery & Development Costs is a Key Factor Driving the Adoption of AI in Drug Discovery Solutions

4.2 Services Segment to Witness the Highest Growth During the Forecast Period

4.3 Deep Learning Segment Accounted for the Largest Market Share in 2021

4.4 North America is the Fastest-Growing Regional Market for AI in Drug Discovery

5 Market Overview

5.1 Introduction

5.2 Market Dynamics

5.2.1 Market Drivers

5.2.1.1 Growing Number of Cross-Industry Collaborations and Partnerships

5.2.1.2 Growing Need to Control Drug Discovery & Development Costs and Reduce Time Involved in Drug Development

5.2.1.3 Patent Expiry of Several Drugs

5.2.2 Market Restraints

5.2.2.1 Shortage of AI Workforce and Ambiguous Regulatory Guidelines for Medical Software

5.2.3 Market Opportunities

5.2.3.1 Growing Biotechnology Industry

5.2.3.2 Emerging Markets

5.2.3.3 Focus on Developing Human-Aware AI Systems

5.2.3.4 Growth in the Drugs and Biologics Market Despite the COVID-19 Pandemic

5.2.4 Market Challenges

5.2.4.1 Limited Availability of Data Sets

5.3 Value Chain Analysis

5.4 Porter's Five Forces Analysis

5.5 Ecosystem

5.6 Technology Analysis

5.7 Pricing Analysis

5.8 Business Models

5.9 Regulations

5.10 Conferences and Webinars

5.11 Case Study Analysis

6 Artificial Intelligence in Drug Discovery Market, by Offering

7 Artificial Intelligence in Drug Discovery Market, by Technology

8 Artificial Intelligence in Drug Discovery Market, by Application

9 Artificial Intelligence in Drug Discovery Market, by End-user

10 Artificial Intelligence in Drug Discovery Market, by Region

11 Competitive Landscape

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/q5pvns


Glorikian’s New Book Sheds Light on Artificial Intelligence Advances in the Healthcare Field – The Armenian Mirror-Spectator

After describing various ways in which AI and big data are already involved in our daily lives, from the food we eat to the cars we drive and the things we buy, he concludes that this is leading to the Fourth Industrial Revolution, a phrase coined by Klaus Schwab, the head of the World Economic Forum. All aspects of life will be transformed in a way analogous to the prior industrial revolutions (first the use of steam and waterpower, second the expansion of electricity and telegraph cables, and third, the digital revolution of the end of the 20th century).

At the heart of the book are the chapters in which he explains what data and AI have already accomplished for our health and what they can do in the future. The ever-expanding amount of personal data available, combined with advances in AI, allows for increasingly accurate diagnoses and treatments, along with better sensors and software. Glorikian notes that today there are over 350,000 different healthcare apps and that the mobile health market is expected to approach $290 billion in revenue by 2025.

Glorikian employs a light, informal style of writing, with references to pop culture such as Star Trek. He asks the reader questions and intersperses each chapter with what he calls sidebars: short illustrative stories or sets of examples. For example, “AI Saved My Life: The Watch That Called 911 for a Fallen Cyclist” (p. 68) starts with a man who lost consciousness after falling off his bike, and then lists other ways current phones can save lives. Other sidebars explain basic concepts like the meaning of genes and DNA, or gene editing with CRISPR.

Present and Future Advances

Before getting into more complex issues, Glorikian describes what may be most familiar to readers: AI-enabled smartphone apps which guide individuals toward optimal diets and exercise, and which allow for group activities through remote communication and virtual reality. There are already countless AI-enabled smartphone apps and sensors allowing us to track our movements and exercise, as well as our diets, sleep and even stress levels. In the future, their approach will become more tailored to individual needs and data, including genomics, environment, lifestyle and molecular biology, with specific recommendations.

He speculates as to what innovations the near future may bring, remarking: “What isn’t clear is just how long it will take us to move from this point of collecting and finding patterns in the data, to one where we (and our healthcare providers) are actively using those patterns to make accurate predictions about our health.” He gives the example of having an app to track migraine headaches, which can find and analyze patterns in the data (do they occur on nights when you have eaten a particular kind of food or traveled on a plane, for example). Eventually, at a more advanced stage, it might suggest you take an earlier flight or eat in a different restaurant that does not use ingredients that might be migraine triggers for you.
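The pattern-finding he describes amounts to computing conditional frequencies over a personal log. A toy sketch (hypothetical data and trigger names, not any actual app's method) might look like:

```python
from collections import Counter

# Hypothetical daily log: events recorded that day and whether a migraine followed.
days = [
    ({"red wine", "flight"}, True),
    ({"red wine"}, True),
    ({"flight"}, False),
    ({"aged cheese"}, True),
    ({"flight", "aged cheese"}, True),
    ({"red wine"}, False),
    (set(), False),
]

def trigger_rates(days):
    """For each logged event, the fraction of days featuring that event
    that were followed by a migraine."""
    seen, hits = Counter(), Counter()
    for events, migraine in days:
        for e in events:
            seen[e] += 1
            hits[e] += migraine  # True counts as 1
    return {e: hits[e] / seen[e] for e in seen}

rates = trigger_rates(days)
# Events with the highest rates are candidate triggers worth flagging to the user.
likely = sorted(rates, key=rates.get, reverse=True)
```

A real app would need far more data and proper statistics to separate correlation from coincidence, but the underlying bookkeeping is this simple.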

Healthcare will become more decentralized, Glorikian predicts, with people no longer forced to wait hours in hospital emergency rooms. Instead, some issues can be determined through phone apps and remote specialists, and others can be handled at rapid care facilities or pharmacies. Hospitals themselves will become more efficient with command centers monitoring the usage of various resources and using AI to monitor various aspects of patient health. Telerobotics will allow access to specialized surgeons located in major urban centers even if there are none in the local hospital.

In the chapter on genetics, Glorikian presents three ways in which unlocking the secrets of an individual’s genome can have practical health consequences right now. The first is the prevention of bad drug reactions through pharmacogenomics, or learning how genes affect response to drugs. Second are enhanced screening and preventative treatment for hereditary cancer syndromes. One major advancement just starting to be used more, notes Glorikian, is liquid biopsy, in which a blood sample allows identification of tumor cells, as opposed to standard physical biopsies. It is less invasive and sometimes more accurate for detecting cancers prior to the appearance of symptoms. The third way is DNA sequencing at birth to screen for many disorders which are treatable when caught early. The future may see corrections of various mutations through gene editing.

He points out the various benefits in the health field of collecting large sets of data. For example, it allows the use of AI or machine learning to better read mammogram results and to better predict which patients would benefit from procedures like cardiac resynchronization therapy or which have a greater risk of cardiovascular disease. There is hope that this approach can help detect the onset and progression of diseases like Alzheimer’s or diabetic retinopathy. Ultimately it may even be able to predict fairly reliably when individuals will die.

At present, AI with access to sufficient data is helping identify new drugs, saving time and money by using statistical models to predict whether the new drugs will work even before trials. AI can determine which variables or dimensions to remove when making complex computations of models in order to speed up computational processes. This is important when there are large numbers of variables and vast amounts of data.
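The dimension-dropping idea can be illustrated with a minimal variance filter: a variable that barely changes across samples carries little signal and can be removed before heavier modeling. This is a generic sketch with made-up numbers, not a description of any specific drug-discovery pipeline:

```python
def low_variance_columns(rows, threshold=1e-3):
    """Return indices of columns whose variance falls below threshold.

    rows: list of equal-length numeric lists (samples x variables).
    """
    n = len(rows)
    flagged = []
    for j in range(len(rows[0])):
        col = [r[j] for r in rows]
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        if var < threshold:
            flagged.append(j)
    return flagged

# Hypothetical assay data: column 1 is nearly constant and adds no signal.
data = [
    [0.9, 1.000, 3.2],
    [1.4, 1.001, 2.8],
    [0.7, 1.000, 3.9],
    [1.1, 0.999, 2.5],
]
drop = low_variance_columns(data)
reduced = [[v for j, v in enumerate(r) if j not in drop] for r in data]
```

Production systems use more sophisticated techniques (PCA, feature importance from trained models), but the goal is the same: shrink the problem before the expensive computation starts.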

Glorikian does not miss the opportunity to use the current Covid-19 crisis as a teaching moment. In a chapter called Solving the Pandemic Problem, Glorikian discusses the role AI, machine learning and big data played in the fight against the coronavirus pandemic, in spotting it early on, predicting where it might travel next, sequencing its genome in days, and developing diagnostic tests, vaccines and treatments. Vaccine development, like drug development, is much faster today than even 20 years ago, thanks to computational modeling and virtual clinical trials and studies.

Potential Problems

Glorikian does not shy away from raising some of the potential problems associated with the wide use of AI in medicine, such as the threat to patient privacy and ethical questions about what machines should be allowed to do. Should genetic editing be allowed in humans for looks, intelligence or various types of talents? Should AI predictions of lifespan and dates of death be used? What types of decisions should machines be allowed to make in healthcare? And what sort of triage should be allowed in case of limited medical resources (if AI predicts, for example, that one patient is ten times more likely to die than another despite medical intervention)? There are grave dangers if hackers access databanks or medical machines.

There are also potential operational problems with using data as a basis for AI, such as outdated information, biased data, missing data (and how it is handled), misanalyzed or differently analyzed data.

Despite all these issues, Glorikian is optimistic about the value of AI. He concludes, “But despite the risk, for the most part, the benefits outweigh the potential downsides… The data we willingly give up makes our lives better.”

Armenian Connection

When asked at the end of June, 2022 how Armenia compares with the US and other parts of the world in the use of AI in healthcare, he made the distinction between the Armenian healthcare system and Armenian technology that is directed at the world healthcare system.

On the one hand, he said, “I don’t know of a lot that is being incorporated into the healthcare system, although we do have a national electronic medical record system that they have really been improving on a consistent basis.” Having a call management record system throughout the country will provide data for the next step in the use of AI, and that, he said, is “very exciting.”

On the other hand, for technology companies involved in healthcare and biotechnology in Armenia, he said, “I would always like to see more, but there are some really interesting companies that have sprouted up over the last five years.” Also, with the tech giant NVIDIA opening a research center in Armenia, Glorikian said he hoped there will be interesting synergies, since this company does invest in the healthcare area.

Harry Glorikian, second from left, next to Acting Prime Minister Nikol Pashinyan, at a December 19, 2018 Yerevan meeting

At the end of 2018, Glorikian met with then Acting Prime Minister Nikol Pashinyan to discuss launching the Armenian Genome project to expand the scope of genetic studies in the field of healthcare. He said that this undertaking was halted for reasons beyond his understanding. He said, “My lesson learned was you can move a lot faster and have significant impact by focusing on the private sector.”

Indeed, this is what he does, as an individual investor and as a member of the Angel Investor Club of Armenia. While the group looks at a broad range of companies, mainly technology driven, he and a few other people in it take a look at those which are involved in healthcare. In fact, he is going to California at the very end of June to learn more about a robot companion for children called Moxie, prepared by Embodied, Inc., a company founded by veteran roboticist Paolo Pirjanian. Pirjanian, who was a guest on Glorikian’s podcast several weeks ago, lives in California, but Glorikian said that the back end of his company’s work is done in Armenia.

Glorikian added that he is always finding out about or running into Armenians in the diaspora doing work with AI.

Changes

When asked what has changed since the publication of the book last year, he replied, “Things are getting better!” While hardware does not change overnight, he said that there have been incremental improvements to software during the period of time it took to write the book and then have it published. He said, “For someone reading the book now, you are probably saying, ‘I had no idea that this was even available.’ For someone like me, you already feel a little behind.”

Readers of the book have already begun to contact Glorikian with anecdotes about what it led them to find out and do. He hopes the book will continue to reach more people. He said, “The biggest thing I get out of it is when someone says ‘I learned this and I did something about it.’” When individuals have access to more quantifiable data, not only can they manage their own health better, but they also provide their doctors with more longitudinal data that helps the doctor be more effective. Glorikian said this should have a corollary effect of deflating healthcare costs in the long run.

One minor criticism of the book, at least of the paperback version that fell into the hands of this reviewer, is the poor quality of some of the images used. The text which is part of those illustrations is very hard to read. Otherwise, this is a very accessible read for an audience of varying backgrounds seeking basic information on the ongoing transformations in healthcare through AI.


Deep Dive Into Advanced AI and Machine Learning at The Behavox Artificial Intelligence in Compliance and Security Conference – Business Wire

MONTREAL--(BUSINESS WIRE)--On July 19th, Behavox will host a conference to share the next generation of artificial intelligence in Compliance and Security with clients, regulators, and industry leaders.

The Behavox AI in Compliance and Security Conference will be held at the company HQ in Montreal. With this exclusive in-person conference, Behavox is relaunching its pre-COVID tradition of inviting customers, regulators, AI industry leaders, and partners to its Montreal HQ to deep dive into workshops and keynote speeches on compliance, security, and artificial intelligence.

“We’re extremely excited to relaunch our tradition of inviting clients to our offices in order to learn directly from the engineers and data scientists behind our groundbreaking innovations,” said Chief Customer Intelligence Officer Fahreen Kurji. “Attendees at the conference will get to enjoy keynote presentations as well as Innovation Paddocks, where you can test drive our latest innovations and also spend time networking with other industry leaders and regulators.”

Keynote presentations will cover:

The conference will also feature Innovation Paddocks where guests will be able to learn more from the engineers and data scientists behind Behavox innovations. At this conference, Behavox will demonstrate its revolutionary new product - Behavox Quantum. There will be test drives and numerous workshops covering everything from infrastructure for cloud orchestration to the AI engine at the core of Behavox Quantum.

What’s in it for participants?

Behavox Quantum has been rigorously tested and benchmarked against existing solutions in the market, outperforming the competition by at least 3,000x using new AI risk policies. It provides a holistic security program to catch malicious, immoral, and illegal actors, eliminating fraud and protecting your digital headquarters.

Attendees at the July 19th conference will include C-suite executives from top global banks, financial institutions, and corporations, with many prospects and clients sending entire delegations to the conference. Justin Trudeau, Canadian Prime Minister, will give the commencement speech at the conference in recognition of the world-leading AI innovations coming out of Canada.

This is a unique opportunity to test drive the product and meet the team behind the innovations as well as network with top industry professionals. Register here for the Behavox AI in Compliance and Security Conference.

About Behavox Ltd.

Behavox provides a suite of security products that help compliance, HR, and security teams protect their company and colleagues from business risks.

Through AI-powered analysis of all corporate communications, including email, instant messaging, voice, and video conferencing platforms, Behavox helps organizations identify illegal, immoral, and malicious behavior in the workplace.

Founded in 2014, Behavox is headquartered in Montreal and has offices in New York City, London, Seattle, Singapore, and Tokyo.

More information about the company is available at https://www.behavox.com/.


What’s Your Future of Work Path With Artificial Intelligence? – CMSWire

What does the future of artificial intelligence in the workplace look like for employee experience?

Over the last few years, artificial intelligence (AI) has become a very significant part of business operations across all industries. It’s already making an impact on our daily lives, from appliances, voice assistants, search, surveillance, marketing, autonomous vehicles, video games, and TVs to large sporting events.

AI is the result of applying cognitive science techniques to emulate human intellect: artificially creating something that performs tasks once thought to require humans, like reasoning, natural communication and problem-solving. It does this by leveraging machine learning techniques, reading and analyzing large data sets to identify patterns, detect anomalies and make decisions with no human intervention.
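At its simplest, the anomaly-detection piece of that pipeline can be sketched with a plain z-score rule. This is a deliberately minimal illustration with hypothetical sensor readings, not any particular product's method:

```python
import statistics

def find_anomalies(values, z_cutoff=2.5):
    """Flag values more than z_cutoff standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > z_cutoff]

# Hypothetical readings: one obvious outlier among routine values.
readings = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 42.0, 10.0]
outliers = find_anomalies(readings)
```

Real workplace systems layer trained models on top of ideas like this, but the core notion, flagging observations that deviate sharply from the learned norm, is the same.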

In this ever-evolving market, AI has become crucial for businesses looking to scale up workplace infrastructure and improve employee experience. According to Precedence Research, the AI market is projected to reach around $1,597.1 billion by 2030, growing at a CAGR of 38.1% from 2022 to 2030.

Currently, AI is being used in the workplace to automate jobs that are repetitive or require a high degree of precision, like data entry or analysis. AI can also be used to make predictions about customer behavior or market trends.

In the future, AI is expected to increasingly be used to augment human workers, providing them with recommendations or suggestions based on the data that it has been programmed to analyze.

Today’s websites are capable of using AI to quickly detect potential customer intent in real time based on the online visitor’s interactions, and to show more engaging and personalized content to enhance the possibility of converting customers. As AI continues to develop, its capabilities in the workplace are expected to increase, making it an essential tool for businesses looking to stay ahead of the competition.

Kai-Fu Lee, a famous computer scientist, businessman and writer, said in a 2019 interview with CBS News that he believes 40% of the world’s jobs will be replaced by robots capable of automating tasks.

AI has the potential to replace many types of jobs that involve mechanical or structured tasks that are repetitive in nature. Some examples we are seeing now are robotic vehicles, drones, surgical devices, logistics, call centers, and administrative tasks like housekeeping, data entry and proofreading. Even armies of robots for security and defense are being discussed.

That said, AI is going to be a huge disruption worldwide over the next decade or so. Most innovations come from disruptions; take the COVID-19 pandemic as an example: it dramatically changed how we work now.

While AI takes some jobs, it also creates many opportunities. When it comes to strategic thinking, creativity, emotions and empathy, humans will always win over machines. This is a call to adapt to change and to grow the human factor in the workplace in every possible dimension. Nokia and BlackBerry mobile phones and Kodak cameras are living examples of failure to acknowledge digital disruption. Timely market research, using the right technology, and enabling the workforce to adapt to change can bring businesses success through digital transformation.

Related Article: What's Next for Artificial Intelligence in Customer Experience?

There will be changes in the traditional means of doing things, and more jobs will be generated. AI has the potential to revolutionize the workplace, transforming how we do everything from customer service to driving cars in busy places like downtown San Francisco. However, there are still several challenges that need to be overcome before AI can be widely implemented in the workplace.

One of the biggest challenges is developing algorithms that can reliably replicate human tasks. This is difficult because human tasks often involve common sense and reasoning, which are hard for computers to understand. We should also ensure that AI systems are fair and unbiased. This is important because AI systems are often used to make decisions about things like hiring and promotions, and if they are biased, this can lead to discrimination. We live in the world of diversity, equity, and inclusion (DEI), and mistakes with AI can be costly for businesses. It may take a very long time to develop a customer-centric model that is completely dependent on AI, one that is reliable and trustworthy.

The future of AI is hard to predict, but there are a few key trends that are likely to shape its development. The increasing availability of data will allow AI systems to become more accurate and efficient, and as businesses and individuals rely on AI more and more, a need for new types of AI applications means more work and jobs. As these trends continue, AI is likely to have a significant impact on the workforce. It can very well lead to the automation of many cognitive tasks, including those that are currently performed by human workers.

This could result in a reduction in the overall demand for labor as well as an increase in the need for workers with skills that complement the AI systems. AI is the future of work; there's no doubt about that, but how it will shape the future of human workforce remains to be seen.

Many are worried that AI will remove many jobs, while others see it as an opportunity to increase efficiency and accuracy in the workforce. No matter which side you're on, it's important to understand how AI is changing the way we work and what that means for the future.

Related Article: 8 Examples of Artificial Intelligence in the Workplace

Let's look at a few real-world examples that are already changing the way we work:

All of the above implementations look great. However, it is important to note that AI should be used as a supplement to human intelligence, not a replacement for it. When used properly, AI can help businesses thrive. The role of AI in the workplace is ever-evolving, and it will be interesting to see how businesses adopt these technologies and improve the overall work environment to provide the best employee experience.

An October 2020 Gallup poll found that 51% of workers are not engaged: they are psychologically unattached to their work and company.

Here are some employee experience aspects that AI could improve:

Employees need to know and trust that you have their best interests in mind. The value of AI in human resources is going to be critical to deliver employee experiences along with human connection and values.


How artificial intelligence is boosting crop yield to feed the world – Freethink

Over the last several decades, genetic research has seen incredible advances in gene sequencing technologies. In 2003, scientists completed the Human Genome Project, an ambitious effort to sequence the human genome that cost roughly $3 billion and took 13 years. Now, a person can get their genome sequenced for less than $1,000 and within about 24 hours.

Scientists capitalized on these advances by sequencing everything from the elusive giant squid to the Ethiopian eggplant. With this technology came promises of miraculous breakthroughs: all diseases would be cured and world hunger would be a thing of the past.

So, where are these miracles?


In 2015, a group of researchers founded Yield10 Bioscience, an agriculture biotech company that aimed to use artificial intelligence to start turning those promises into reality.

Two things drove the development of Yield10 Bioscience.

“One, obviously, [the need for] global food security: we need about 60 to 70% more food production by 2050,” explained Dr. Oliver Peoples, CEO of Yield10 Bioscience, in an interview with Freethink. “And then, of course, CRISPR.”

It turns out that having the tools to sequence DNA is only step one of manufacturing the miracles we were promised.

The second step is figuring out what a sequence of DNA actually does. In other words, it’s one thing to discover a gene, and it is another thing entirely to discover a gene’s role in a specific organism.

In order to do this, scientists manipulate the gene: delete it from an organism and see what functions are lost, or add it to an organism and see what is gained. During the early genetics revolution, although scientists had tools to easily and accurately sequence DNA, their tools to manipulate DNA were labor-intensive and cumbersome.


Around 2012, CRISPR technology burst onto the scene, and it changed everything. Scientists had been investigating CRISPR, a system that evolved in bacteria to fight off viruses, since the ’80s, but it took 30 years for them to finally understand how they could use it to edit genes in any organism.

Suddenly, scientists had a powerful tool that could easily manipulate genomes. Equipped with DNA sequencing and editing tools, scientists could complete studies that once took years or even decades in mere months.

Promises of miracles poured back in, with renewed vigor: CRISPR would eliminate genetic disorders and feed the world! But of course, there is yet another step: figuring out which genes to edit.

Over the last couple of decades, researchers have compiled databases of millions of genes. For example, GenBank, the National Institutes of Health’s (NIH) genetic sequence database, contains 38,086,233 genes, of which only tens of thousands have some functional information.

For example, ARGOS is a gene involved in plant growth. Consequently, it is a very well-studied gene. Scientists found that genetically engineering Arabidopsis, a fast-growing plant commonly used to study plant biology, to express lots of ARGOS made the plant grow faster.

Dozens of other plants have ARGOS (or at least genes very similar to it), such as pineapple, radish, and winter squash. Those plants, however, are hard to genetically manipulate compared to Arabidopsis. Thus, ARGOS’s function in crops in general hasn’t been as well studied.


CRISPR suddenly changed the landscape for small groups of researchers hoping to innovate in agriculture. It was an affordable technology that anyone could use but no one knew what to do with it. Even the largest research corporations in the world dont have the resources to test all the genes that have been identified.

“I think if you talk to all the big crop companies, they’ve all got big investments in CRISPR. And I think they’re all struggling with the same question, which is, ‘This is a great tool. What do I do with it?’” said Dr. Peoples.


The holy grail of crop science, according to Dr. Peoples, would be a tool that could identify “three or four genetic changes that would double crop production for whatever you’re growing.”

With CRISPR, those changes could be made right now. However, there needs to be a way to identify those changes, and that information is buried in the massive databases.

To develop the tool that can dig them out, Dr. Peoples team merged artificial intelligence with synthetic biology, a field of science that involves redesigning organisms to have useful new abilities, such as increasing crop yield or bioplastic production.

This union created Gene Ranking Artificial Intelligence Network (GRAIN), an algorithm that evaluates scientific databases like GenBank and identifies genes that act at a fundamental level in crop metabolism.

That “fundamental level” aspect is one of the keys to GRAIN’s long-term success. It identifies genes that are common across multiple crop types, so a powerful gene, once identified, can be applied to many crops.
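Yield10 has not published GRAIN's internals, but the cross-crop idea can be illustrated with a toy ranking: score each gene by how many distinct crop species carry a close homolog. Everything below is hypothetical except the ARGOS gene mentioned above:

```python
# Hypothetical homolog hits: (gene, crop species carrying a close match).
hits = [
    ("ARGOS", "pineapple"), ("ARGOS", "radish"), ("ARGOS", "winter squash"),
    ("GENE_X", "radish"), ("GENE_X", "camelina"),
    ("GENE_Y", "pineapple"),
    ("ARGOS", "camelina"),
]

def rank_by_breadth(hits):
    """Rank genes by the number of distinct crop species sharing them."""
    species = {}
    for gene, crop in hits:
        species.setdefault(gene, set()).add(crop)
    return sorted(species, key=lambda g: len(species[g]), reverse=True)

ranking = rank_by_breadth(hits)  # genes shared by the most crops come first
```

A production system would weigh far more evidence (sequence similarity, pathway position, experimental annotations), but breadth across species is one intuitive proxy for a gene acting at a fundamental level of crop metabolism.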

For example, using the GRAIN platform, Dr. Peoples and his team identified four genes that may significantly impact seed oil content in Camelina, a plant similar to rapeseed (true canola oil). When the researchers increased the activity of just one of those genes via CRISPR, the plants had a 10% increase in seed oil content.

It's not quite a miracle yet, but with more advances in gene editing and AI happening all the time, the promises of the genetic revolution are finally starting to pay off.

We'd love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at tips@freethink.com.

Read the original:

How artificial intelligence is boosting crop yield to feed the world - Freethink

Taking the guesswork out of dental care with artificial intelligence – MIT News

When you picture a hospital radiologist, you might think of a specialist who sits in a dark room and spends hours poring over X-rays to make diagnoses. Contrast that with your dentist, who in addition to interpreting X-rays must also perform surgery, manage staff, communicate with patients, and run their business. When dentists analyze X-rays, they do so in bright rooms and on computers that aren't specialized for radiology, often with the patient sitting right next to them.

Is it any wonder, then, that dentists given the same X-ray might propose different treatments?

"Dentists are doing a great job given all the things they have to deal with," says Wardah Inam SM '13, PhD '16.

Inam is the co-founder of Overjet, a company using artificial intelligence to analyze and annotate X-rays for dentists and insurance providers. Overjet seeks to take the subjectivity out of X-ray interpretations to improve patient care.

"It's about moving toward more precision medicine, where we have the right treatments at the right time," says Inam, who co-founded the company with Alexander Jelicich '13. "That's where technology can help. Once we quantify the disease, we can make it very easy to recommend the right treatment."

Overjet has been cleared by the Food and Drug Administration to detect and outline cavities and to quantify bone levels to aid in the diagnosis of periodontal disease, a common but preventable gum infection that causes the jawbone and other tissues supporting the teeth to deteriorate.

In addition to helping dentists detect and treat diseases, Overjet's software is also designed to help dentists show patients the problems they're seeing and explain why they're recommending certain treatments.

The company has already analyzed tens of millions of X-rays, is used by dental practices nationwide, and is currently working with insurance companies that represent more than 75 million patients in the U.S. Inam is hoping the data Overjet is analyzing can be used to further streamline operations while improving care for patients.

"Our mission at Overjet is to improve oral health by creating a future that is clinically precise, efficient, and patient-centric," says Inam.

It's been a whirlwind journey for Inam, who knew nothing about the dental industry until a bad experience piqued her interest in 2018.

Getting to the root of the problem

Inam came to MIT in 2010, first for her master's and then her PhD in electrical engineering and computer science, and says she caught the bug for entrepreneurship early on.

"For me, MIT was a sandbox where you could learn different things and find out what you like and what you don't like," Inam says. "Plus, if you are curious about a problem, you can really dive into it."

While taking entrepreneurship classes at the Sloan School of Management, Inam eventually started a number of new ventures with classmates.

"I didn't know I wanted to start a company when I came to MIT," Inam says. "I knew I wanted to solve important problems. I went through this journey of deciding between academia and industry, but I like to see things happen faster and I like to make an impact in my lifetime, and that's what drew me to entrepreneurship."

During her postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL), Inam and a group of researchers applied machine learning to wireless signals to create biomedical sensors that could track a person's movements, detect falls, and monitor respiratory rate.

She didn't get interested in dentistry until after leaving MIT, when she changed dentists and received an entirely new treatment plan. Confused by the change, she requested her X-rays and asked other dentists to take a look, only to receive yet another variation in diagnosis and treatment recommendations.

At that point, Inam decided to dive into dentistry for herself, reading books on the subject, watching YouTube videos, and eventually interviewing dentists. Before she knew it, she was spending more time learning about dentistry than she was at her job.

The same week Inam quit her job, she learned about MIT's Hacking Medicine competition and decided to participate. That's where she started building her team and making connections. Overjet's first funding came from the Media Lab-affiliated investment group, the E14 Fund.

"The E14 Fund wrote the first check, and I don't think we would've existed if it wasn't for them taking a chance on us," she says.

Inam learned that a big reason for variation in treatment recommendations among dentists is the sheer number of potential treatment options for each disease. A cavity, for instance, can be treated with a filling, a crown, a root canal, a bridge, and more.

When it comes to periodontal disease, dentists must make millimeter-level assessments to determine disease severity and progression. The extent and progression of the disease determines the best treatment.

"I felt technology could play a big role not only in enhancing the diagnosis but also in communicating with patients more effectively, so they understand and don't have to go through the confusing process I did of wondering who's right," Inam says.

Overjet began as a tool to help insurance companies streamline dental claims before the company began integrating its tool directly into dentists' offices. Every day, some of the largest dental organizations nationwide are using Overjet, including Guardian Insurance, Delta Dental, Dental Care Alliance, and Jefferson Dental and Orthodontics.

Today, as a dental X-ray is imported into a computer, Overjet's software analyzes and annotates the images automatically. By the time the image appears on the computer screen, it has information on the type of X-ray taken, how a tooth may be impacted, the exact level of bone loss with color overlays, the location and severity of cavities, and more.

The analysis gives dentists more information to talk to patients about treatment options.

"Now the dentist or hygienist just has to synthesize that information, and they use the software to communicate with you," Inam says. "So they'll show you the X-rays with Overjet's annotations and say, 'You have 4 millimeters of bone loss, it's in red, that's higher than the 3 millimeters you had last time you came, so I'm recommending this treatment.'"

Overjet also incorporates historical information about each patient, tracking bone loss on every tooth and helping dentists detect cases where disease is progressing more quickly.

"We've seen cases where a cancer patient with dry mouth goes from nothing to something extremely bad in six months between visits, so those patients should probably come to the dentist more often," Inam says. "It's all about using data to change how we practice care, think about plans, and offer services to different types of patients."

The operating system of dentistry

Overjet's FDA clearances cover two highly prevalent diseases. They also put the company in a position to conduct industry-level analysis and help dental practices compare themselves to peers.

"We use the same tech to help practices understand clinical performance and improve operations," Inam says. "We can look at every patient at every practice and identify how practices can use the software to improve the care they're providing."

Moving forward, Inam sees Overjet playing an integral role in virtually every aspect of dental operations.

"These radiographs have been digitized for a while, but they've never been utilized because the computers couldn't read them," Inam says. "Overjet is turning unstructured data into data that we can analyze. Right now, we're building the basic infrastructure. Eventually we want to grow the platform to improve any service the practice can provide, basically becoming the operating system of the practice to help providers do their job more effectively."

View original post here:

Taking the guesswork out of dental care with artificial intelligence - MIT News

IT, Computing and Communications (ITCC) Technology Innovations/Growth Opportunities Report 2022 with Focus on Cloud, Artificial Intelligence, and Edge…

DUBLIN--(BUSINESS WIRE)--The "Growth Opportunities in Cloud, Artificial Intelligence, and Edge Computing" report has been added to ResearchAndMarkets.com's offering.

This edition of IT, Computing and Communications (ITCC) Technology Opportunity Engine (TOE) provides a snapshot of the emerging ICT led innovations in Cloud, Artificial Intelligence and Edge Computing.

This issue focuses on the application of information and communication technologies in alleviating the challenges faced across industry sectors in areas such as retail, industrial, BFSI, and automotive.

ITCC TOE's mission is to investigate emerging wireless communication and computing technology areas including 3G, 4G, Wi-Fi, Bluetooth, Big Data, cloud computing, augmented reality, virtual reality, artificial intelligence, virtualization and the Internet of Things and their new applications; unearth new products and service offerings; highlight trends in the wireless networking, data management and computing spaces; provide updates on technology funding; evaluate intellectual property; follow technology transfer and solution deployment/integration; track development of standards and software; and report on legislative and policy issues and many more.

Innovations in Cloud, Artificial Intelligence, and Edge Computing

For more information about this report visit https://www.researchandmarkets.com/r/37mrl6

Go here to see the original:

IT, Computing and Communications (ITCC) Technology Innovations/Growth Opportunities Report 2022 with Focus on Cloud, Artificial Intelligence, and Edge...

Artificial Intelligence in Education Market Size, Scope and Forecast | Google Inc., Microsoft Corporation, eGain Corporation, QlikTech International…

New Jersey, United States: This Artificial Intelligence in Education market research examines the state and future prospects of the Artificial Intelligence in Education market from the perspectives of competitors, regions, products, and end applications/industries. The worldwide Artificial Intelligence in Education market is segmented by product and application/end industry in this analysis, which also examines the various players in the global and key regional markets.

The analysis of the Artificial Intelligence in Education market is included in this report in its entirety. In-depth secondary research, primary interviews, and internal expert reviews went into the report's market estimates. These estimates were developed by researching the effects of various social, political, and economic factors, as well as current market dynamics, on the growth of the Artificial Intelligence in Education market.

Get Full PDF Sample Copy of Report (Including Full TOC, List of Tables & Figures, Chart) @ https://www.verifiedmarketresearch.com/download-sample/?rid=29891

Key Players Mentioned in the Artificial Intelligence in Education Market Research Report:

Google Inc., Microsoft Corporation, eGain Corporation, QlikTech International AB, Cognii, Next IT Corporation, Nuance Communications Inc., Quantum Adaptive Learning LLC, IntelliResponse Systems Inc., IBM Corporation.

The Porter's Five Forces analysis, which covers customers' bargaining power, distributors' bargaining power, the threat of new entrants, the threat of substitute products, and the degree of competition in the Artificial Intelligence in Education market, is included in the report along with the market overview, which covers the market dynamics. It describes the different players who make up the market ecosystem, including system integrators, middlemen, and end users. The competitive environment of the Artificial Intelligence in Education market is another major topic of the report. For enhanced decision-making, the research also provides in-depth details regarding the COVID-19 scenario and its influence on the market.

Artificial Intelligence in Education Market Segmentation:

Global Artificial Intelligence in Education Market, By Educational Model

Domain Model
Learner Model
Pedagogical Model

Global Artificial Intelligence in Education Market, By Application

Content Delivery Systems
Intelligent Tutoring Systems
Interactive Websites
Learning Platform and Virtual Facilitators
Smart Content

Global Artificial Intelligence in Education Market, By End-User

Higher Education
Primary and Secondary Education

Inquire for a Discount on this Premium Report@ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=29891

Artificial Intelligence in Education Market Report Scope

Key questions answered in the report:

1. Which are the five top players of the Artificial Intelligence in Education market?

2. How will the Artificial Intelligence in Education market change in the next five years?

3. Which product and application will take the lion's share of the Artificial Intelligence in Education market?

4. What are the drivers and restraints of the Artificial Intelligence in Education market?

5. Which regional market will show the highest growth?

6. What will be the CAGR and size of the Artificial Intelligence in Education market throughout the forecast period?

For More Information or Query or Customization Before Buying, Visit @ https://www.verifiedmarketresearch.com/product/artificial-intelligence-in-education-market/

Visualize the Artificial Intelligence in Education Market using Verified Market Intelligence:

Verified Market Intelligence is our BI-enabled platform for narrative storytelling of this market. VMI offers in-depth forecasted trends and accurate insights on 20,000+ emerging and niche markets, helping you make critical revenue-impacting decisions for a brilliant future.

VMI provides a holistic overview and global competitive landscape of your market with respect to region, country, segment, and key players. Present your market report and findings with the built-in presentation feature, saving over 70% of your time and resources for investor, sales and marketing, R&D, and product development pitches. VMI enables data delivery in Excel and interactive PDF formats with 15+ key market indicators for your market.

Visualize Artificial Intelligence in Education Market using VMI @ https://www.verifiedmarketresearch.com/vmintelligence/

About Us: Verified Market Research

Verified Market Research is a leading global research and consulting firm that has been providing advanced analytical research solutions, custom consulting, and in-depth data analysis for 10+ years to individuals and companies alike that are looking for accurate, reliable, and up-to-date research data and technical consulting. We offer insights into strategic and growth analyses, the data necessary to achieve corporate goals, and help in making critical revenue decisions.

Our research studies help our clients make superior data-driven decisions, understand market forecasts, capitalize on future opportunities, and optimize efficiency by working as their partner to deliver accurate and valuable information. The industries we cover span a large spectrum, including Technology, Chemicals, Manufacturing, Energy, Food and Beverages, Automotive, Robotics, Packaging, Construction, and Mining & Gas, among others.

We, at Verified Market Research, assist in understanding holistic market indicating factors and most current and future market trends. Our analysts, with their high expertise in data gathering and governance, utilize industry techniques to collate and examine data at all stages. They are trained to combine modern data collection techniques, superior research methodology, subject expertise and years of collective experience to produce informative and accurate research.

Having serviced 5,000+ clients, we have provided reliable market research services to more than 100 Global Fortune 500 companies such as Amazon, Dell, IBM, Shell, Exxon Mobil, General Electric, Siemens, Microsoft, Sony, and Hitachi. We have co-consulted with some of the world's leading consulting firms, such as McKinsey & Company, Boston Consulting Group, and Bain and Company, on custom research and consulting projects for businesses worldwide.

Contact us:

Mr. Edwyne Fernandes

Verified Market Research

US: +1 (650)-781-4080
UK: +44 (753)-715-0008
APAC: +61 (488)-85-9400
US Toll-Free: +1 (800)-782-1768

Email: sales@verifiedmarketresearch.com

Website: https://www.verifiedmarketresearch.com/

See the original post here:

Artificial Intelligence in Education Market Size, Scope and Forecast | Google Inc., Microsoft Corporation, eGain Corporation, QlikTech International...

Artificial Intelligence in Medical Imaging Market Analysis by Trends, Demand, Products and Technology Forecast to 2028 Designer Women – Designer…

Actionable insights and market data provided in the high-quality Artificial Intelligence in Medical Imaging Market Report, prepared by the astute and authoritative DBMR team, help build a business growth strategy. The document focuses on smaller, singular topics, issues, or populations, rather than an overall market sample. This industry analysis document sheds light on finer details about the exact company. The marketing report contains valuable information about the buyer personas, target audience, and customers of the business to determine the viability and success of the product or service. An international healthcare report provides an in-depth understanding of who the buyers are, the specific market, and what influences the purchasing decisions and behavior of members of the target audience.

A better understanding of the market, gained through the use of a world-class healthcare report, will help in developing products and advertising campaigns that more specifically address the target market. The market research report not only saves time and money but also reduces business risks. This industry report will be very useful for advancing the company's industry knowledge, creating new advertising and marketing campaigns, and identifying the demographic needs to target, whether companies are researching new product trends or analyzing the competition in an existing or emerging market.

Get Sample PDF of the Report https://www.databridgemarketresearch.com/request-a-sample/?dbmr=global-artificial-intelligence-in-medical-imaging-market&rajaas

The growing capability and realization of personalized treatment and the improvement of procedures and patient care are the vital factors driving the escalating growth of the market. An increase in the number of diagnostic procedures, rising disease prevalence, increasingly favorable reimbursement policies, the growing presence of key players and favorable government regulations, rapidly changing healthcare infrastructure in Asian countries such as China, Indonesia, and India, and the rising prevalence of lifestyle-related chronic diseases such as cancer and cardiovascular diseases are among the other major factors driving the artificial intelligence in medical imaging market. Furthermore, the growing healthcare industry and emerging markets with a growing geriatric population base will create new opportunities for the artificial intelligence in medical imaging market during the forecast period 2021-2028.

Artificial Intelligence in Medical Imaging Market Scope and Market Size

The Artificial Intelligence in Medical Imaging market is segmented on the basis of technology, offering, type of deployment, application, clinical application, and end user. The growth among these segments will help you analyze low-growth segments within the industries and provide users with valuable market insights to help them make strategic decisions when identifying major market applications.

To get more insights into Market Analysis, browse Research Report Summary @ https://www.databridgemarketresearch.com/reports/global-artificial-intelligence-in-medical-imaging-market?rajaas

Country Level Analysis of Artificial Intelligence in Medical Imaging Market

The Artificial Intelligence in Medical Imaging market is analyzed, and market size insights and trends are provided, by country, technology, offering, deployment type, application, clinical application, and end user as listed above. The countries covered in the Artificial Intelligence in Medical Imaging market report are the US, Canada, and Mexico in North America; Germany, France, the UK, the Netherlands, Switzerland, Belgium, Russia, Italy, Spain, Turkey, and the rest of Europe in Europe; China, Japan, India, South Korea, Singapore, Malaysia, Australia, Thailand, Indonesia, the Philippines, and the rest of Asia-Pacific (APAC) in Asia-Pacific (APAC); Saudi Arabia, the United Arab Emirates, South Africa, Egypt, Israel, and the rest of the Middle East and Africa (MEA) in the Middle East and Africa (MEA); and Brazil, Argentina, and the rest of South America in South America.

North America and Europe dominate the artificial intelligence in medical imaging market owing to technologically advanced healthcare infrastructure and high disposable income in these regions. Asia-Pacific is the region expected to show the strongest growth in the artificial intelligence in medical imaging market, owing to rapidly changing healthcare infrastructure in countries such as China, Indonesia, and India.

The country section of the Artificial Intelligence in Medical Imaging market report also provides individual market-impacting factors and regulatory changes in each country's market that affect current and future trends. Data points such as consumption volumes, production sites and volumes, import and export analysis, price trend analysis, raw material costs, and downstream and upstream value chain analysis are some of the major indicators used to forecast the market scenario for each country. The analysis also considers the presence and availability of global brands and the challenges they face due to significant or scarce competition from local and national brands.

Competitive Landscape and Market Share Analysis of Artificial Intelligence in Medical Imaging

The Artificial Intelligence in Medical Imaging market competitive landscape provides details by competitor. Details included are company overview, company financials, revenue generated, market potential, research and development investment, new market initiatives, global presence, locations and production facilities, production capacities, company strengths and weaknesses, product launches, product width and breadth, and application dominance. The data points provided above relate only to each company's focus on artificial intelligence in the medical imaging market.

The key players covered in the Artificial Intelligence in Medical Imaging market report are BenevolentAI, OrCam, Babylon, Freenome Inc., Clarify Health Solutions, BioXcel Therapeutics, Ada Health GmbH, GNS Healthcare, Zebra Medical Vision Inc., Qventus Inc., IDx Technologies Inc., K Health, Prognos, Medopad Ltd., Viz.ai Inc., Voxel Technology, Renalytix AI plc, Beijing Pushing Technology Co. Ltd., PAIGE, mPulse Mobile, Suki AI Inc., BERG LLC, Zealth Inc., OWKIN INC., and Your.MD Ltd. (UK), among other national and global players. Market share data is available separately for Global, North America, Europe, Asia-Pacific (APAC), Middle East and Africa (MEA), and South America.

Browse Complete TOC at https://www.databridgemarketresearch.com/toc/?dbmr=global-artificial-intelligence-in-medical-imaging-market&rajaas

Related Reports:

Global Immuno-Oncology Clinical Trials Market Industry Trends and Forecast to 2029- https://www.databridgemarketresearch.com/reports/global-immuno-oncology-clinical-trials-market

Global Clinical Laboratory Services Market Industry Trends and Forecast to 2029- https://www.databridgemarketresearch.com/reports/global-clinical-laboratory-services-market

Asia-Pacific Clinical Laboratory Services Market Industry Trends and Forecast to 2028- https://www.databridgemarketresearch.com/reports/asia-pacific-clinical-laboratory-services-market

Middle East and Africa Clinical Laboratory Services Market Industry Trends and Forecast to 2028- https://www.databridgemarketresearch.com/reports/middle-east-and-africa-clinical-laboratory-services-market

About Data Bridge Market Research

An absolute way to forecast what the future holds is to comprehend the trend today!

Data Bridge has set itself forth as an unconventional and neoteric market research and consulting firm with an unparalleled level of resilience and integrated approaches. We are determined to unearth the best market opportunities and foster efficient information for your business to thrive in the market. Data Bridge endeavors to provide appropriate solutions to complex business challenges and initiates an effortless decision-making process.

Contact Us:

Data Bridge Market Research
US: +1 888 387 2818
UK: +44 208 089 1725
Hong Kong: +852 8192 7475
E-Mail: corporatesales@databridgemarketresearch.com

Read more here:

Artificial Intelligence in Medical Imaging Market Analysis by Trends, Demand, Products and Technology Forecast to 2028 Designer Women - Designer...

Deep Learning AI Needs Tools To Adapt To Changes In The Data Environment – Forbes


In the continuing theme of higher-level tools to improve the development of useful applications, today we'll visit feature engineering in a changing environment. Artificial intelligence (AI) is increasingly used to analyze data, and deep learning (DL) is one of the more complex aspects of AI. In multiple forums, I've discussed the need to move past heavy reliance not just on pure coding, but even past the basic frameworks discussed by DL programmers. One of the keys to the complexity is figuring out the right data attributes, or features, which matter to any system. It's even more important in DL, both because of larger data sets and because of the less transparent nature of the inference engine compared with procedural code. As tricky as that is the first time, it needs to be a repeatable process, as environments change and systems must change with them.

Defining the initial feature set is important, but it's not the end of the game. While many people focus on DL's ability to change results based on more data, that still means the use of the same features. In radiology, for instance, the features are fairly well known. It's gaining more examples for training that matters, to see the variation in how those features appear. However, what if there's a new tumor? There might be a new feature that needs to be added to the mix. With supervised systems, that's easy to modify because you can provide labeled images with the features and the system can be retrained.

However, what about consumer taste? Features are defined, then the deep learning system looks for relationships between the different defined features and provides analysis. However, fashion changes over time. Imagine, for instance, a system defined when all pants had pleats. The question of whether or not pants should have pleats isn't an issue, so the designers did not train the system to analyze the existence of pleats. While the feature might be defined in the full data set, for performance reasons the feature was not engineered into the engine.

Suddenly, there's a change. People start buying pants without pleats. That becomes something that consumers want. While that might be in the full dataset, the inference engine is not evaluating that variable because it is not a defined feature. The environment has changed. How can that be recognized, and the DL system changed?

SparkBeyond is a company working to address this problem. While the product helps with initial feature engineering, its key advantage is that it supports DevOps and other processes that keep DL-driven applications current in changing environments.

What the company's platform does is analyze the base data being used by the DL systems. It is not AI itself, but leverages random forests (RF), a technique for running multiple tests with different parameters. It is helped by advances in cloud technologies and the ability to scale out to multiple servers. Large numbers of decision trees can be analyzed, and new patterns can be seen. RF is one of the ways machine learning has moved past a pure AI definition, as it can create insight far faster than other methods, identifying new classifications and relationships in large data sets.
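The random-forest idea can be sketched in miniature. This is a toy illustration under invented assumptions, not SparkBeyond's actual method: it fits many one-split "stump" models on bootstrap samples of the raw data and counts which candidate feature wins most often, flagging attributes (like the hypothetical pleats flag) worth engineering into the deep learning model.

```python
import random

# Toy random-forest-style feature search: fit many one-split "stumps" on
# bootstrap samples and count which candidate feature separates the target
# best most often. All data and feature names below are hypothetical.

def stump_accuracy(rows, feat):
    """Best accuracy achievable by thresholding a single feature."""
    best = 0.0
    for threshold in sorted({r[feat] for r in rows}):
        preds = [1 if r[feat] > threshold else 0 for r in rows]
        acc = sum(p == r["bought"] for p, r in zip(preds, rows)) / len(rows)
        best = max(best, acc, 1 - acc)  # either side of the split may predict
    return best

def rank_features(rows, features, n_trees=200, seed=0):
    """Count how often each feature wins across bootstrap resamples."""
    rng = random.Random(seed)
    wins = {f: 0 for f in features}
    for _ in range(n_trees):
        sample = [rng.choice(rows) for _ in rows]  # bootstrap sample
        winner = max(features, key=lambda f: stump_accuracy(sample, f))
        wins[winner] += 1
    return sorted(wins.items(), key=lambda kv: -kv[1])

# Taste has shifted: purchases now track the *absence* of pleats,
# while price is pure noise.
data_rng = random.Random(1)
pleat_flags = [data_rng.randint(0, 1) for _ in range(200)]
rows = [{"pleats": p, "price": data_rng.randint(30, 90), "bought": 1 - p}
        for p in pleat_flags]

ranking = rank_features(rows, ["pleats", "price"])
print(ranking)  # "pleats" wins nearly every round, flagging it for the model
```

A production platform would search a far richer hypothesis space (feature interactions, transformations, time windows), but the scale-out pattern is the same: many cheap randomized models run in parallel, and the features that keep winning are promoted.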

The complexities of consumer behavior, and of financial and other markets, are far greater than those of pleats versus no pleats, so it's important to recognize and adapt to change as fast as possible. "Changing environments are critical to analysis," said Mike Sterling, Director of Impact Management, SparkBeyond. "Generating large volumes of hypotheses and models, and then testing them, is critical to identifying those changes in order to adapt deep learning systems to remain accurate in those environments."

Artificial intelligence does not exist on its own. It is a technology that fits into a larger solution to address a business issue. No market is stagnant while remaining relevant. How and when to update deep learning systems, as they are used in more and more places, is important. The ability to analyze the data sets is critical, both for initial feature engineering and as an ongoing process to keep the systems relevant and accurate.

I see this as one feature, if you will, of what will eventually become development suites similar to 4GL development in the '90s. It will take a few more years, but this step toward incorporating more tools into the deep learning environment is a welcome one.

More here:

Deep Learning AI Needs Tools To Adapt To Changes In The Data Environment - Forbes

What Opportunities are Appearing Thanks to AI, Artificial Intelligence? – We Heart

The AI sector is booming. Thanks to several leaps that have been made, we are closer than ever before to developing an AI that acts and reacts as a real human would do. Opportunities in this sector are flourishing, and there is always a way for you to get involved.


Employees: If you are searching for a job in the tech sector, one of the most rewarding you could find is working with AI. It is a mistake to assume that all AI development is focussed on developing android technologies. There are many other applications for AI and each one needs experts at the helm to help bring it to fruition.

Whether you are a graduate, or you are looking for a change in careers, there is always a job opening that you could look into. Even if you don't have a background in this tech, there are many other ways you could get involved, whether you are working on an AI's cognitive abilities or even just testing out the product. Whatever your background and skillset might be, there is always a way for you to get involved.

Investors: AI development is incredibly costly. Many of the smaller developers may have a great idea that could be world-changing if they could bring it to fruition; however, they often lack the finances to do so. This is where investors can come in.

Investors like Tej Kohli, James Wise, or Jonathan Goodwin may have little expertise in these areas from their own personal experience, but they know how to recognise a viable idea when presented with one. Whether you are looking to get into venture investment yourself or you are a tech company looking for financial backing, their activities should give you some idea about the paths you need to follow.


Consumers: The world of AI isn't just open to investors and tech gurus. There is now a vast range of AI-driven tech emerging onto the market. You, as a consumer, get to be an instrumental part of driving this new tech forward, as it means that the developers gain some insight into which features are popular and which aren't.

Just look at the boom in home assistants that has erupted in the past few years. We are now able to live in fully functioning smart homes with music playing and lights turning off with a simple voice command. By exploring what AI has to offer through the role of the consumer, this all feeds back to the developers and helps them create the next generation of products.

No matter how interested you are in this sector, there is always going to be something you can pursue that will help to develop AI overall. This is an incredibly exciting era to live in, and AI is just one of the pieces of tech that could transform the world as we know it. Take a look at some of the roles and opportunities and see where you could jump in today.

Read the rest here:

What Opportunities are Appearing Thanks to AI, Artificial Intelligence? - We Heart

IoT And AI: Improving Customer Satisfaction – Forbes


True, the Internet of Things (IoT) and artificial intelligence (AI) hold huge promise in helping us better engage and satisfy our customers. But that promise still depends heavily on our ability to process and act on the data we're gathering in a way ...


When will AI be ready to really understand a conversation? – Fast Company

Imagine holding a meeting about a new product release, after which AI analyzes the discussion and creates a personalized list of action items for each participant. Or talking with your doctor about a diagnosis and then having an algorithm deliver a summary of your treatment plan based on the conversation. Tools like these can be a big boost given that people typically recall less than 20% of the ideas presented in a conversation just five minutes later. In healthcare, for instance, research shows that patients forget between 40% and 80% of what their doctors tell them very shortly after a visit.

You might think that AI is ready to step into the role of serving as secretary for your next important meeting. After all, Alexa, Siri, and other voice assistants can already schedule meetings, respond to requests, and set up reminders. Impressive as today's voice assistants and speech recognition software might be, however, developing AI that can track discussions between multiple people and understand their content and meaning presents a whole new level of challenge.

Free-flowing conversations involving multiple people are much messier than a command from a single person spoken directly to a voice assistant. In a conversation with Alexa, there is usually only one speaker for the AI to track, and it receives instant feedback when it interprets something incorrectly. In natural human conversations, different accents, interruptions, overlapping speech, false starts, and filler words like "umm" and "okay" all make it harder for an algorithm to track the discussion correctly. These human speech habits and our tendency to bounce from topic to topic also make it significantly more difficult for an AI to understand the conversation and summarize it appropriately.

Say a meeting progresses from discussing a product launch to debating project roles, with an interlude about the meeting snacks provided by a restaurant that recently opened nearby. An AI must follow the wide-ranging conversation, accurately segment it into different topics, pick out the speech that's relevant to each of those topics, and understand what it all means. Otherwise, "Visit the restaurant next door" might be the first item in your post-meeting to-do list.
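The segmentation step described above can be sketched in a few lines. This is a toy illustration (the similarity measure, threshold, and sample utterances are all invented here), not how any production system actually works:

```python
# Toy topic segmentation: start a new segment wherever adjacent
# utterances share few words. Threshold and inputs are invented.
from collections import Counter

def jaccard(a: Counter, b: Counter) -> float:
    """Jaccard similarity between two bags of words."""
    if not a or not b:
        return 0.0
    return sum((a & b).values()) / sum((a | b).values())

def segment(utterances, threshold=0.1):
    """Split a transcript into topic segments at low-similarity boundaries."""
    segments, current = [], [utterances[0]]
    for prev, cur in zip(utterances, utterances[1:]):
        sim = jaccard(Counter(prev.lower().split()),
                      Counter(cur.lower().split()))
        if sim < threshold:          # topic shift detected
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments
```

Real systems lean on learned embeddings rather than word overlap, but the shape of the problem, finding boundaries where the discussion drifts, is the same.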

Another challenge is that even the best AI we currently have isn't particularly good at handling jargon, industry-speak, or context-specific terminology. At Abridge, a company I cofounded that uses AI to help patients follow through on conversations with their doctors, we've seen out-of-the-box speech-to-text algorithms make transcription mistakes such as substituting "tastemaker" for "pacemaker" or "Asian populations" for "atrial fibrillation." We found that providing the AI with information about a conversation's topic and context can help. In transcribing conversations with a cardiologist, for example, medical terms like "pacemaker" are assumed to be the go-to.
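One simple way to picture this kind of context biasing is rescoring a recognizer's candidate transcriptions against a domain lexicon. The hypotheses, scores, lexicon, and bonus weight below are all invented for illustration; real systems typically bias inside the decoder rather than as a post-processing step:

```python
# Illustrative context biasing: rescore candidate transcriptions so
# that hypotheses containing known domain terms are preferred. All
# names, scores, and the bonus weight are invented for this sketch.
CARDIOLOGY_LEXICON = {"pacemaker", "stent", "atrial", "fibrillation"}

def rescore(hypotheses, lexicon, bonus=2.0):
    """Return the hypothesis text with the best lexicon-biased score.

    hypotheses: list of (text, acoustic_score) pairs; higher is better.
    """
    def biased_score(item):
        text, score = item
        hits = sum(1 for word in text.lower().split() if word in lexicon)
        return score + bonus * hits  # reward in-domain vocabulary
    return max(hypotheses, key=biased_score)[0]
```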

The structure of a conversation is also influenced by the relationship between participants. In a doctor-patient interaction, the discussion usually follows a specific template: the doctor asks questions, the patient shares their symptoms, then the doctor issues a diagnosis and treatment plan. Similarly, a customer service chat or a job interview follows a common structure and involves speakers with very different roles in the conversation. We've found that providing an algorithm with information about the speakers' roles and the typical trajectory of a conversation can help it better extract information from the discussion.

Finally, it's critical that any AI designed to understand human conversations represents the speakers fairly, especially given that the participants may have their own implicit biases. In the workplace, for instance, AI must account for the fact that there are often power imbalances between the speakers in a conversation that fall along lines of gender and race. At Abridge, we evaluated one of our AI systems across different sociodemographic groups and discovered that the system's performance depends heavily on the language used in the conversations, which varies across groups.

While today's AI is still learning to understand human conversations, there are several companies working on this problem. At Abridge, we are currently building AI that can transcribe, analyze, and summarize discussions between doctors and patients to help patients better manage their health and ultimately improve health outcomes. Microsoft recently made a big bet in this space by acquiring Nuance, a company that uses AI to help doctors transcribe medical notes, for $16 billion. Google and Amazon have also been building tools for medical conversation transcription and analysis, suggesting that this market is going to see more activity in the near future.

Giving AI a seat at the table in meetings and customer interactions could dramatically improve productivity at companies around the world. Otter.ai is using AI's language capabilities to transcribe and annotate meetings, something that will be increasingly valuable as remote work continues to grow. Chorus is building algorithms that can analyze how conversations with customers and clients drive companies' performance and make recommendations for improving interactions with customers.

Looking to the future, AI that can understand human conversations could lay the groundwork for applications with enormous societal benefits. Real-time, accurate transcription and summarization of ideas could make global companies more productive. At an individual level, having AI that can serve as your own personal secretary can help each of us focus on being present for the conversations we're having without worrying about note taking or something important slipping through the cracks. Down the line, AI that can not only document human conversations but also engage in them could revolutionize education, elder care, retail, and a host of other services.

The ability to fully understand human conversations lies just beyond the bounds of today's AI, even though most humans are able to more or less master it before middle school. However, the technology is progressing rapidly, and algorithms are increasingly able to transcribe, analyze, and even summarize our discussions. It won't be long before you find a voice assistant at your next business meeting or doctor's appointment ready to share a summary of what was discussed and a list of next steps as soon as you walk out the door.

Sandeep Konam is a machine learning expert who trained in robotics at Carnegie Mellon University and has worked on numerous projects at the intersection of AI and healthcare. He is the cofounder and CTO of Abridge, a company that uses AI to help patients stay on top of their health.


A beginner's guide to the AI apocalypse: Artificial stupidity – The Next Web

Welcome to the latest article in TNW's guide to the AI apocalypse. In this series we'll examine some of the most popular doomsday scenarios prognosticated by modern AI experts.

In this edition we're going to flip the script and talk about something that might just save us from being destroyed by our robot overlords on September 23, 2029 (random date, but if it actually happens your mind is going to be blown), and that is: artificial stupidity.

But first, a few words about humans.

You won't find any comprehensive data on the subject outside of the testimonials at the Darwin Awards, but stupidity is surely the biggest threat to humans throughout all of history.

Luckily we're still the smartest species on the planet, so we've managed to remain in charge for a long time despite our shortcomings. Unfortunately, a new challenger has entered the arena in the form of AI. And despite its relative infancy, artificial intelligence isn't as far from challenging our status as the apex intellects as you might think.

The experts will tell you that we're really far away from human-level AI (HLAI). But maybe that's because nobody's quite sure what the benchmark for that would be. What should a human be able to do? Can you play the guitar? I can. Can you play the piano? I can't.

Sure, you can argue that a human-level AI should be able to learn to play the guitar or the piano, just like a human can; many people play both. But the point is that measuring human ability isn't a cut-and-dried endeavor.

Computer scientist Roman Yampolskiy, of the University of Louisville, recently published a paper discussing this exact concept. He writes:

Imagine that tomorrow a prominent technology company announces that they have successfully created an Artificial Intelligence (AI) and offers for you to test it out.

You decide to start by testing developed AI for some very basic abilities such as multiplying 317 by 913, and memorizing your phone number. To your surprise, the system fails on both tasks.

When you question the system's creators, you are told that their AI is human-level artificial intelligence (HLAI), and as most people cannot perform those tasks, neither can their AI. In fact, you are told, many people can't even compute 13 x 17, or remember the name of a person they just met, or recognize their coworker outside of the office, or name what they had for breakfast last Tuesday.

The list of such limitations is quite significant and is the subject of study in the field of Artificial Stupidity.

Trying to define what HLAI should and shouldn't be able to do is just as difficult as trying to define the same for an 18-year-old human. Change a tire? Run a business? Win at Jeopardy?

This line of reasoning usually swings the conversation to narrow intelligence versus general intelligence. But here we run into a problem as well. General AI is, hypothetically, a machine capable of learning any function in any domain that a human can. That means a single GAI should be capable of replacing any human in the entire world given proper training.

Humans don't work that way, however. There's no general human intelligence. The combined potential for human function is not achievable by an individual. If we build a machine capable of replacing any of us, it stands to reason it will.

And that's cause for concern. We don't consider which ants are most talented when we wreck an anthill to build a softball field, so why should our intellectual superiors?

The good news is that most serious AI experts don't think GAI will happen anytime soon, so the most we'll have to deal with is whatever fuzzy definition of HLAI the person or company who claims it comes up with. Much like Google decided it had achieved quantum supremacy by coming up with an arbitrary (and disputed) benchmark, it'll surprise nobody in the industry if, for example, the AI crew at Facebook determines that a specific translation algorithm they've invented meets their self-imposed criteria for HLAI (or something like that). Maybe it'll be Amazon or OpenAI.

The bad news is that you also won't find many reputable scientists willing to rule GAI out. And that means we could be a "eureka!" or two away from someone like Ian Goodfellow oopsing up an algorithm that ties general intelligence to hardware. And when that happens, we could be looking at Bostrom's Paperclip Maximizer in full effect. In other words: the robots won't kill us out of spite, they'll just forget we exist and transform the world and its habitats to suit their needs, just as we did.

That's one theory, anyway. And, as with any potential extinction scenario, it's important to have a plan to stop it. Based on the fact that we can't know exactly what's going to happen once a superintelligent artificial being emerges, we should probably just start hard-coding artificial stupidity into the mix.

The right dose of unwavering limitations (think Asimov's Laws of Robotics, but more specific to the number of parameters or amount of compute a specific model can use and what level of network integration can exist between disparate systems) could spell the difference between our existence and extinction.

So, rather than attempting to program advanced AI with a philosophical view on the sanctity of human life and what constitutes the greater good, we should just hamstring them with artificial stupidity from the start.

Published July 17, 2020 19:55 UTC


An AI algorithm inspired by how kids learn is harder to confuse – MIT Technology Review

Information firehose: The standard practice for teaching a machine-learning algorithm is to give it all the details at once. Say you're building an image classification system to recognize different species of animals. You show it examples of each species and label them accordingly: "German shepherd" and "poodle" for dogs, for example.

But when a parent is teaching a child, the approach is entirely different. They start with much broader labels: any species of dog is at first simply a dog. Only after the child has learned how to distinguish these simpler categories does the parent break each one down into more specifics.

Dispelled confusion: Drawing inspiration from this approach, researchers at Carnegie Mellon University created a new technique that teaches a neural network to classify things in stages. In each stage, the network sees the same training data. But the labels start simple and broad, becoming more specific over time.

To determine this progression of difficulty, the researchers first showed the neural network the training data with the final detailed labels. They then computed what's known as a confusion matrix, which shows the categories the model had the most difficulty telling apart. The researchers used this to determine the stages of training, grouping the least distinguishable categories together under one label in early stages and splitting them back up into finer labels with each iteration.
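A minimal sketch of that grouping step, assuming we merge only the single most-confused pair of classes (the paper's actual procedure may group and iterate differently, and the class names here are invented):

```python
# Sketch: find the pair of classes the model confuses most often and
# map both to one coarse label for an early training stage. Class
# names and predictions are invented for illustration.
from collections import defaultdict

def confusion_counts(true_labels, predicted_labels):
    """Count off-diagonal confusions between class pairs (order-insensitive)."""
    counts = defaultdict(int)
    for t, p in zip(true_labels, predicted_labels):
        if t != p:
            counts[frozenset((t, p))] += 1
    return counts

def coarse_label_map(true_labels, predicted_labels):
    """Map the most-confused pair of classes to a single merged label."""
    counts = confusion_counts(true_labels, predicted_labels)
    pair = max(counts, key=counts.get)      # least distinguishable pair
    merged = "+".join(sorted(pair))
    return {c: (merged if c in pair else c) for c in set(true_labels)}
```

Training would then proceed in stages: first on the coarse labels produced by this map, then on the original fine-grained labels.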

Better accuracy: In tests with several popular image-classification data sets, the approach almost always led to a final machine-learning model that outperformed one trained by the conventional method. In the best-case scenario, it increased classification accuracy by up to 7%.

Curriculum learning: While the approach is new, the idea behind it is not. The practice of training a neural network on increasing stages of difficulty is known as curriculum learning and has been around since the 1990s. But previous curriculum learning efforts focused on showing the neural network a different subset of data at each stage, rather than the same data with different labels. The latest approach was presented by the paper's coauthor Otilia Stretcu at the International Conference on Learning Representations last week.

Why it matters: The vast majority of deep-learning research today emphasizes the size of models: if an image-classification system has difficulty distinguishing between different objects, it means it hasn't been trained on enough examples. But by borrowing insight from the way humans learn, the researchers found a new method that allowed them to obtain better results with exactly the same training data. It suggests a way of creating more data-efficient learning algorithms.


Clara Labs nabs $7M Series A as it positions its AI assistant to meet … – TechCrunch

Clara Labs, creator of the Clara AI assistant, is announcing a $7 million Series A this morning led by Basis Set Ventures. Slack Fund also joined in the round, alongside existing investors Sequoia and First Round. The startup will be looking to further differentiate within the crowded field of email-centric personal assistants by building in features and integrations to address the needs of enterprise teams.

Founded in 2014, Clara Labs has spent much of the last three years trying to fix email. When CC-ed on emails, the Clara assistant can automatically schedule meetings, reasoning around preferences like location and time.

If this sounds familiar, it's because you've probably come across x.ai or Fin. But while all three startups look similar on paper, each has its own distinct ideology. Where Clara is running toward the needs of teams, Fin embraces the personal pains of travel planning and shopping. Meanwhile, x.ai opts for maximum automation and lower pricing.

That last point around automation needs some extra context. Clara Labs prides itself on its implementation of a learning strategy called human-in-the-loop. For machines to analyze emails, they have to make a lot of decisions: is that date when you want to grab coffee, or is it the start of your vacation, when you'll be unable to meet?

In the open world of natural language, incremental machine learning advances only get you so far. So instead, companies like Clara convert uncertainty into simple questions that can be sent to humans on demand (think of a proprietary version of Amazon Mechanical Turk). The approach has become a tech trope with the rise of all things AI, but Maran Nelson, CEO of Clara Labs, is adamant that there's still a meaningful way to implement agile AI.

The trick is ensuring that a feedback mechanism exists for these questions to serve as training materials for uncertain machine learning models. Three years later, Clara Labs is confident that its approach is working.
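A stripped-down sketch of such a human-in-the-loop feedback mechanism might look like the following. The confidence threshold, data shapes, and class design are assumptions made for illustration, not Clara Labs' implementation:

```python
# Hypothetical human-in-the-loop router: confident model outputs pass
# through, uncertain ones become questions for a human, and resolved
# questions are collected as new training examples.
class HumanInTheLoop:
    def __init__(self, model, threshold=0.9):
        self.model = model            # callable: text -> (label, confidence)
        self.threshold = threshold    # invented cutoff for escalation
        self.human_queue = []         # items awaiting human review
        self.training_data = []       # (text, label) pairs fed back to training

    def process(self, text):
        label, confidence = self.model(text)
        if confidence >= self.threshold:
            return label
        self.human_queue.append(text)  # escalate the uncertain case
        return None

    def record_human_answer(self, text, label):
        """Resolved questions become training material for the model."""
        self.training_data.append((text, label))
```

The key property is the last method: every human answer is retained, so the uncertain cases of today shrink the human workload of tomorrow.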

Bankrolling the human in human-in-the-loop does cost everyone more, but people are willing to pay for performance. After all, even a nosebleed-inducing $399-per-month top-tier plan costs a fraction of what a real human assistant would.

Anyone who has ever experimented with adding new email tools into old workflows understands that Gmail and Outlook have tapped into the dark, masochistic part of our brain that remains addicted to inefficiency. It's tough to switch, and the default after trying tools like Clara is often a slow return to the broken way of doing things. Nelson says she's keeping a keen eye on user engagement, and numbers are healthy for now; there's undoubtedly a connection between accuracy and engagement.

As Clara positions its services around the enterprise, it will need to take into account professional sales and recruiting workflows. Integrations with core systems like Slack, CRMs and job applicant tracking systems will help Clara keep engagement numbers high while feeding machine learning models new edge cases to improve the quality of the entire product.

"Scheduling is different if you're a salesperson and your sales team is measured by the total number of meetings scheduled," Nelson told me in an interview.

Nelson is planning to make new hires in marketing and sales to push the Clara team beyond its current R&D comfort zone. Meanwhile the technical team will continue to add new features and integrations, like conference room booking, that increase the value-add of the Clara assistant.

Xuezhao Lan of Basis Set Ventures will be joining the Clara Labs board of directors as the company moves into its next phase of growth. Lan will bring both knowledge of machine learning and strategy to the board. Today's Clara deal is one of the first public deals to involve the recently formed $136 million AI-focused Basis Set fund.


Immervision uses AI for better wide-angle smartphone videos and photos – VentureBeat

Immervision has announced real-time video distortion correction software to help create professional-quality videos on smartphones. The Montreal company also revealed an off-the-shelf 125-degree wide-angle lens, enabling mobile phone makers to improve their next-generation smartphone cameras. The software algorithms are now available for mobile phone makers to license from Immervision's exclusive distribution partner Ceva and promise to enhance images through artificial intelligence and machine learning.

The wider field of view (FOV) in phones creates more apparent distortion than you would see with other cameras. But the software algorithms from Immervision help correct stretched bodies and can adjust proportions of objects, lines, and faces in real time. "The AI can take a line that looks like a banana and straighten it out," said Alessandro Gasparini, executive vice president of operations and chief commercial officer at Immervision, in an interview with VentureBeat.

Whether the goal is to leave the preset as is, fully customize it, let end users decide, allow phone orientation to dictate, or leverage machine learning to control the result, Immervision said it can help phone makers differentiate their hardware. Gasparini said the algorithms offer real-time distortion correction in both videos and pictures, adjusting the perspective, capturing more of a scene with less distortion, and correcting line and object distortion.
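Immervision's algorithms are proprietary, but the textbook way to correct radial ("barrel") distortion is a polynomial remapping of each pixel's distance from the optical center. A one-parameter sketch, with an invented coefficient:

```python
# Textbook first-order radial distortion remapping, not Immervision's
# actual algorithm: each normalized point is scaled radially by
# (1 + k1 * r^2), pushing edge points outward to counter barrel
# compression. The coefficient k1 here is invented for illustration.
def undistort_point(x, y, k1):
    """Remap one normalized image point using a radial polynomial."""
    r2 = x * x + y * y               # squared distance from optical center
    scale = 1.0 + k1 * r2
    return x * scale, y * scale

def undistort_line(points, k1=0.1):
    """Apply the remapping to every point of a (curved) line."""
    return [undistort_point(x, y, k1) for x, y in points]
```

Because the scale factor grows with distance from the center, a line that bows inward near the edges ("a line that looks like a banana") gets stretched back out, while points near the center are left nearly untouched.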

Above: Immervision fixes curved building lines caused by smaller fields of view.

Image Credit: Immervision

While the majority of tier-one phone makers have wide-angle lenses in their phones, tier-two and tier-three mobile brands have yet to adopt them. Immervision's technology has been preconfigured on popular sensors, including Sony, Omnivision, and Samsung, and has one lens with ready-to-use software, reducing camera customization and integration time. The lens is 6.4 millimeters high and ranges from eight megapixels to 20 megapixels in terms of image quality.

"We design lenses for the mobile industry, with action cameras and broadcast cameras of different sizes, different resolutions, and different fields of view," Gasparini said.

Immervision surveyed users to find out what kind of image quality and distortion issues mattered most. "Some lenses with low FOV numbers can make people on the edges of photos look fatter than they are, and that really makes people mad," Gasparini said. Most smartphones have lenses that are 100 to 130 degrees FOV. Immervision's competition in this market includes Apple and Samsung, which do their own work. But Immervision aims to arm the rest of the industry with the same kind of high-quality cameras.

Above: Immervision helps a camera get a better view of a scene.

Image Credit: Immervision

Gasparini said Immervision specializes in a combination of optical design and image processing, with different types of engineers under the same roof.

"We find ourselves to be one of the largest independent optical design firms in the world," he said. "If you look at some of the companies that manufacture optics today for smartphones, they might have one or two optical designers in their factory. Actually, we have more, and we have cross-pollination of different competencies in our company."

Immervision was founded in 2000 and employs around 30 people. Gasparini said the company has managed good profit margins as it works to help cameras better reproduce reality.

"Software can do certain magic on images. But there are limitations," Gasparini said. "And there are challenges the next generation has dealing with more video. The new smartphones are cinematographic, and more people will be shooting short films and movies with them. This will increase the challenge of processing them in real time."


No, Facebook did not shut down AI program for getting too smart – WTOP


WASHINGTON Facebook artificial intelligence bots tasked with dividing items between them have been shut down after the bots started talking to each other in their own language.

But hold off on making comparisons to "Terminator" or "The Matrix."

ForbesBooks Radio host and technology correspondent Gregg Stebben said that Facebook shut down the artificial intelligence program not because the company was afraid the bots were going to take over, but because the bots did not accomplish the task they were assigned to do: negotiate.

The bots are not really robots in the physical sense, Stebben said, but chatbots: little servers or digital chips doing the responding. The bots were just discussing how to divide some items between them, according to Gizmodo.

The language the program created comprised English words with a syntax that would not be familiar to humans, Stebben said.

Below is a sample of the conversation between the bots, called Bob and Alice:

Bob: i can i i everything else

Alice: Balls have zero to me to me to me to me to me to me to me to me to

Though there is a method to the bots' language, FAIR scientist Mike Lewis told FastCo Design that the researchers' interest was having bots who could talk to people.

"If we're calling it AI, why are we surprised when it shows intelligence?" Stebben said. "Increasingly we are going to begin communicating with beings that are not humans at all."

So should there be fail-safes to prevent an apocalyptic future controlled by machines?

"What we will find is, we will never achieve a state where we have absolute control of machines," Stebben said. "They will continue to surprise us, we will have to do things to continue to control them, and I think there will always be a risk that they will do things that we didn't expect."

WTOP's Dimitri Sotis contributed to this report.


2017 WTOP. All Rights Reserved.


Widex Introduces My Sound: A New Portfolio of AI-enabled Features for Customization of Its Industry-leading Widex MOMENT Hearing Aids – PRNewswire

HAUPPAUGE, N.Y., June 9, 2021 /PRNewswire/ -- Building on the success of the revolutionary, artificial intelligence-based SoundSense Learn technology, Widex USA Inc. today announced Widex My Sound, a portfolio of AI features including a new solution that instantly enables intelligent customization of the company's cutting-edge Widex MOMENT hearing aids based on a user's activity and listening intent.

Widex was the first company to enable user-driven sound personalization by leveraging artificial intelligence in hearing aids. Now, within My Sound, Widex launches the third generation of its AI technology, vastly improving the usability of the AI solution based on the extensive data the company has gathered from the previous two generations.

This new AI solution further combines the capacity of artificial intelligence with users' personal real-world experience to deliver another level of automated customization. Through AI modeling and clustering of data collected via the Widex SoundSense Learn AI engine, highly qualified sound profile recommendations for the individual user can now be made based on the intent, need, and preferences of thousands of users in similar real-world situations.

"Widex is leading the industry by combining artificial intelligence and human intelligence to create natural sound experiences and foster social participation through better hearing," said Jodi Sasaki-Miraglia, AuD, Widex's Director of Professional Training and Education. "Once Widex Moment is fit properly by a local licensed hearing care professional, the user can, if necessary, customize their hearing aids with ease, choosing from multiple AI features. Plus, our latest generation delivers results in just seconds, putting control and intelligent personalization into the hands of every user."

My Sound is integrated into the Widex MOMENT app and is the home for all the powerful AI personalization Widex offers. The latest generation of AI utilizes the cloud-based user data of Widex users worldwide to make sound profile recommendations based on an individual user's current activity and listening intent. Users launch My Sound from the app and begin by selecting their activity, such as dining, then choosing their intent, such as socializing, conversation, or enjoying music.

Based on the user's selections, Widex can draw on tens of thousands of real-life data points, reflecting the preferences and listening situations of other Widex users who have used the app previously. In seconds, the user is presented with two recommendations, which can both be listened to before selecting the settings that sound best. In the event neither recommendation meets the individual user's needs, they can launch SoundSense Learn from the same screen to further personalize their hearing experience through that solution's sophisticated A/B testing process.
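Conceptually, the recommendation step resembles looking up what similar users preferred and surfacing the top candidates for A/B comparison. The profile names and data below are invented for illustration; Widex's actual clustering and modeling are proprietary:

```python
# Toy crowd-preference lookup: for a given (activity, intent) pair,
# return the two most popular sound profiles among similar users.
# All profile names and counts here are invented.
from collections import Counter

# (activity, intent) -> profiles other users in that situation settled on
CROWD_PREFERENCES = {
    ("dining", "conversation"): [
        "speech_focus", "speech_focus", "balanced",
        "speech_focus", "balanced", "quiet",
    ],
}

def recommend(activity, intent, k=2):
    """Return the k most common profiles for this activity/intent pair."""
    profiles = CROWD_PREFERENCES.get((activity, intent), [])
    return [profile for profile, _ in Counter(profiles).most_common(k)]
```

The user would then audition both returned profiles and keep whichever sounds best, falling back to the A/B testing flow when neither fits.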

"Widex has created a radically different way of delivering hearing solutions for today's active hearing aid user," Sasaki-Miraglia continued. "Instead of having to program the hearing aid in a way that covers all situations the user might encounter, the hearing care professional ensures the best possible starting point for the user and My Sound then allows users to personalize their experience in real life, easily and instantly. In this way, the hearing solution adapts to the user's preferences and becomes even more personal."

The Widex MOMENT app, including My Sound with SoundSense Learn, is available for Apple and Android devices and is designed to work with Widex MOMENT Bluetooth hearing aids.

For more information about Widex MOMENT, click here. For high-res images and screen shots, click here.

AboutWidex

At Widex, we believe in a world where there are no barriers to communication; a world where people interact freely, effortlessly and confidently. With sixty years' experience developing state-of-the-art technology, we provide hearing solutions that are easy to use, seamlessly integrated in daily life and enable people to hear naturally. As one of the world's leading hearing aid producers, our products are sold in more than one hundred countries, and we employ 4,000 people worldwide.

Media Contact: Dan Griffin Griffin360 212-481-3456 [emailprotected]

SOURCE Widex


A Facebook AI Unexpectedly Created Its Own Unique Language – Futurism

In Brief: While developing negotiating chatbot agents, Facebook researchers found that the bots spontaneously developed their own non-human language as they improved their techniques, highlighting how little we still know about how artificial intelligences learn.

The Future of Language

A recent Facebook report on the way chatbots converse with each other has given the world a glimpse into the future of language.

In the report, researchers from the Facebook Artificial Intelligence Research lab (FAIR) describe training their chatbot dialog agents to negotiate using machine learning. The chatbots were eager and successful dealmaking pupils, but the researchers eventually realized they needed to tweak their model because the bots were creating their own negotiation language, diverting from human languages.

To put it another way, when they used a model that allowed the chatbots to converse freely, using machine learning to incrementally improve their conversational negotiation strategies as they chatted, the bots eventually created and used their own non-human language.

The unique, spontaneous development of a non-human language was probably the most baffling and thrilling development for the researchers, but it wasn't the only one. The chatbots also proved to be smart about negotiating and used advanced strategies to improve their outcomes. For example, a bot might pretend to be interested in something that had no value to it in order to be able to sacrifice that thing later as part of a compromise.

Although Facebook's bargain-hunting bots aren't a sign of an imminent singularity, or anything even approaching that level of sophistication, they are significant, in part because they prove once again that an important realm we once assumed was solely the domain of humans, language, is definitely a shared space. This discovery also highlights how much we still don't know about the ways that artificial intelligences (AIs) think and learn, even when we create them and model them after ourselves.
