What Is Machine Learning and Why Is It Important? – SearchEnterpriseAI

What is machine learning?

Machine learning (ML) is a type of artificial intelligence (AI) that allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so. Machine learning algorithms use historical data as input to predict new output values.

Recommendation engines are a common use case for machine learning. Other popular uses include fraud detection, spam filtering, malware threat detection, business process automation (BPA) and predictive maintenance.

Machine learning is important because it gives enterprises a view of trends in customer behavior and business operational patterns, as well as supports the development of new products. Many of today's leading companies, such as Facebook, Google and Uber, make machine learning a central part of their operations. Machine learning has become a significant competitive differentiator for many companies.

Classical machine learning is often categorized by how an algorithm learns to become more accurate in its predictions. There are four basic approaches: supervised learning, unsupervised learning, semi-supervised learning and reinforcement learning. The type of algorithm data scientists choose to use depends on what type of data they want to predict.

Supervised machine learning requires the data scientist to train the algorithm with both labeled inputs and desired outputs. Supervised learning algorithms are well suited to tasks such as classification (assigning inputs to known categories) and regression (predicting continuous values).
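
For readers who want to see the idea in code, here is a minimal supervised-learning sketch; scikit-learn and the built-in dataset are illustrative choices, not part of the article:

```python
# Minimal supervised-learning sketch: labeled inputs and desired outputs
# train a classifier, which then predicts labels for unseen data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)              # historical, labeled data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)                              # learn from labeled examples
print("held-out accuracy:", model.score(X_test, y_test))
```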

Unsupervised machine learning algorithms do not require data to be labeled. They sift through unlabeled data to look for patterns that can be used to group data points into subsets. Some deep learning techniques, such as autoencoders, learn in this unsupervised way. Unsupervised learning algorithms are well suited to tasks such as clustering, anomaly detection and dimensionality reduction.
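
A matching unsupervised sketch, again assuming scikit-learn: no labels are provided, and k-means groups the points on its own.

```python
# Minimal unsupervised-learning sketch: the labels are discarded, and k-means
# groups similar points into subsets (clusters) on its own.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)   # unlabeled points
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(clusters[:10])    # cluster assignments discovered without any labels
```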

Semi-supervised learning works by data scientists feeding a small amount of labeled training data to an algorithm. From this, the algorithm learns the dimensions of the data set, which it can then apply to new, unlabeled data. The performance of algorithms typically improves when they train on labeled data sets. But labeling data can be time consuming and expensive. Semi-supervised learning strikes a middle ground between the performance of supervised learning and the efficiency of unsupervised learning. Some areas where semi-supervised learning is used include machine translation, fraud detection and data labeling.
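
A small illustration of the semi-supervised idea, assuming scikit-learn's self-training wrapper; the dataset and the fraction of hidden labels are arbitrary:

```python
# Semi-supervised sketch: most labels are hidden (marked -1) and a self-training
# wrapper uses the model's own confident predictions to label the rest.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)
hidden = rng.random(len(y)) < 0.9          # hide 90% of the labels
y_partial = y.copy()
y_partial[hidden] = -1                     # -1 means "unlabeled"

model = SelfTrainingClassifier(SVC(probability=True, gamma=0.001))
model.fit(X, y_partial)
print("accuracy on the hidden labels:", model.score(X[hidden], y[hidden]))
```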

Reinforcement learning works by programming an algorithm with a distinct goal and a prescribed set of rules for accomplishing that goal. Data scientists also program the algorithm to seek positive rewards -- which it receives when it performs an action that is beneficial toward the ultimate goal -- and avoid punishments -- which it receives when it performs an action that gets it farther away from its ultimate goal. Reinforcement learning is often used in areas such as robotics, video game play and resource management.
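
A toy sketch of the reward-and-penalty loop, here tabular Q-learning on a one-dimensional corridor (an illustrative setup, not a production system):

```python
# Toy reinforcement-learning sketch: tabular Q-learning on a 1-D corridor.
# Reaching the goal on the right earns a reward; every other step costs a
# small penalty, so the agent learns to always move right.
import numpy as np

n_states, actions = 6, [-1, +1]            # positions 0..5, move left or right
Q = np.zeros((n_states, len(actions)))     # learned value of each state/action
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(500):                       # training episodes
    s = 0
    while s != n_states - 1:
        a = np.random.randint(2) if np.random.rand() < epsilon else int(Q[s].argmax())
        s_next = min(max(s + actions[a], 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else -0.1
        Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))   # policy: 1 ("move right") in every non-terminal state
```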

Today, machine learning is used in a wide range of applications. Perhaps one of the most well-known examples of machine learning in action is the recommendation engine that powers Facebook's news feed.

Facebook uses machine learning to personalize how each member's feed is delivered. If a member frequently stops to read a particular group's posts, the recommendation engine will start to show more of that group's activity earlier in the feed.

Behind the scenes, the engine is attempting to reinforce known patterns in the member's online behavior. Should the member change patterns and fail to read posts from that group in the coming weeks, the news feed will adjust accordingly.

In addition to recommendation engines, other uses for machine learning include the following:

Machine learning has seen use cases ranging from predicting customer behavior to forming the operating system for self-driving cars.

When it comes to advantages, machine learning can help enterprises understand their customers at a deeper level. By collecting customer data and correlating it with behaviors over time, machine learning algorithms can learn associations and help teams tailor product development and marketing initiatives to customer demand.

Some companies use machine learning as a primary driver in their business models. Uber, for example, uses algorithms to match drivers with riders. Google uses machine learning to surface ride advertisements in search results.

But machine learning comes with disadvantages. First and foremost, it can be expensive. Machine learning projects are typically driven by data scientists, who command high salaries. These projects also require software infrastructure that can be expensive.

There is also the problem of machine learning bias. Algorithms trained on data sets that exclude certain populations or contain errors can lead to inaccurate models of the world that, at best, fail and, at worst, are discriminatory. When an enterprise bases core business processes on biased models it can run into regulatory and reputational harm.

The process of choosing the right machine learning model to solve a problem can be time consuming if not approached strategically.

Step 1: Align the problem with potential data inputs that should be considered for the solution. This step requires help from data scientists and experts who have a deep understanding of the problem.

Step 2: Collect data, format it and label the data if necessary. This step is typically led by data scientists, with help from data wranglers.

Step 3: Choose which algorithm(s) to use and test to see how well they perform. This step is usually carried out by data scientists.

Step 4: Continue to fine-tune outputs until they reach an acceptable level of accuracy. This step is usually carried out by data scientists with feedback from experts who have a deep understanding of the problem.
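
As a rough illustration of steps 3 and 4, the sketch below (assuming scikit-learn and a placeholder dataset) compares two candidate algorithms with cross-validation and then fine-tunes the better performer with a small grid search:

```python
# Sketch of steps 3 and 4: compare candidate algorithms with cross-validation,
# then fine-tune the better one's settings with a small grid search.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)               # placeholder dataset

candidates = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, model in candidates.items():                    # step 3: test algorithms
    print(name, cross_val_score(model, X, y, cv=5).mean())

search = GridSearchCV(                                     # step 4: fine-tune
    RandomForestClassifier(random_state=0),
    {"n_estimators": [100, 300], "max_depth": [None, 5]},
    cv=5,
)
search.fit(X, y)
print("best settings:", search.best_params_, "accuracy:", search.best_score_)
```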

Explaining how a specific ML model works can be challenging when the model is complex. There are some vertical industries where data scientists have to use simple machine learning models because it's important for the business to explain how every decision was made. This is especially true in industries with heavy compliance burdens such as banking and insurance.

Complex models can produce accurate predictions, but explaining to a lay person how an output was determined can be difficult.

While machine learning algorithms have been around for decades, they've attained new popularity as artificial intelligence has grown in prominence. Deep learning models, in particular, power today's most advanced AI applications.

Machine learning platforms are among enterprise technology's most competitive realms, with most major vendors, including Amazon, Google, Microsoft, IBM and others, racing to sign customers up for platform services that cover the spectrum of machine learning activities, including data collection, data preparation, data classification, model building, training and application deployment.

As machine learning continues to increase in importance to business operations and AI becomes more practical in enterprise settings, the machine learning platform wars will only intensify.

Continued research into deep learning and AI is increasingly focused on developing more general applications. Today's AI models require extensive training in order to produce an algorithm that is highly optimized to perform one task. But some researchers are exploring ways to make models more flexible and are seeking techniques that allow a machine to apply context learned from one task to future, different tasks.

1642 - Blaise Pascal invents a mechanical machine that can add, subtract, multiply and divide.

1679 - Gottfried Wilhelm Leibniz devises the system of binary code.

1834 - Charles Babbage conceives the idea for a general all-purpose device that could be programmed with punched cards.

1842 - Ada Lovelace describes a sequence of operations for solving mathematical problems using Charles Babbage's theoretical punch-card machine and becomes the first programmer.

1847 - George Boole creates Boolean logic, a form of algebra in which all values can be reduced to the binary values of true or false.

1936 - English logician and cryptanalyst Alan Turing proposes a universal machine that could decipher and execute a set of instructions. His published proof is considered the basis of computer science.

1952 - Arthur Samuel creates a program to help an IBM computer get better at checkers the more it plays.

1959 - MADALINE becomes the first artificial neural network applied to a real-world problem: removing echoes from phone lines.

1985 - Terry Sejnowski's and Charles Rosenberg's artificial neural network taught itself how to correctly pronounce 20,000 words in one week.

1997 - IBM's Deep Blue beat chess grandmaster Garry Kasparov.

1999 - A CAD prototype intelligent workstation reviewed 22,000 mammograms and detected cancer 52% more accurately than radiologists did.

2006 - Computer scientist Geoffrey Hinton popularizes the term deep learning to describe neural net research.

2012 - An unsupervised neural network created by Google learned to recognize cats in YouTube videos with 74.8% accuracy.

2014 - A chatbot passes the Turing Test by convincing 33% of human judges that it was a Ukrainian teen named Eugene Goostman.

2016 - Google DeepMind's AlphaGo defeats world champion Lee Sedol at Go, one of the most difficult board games in the world.

2016 - LipNet, DeepMind's artificial intelligence system, identifies lip-read words in video with an accuracy of 93.4%.

2019 - Amazon controls 70% of the market share for virtual assistants in the U.S.


Researchers Using Artificial Intelligence to Assist With Early Detection of Autism Spectrum Disorder – University of Arkansas Newswire

Photo by University Relations

Khoa Luu and Han-Seok Seo

Could artificial intelligence be used to assist with the early detection of autism spectrum disorder? That's a question researchers at the University of Arkansas are trying to answer. But they're taking an unusual tack.

Han-Seok Seo, an associate professor with a joint appointment in food science and the UA System Division of Agriculture, and Khoa Luu, an assistant professor in computer science and computer engineering, will identify sensory cues from various foods in both neurotypical children and those known to be on the spectrum. Machine learning technology will then be used to analyze biometric data and behavioral responses to those smells and tastes as a way of detecting indicators of autism.

There are a number of behaviors associated with ASD, including difficulties with communication, social interaction or repetitive behaviors. People with ASD are also known to exhibit some abnormal eating behaviors, such as avoidance of some if not many foods, specific mealtime requirements and non-social eating. Food avoidance is particularly concerning, because it can lead to poor nutrition, including vitamin and mineral deficiencies. With that in mind, the duo intend to identify sensory cues from food items that trigger atypical perceptions or behaviors during ingestion. For instance, odors like peppermint, lemons and cloves are known to evoke stronger reactions from those with ASD than those without, possibly triggering increased levels of anger, surprise or disgust.

Seo is an expert in the areas of sensory science, behavioral neuroscience, biometric data and eating behavior. He is organizing and leading this project, including screening and identifying specific sensory cues that can differentiate autistic children from non-autistic children with respect to perception and behavior. Luu is an expert in artificial intelligence with specialties in biometric signal processing, machine learning, deep learning and computer vision. He will develop machine learning algorithms for detecting ASD in children based on unique patterns of perception and behavior in response to specific test samples.

The duo are in the second year of a three-year, $150,000 grant from the Arkansas Biosciences Institute.

Their ultimate goal is to create an algorithm that performs as well as or better than traditional diagnostic methods at the early detection of autism in children. Those methods require evaluations by trained healthcare and psychological professionals, longer assessment durations, caregiver-submitted questionnaires and additional medical costs. Ideally, they will be able to validate a lower-cost mechanism to assist with the diagnosis of autism. While their system would not likely be the final word in a diagnosis, it could provide parents with an initial screening tool, ideally ruling out children who are not candidates for ASD while ensuring the most likely candidates pursue a more comprehensive screening process.

Seo said that he became interested in the possibility of using multi-sensory processing to evaluate ASD when two things happened: he began working with a graduate student, Asmita Singh, who had a background in working with autistic students, and the birth of his daughter. Like many first-time parents, Seo paid close attention to his newborn baby, anxious that she be healthy. When he noticed she wouldn't make eye contact, he did what most nervous parents do: turned to the internet for an explanation. He learned that avoidance of eye contact was a known characteristic of ASD.

While his child did not end up having ASD, his curiosity was piqued, particularly about the role sensitivities to smell and taste play in ASD. Further conversations with Singh led him to believe fellow anxious parents might benefit from an early detection tool, one that could inexpensively alleviate concerns at the outset. Later conversations with Luu led the pair to believe that if machine learning, developed by his graduate student Xuan-Bac Nguyen, could be used to identify normal reactions to food, it could be taught to recognize atypical responses as well.

Seo is seeking volunteers 5-14 years old to participate in the study. Both neurotypical children and children already diagnosed with ASD are needed for the study. Participants receive a $150 eGift card for participating and are encouraged to contact Seo at hanseok@uark.edu.

About the University of Arkansas: As Arkansas' flagship institution, the UofA provides an internationally competitive education in more than 200 academic programs. Founded in 1871, the UofA contributes more than $2.2 billion to Arkansas' economy through the teaching of new knowledge and skills, entrepreneurship and job development, discovery through research and creative activity while also providing training for professional disciplines. The Carnegie Foundation classifies the UofA among the few U.S. colleges and universities with the highest level of research activity. U.S. News & World Report ranks the UofA among the top public universities in the nation. See how the UofA works to build a better world at Arkansas Research News.


Join the challenge to explore the Moon! – EurekAlert

Image: The Archytas Dome region of the lunar surface is the target area for the EXPLORE Lunar Data Challenges 2022.

Credit: NASA/GSFC/Arizona State University/EXPLORE/Jacobs University. https://exploredatachallenges.space/wp-content/uploads/2022/09/Archytas2.png

Lunar enthusiasts of all ages are challenged to help identify features on the Moon that might pose a hazard to rovers or astronauts exploring the surface.

The 2022 EXPLORE Lunar Data Challenge is focused on the Archytas Dome region, close to the Apollo 17 landing site where the last humans set foot on the Moon 50 years ago this December.

The Machine Learning Lunar Data Challenge is open to students, researchers and professionals in areas related to planetary sciences, but also to anyone with expertise in data processing. There is also a Public Lunar Data Challenge to plot the safe traverse of a lunar rover across the surface of the Moon, open to anyone who wants to have a go, as well as a Classroom Lunar Data Challenge for schools, with hands-on activities about lunar exploration and machine learning.

Announcing the EXPLORE Machine Learning Lunar Data Challenge during the Europlanet Science Congress (EPSC) 2022 in Granada, Spain, this week, Giacomo Nodjoumi said: "The Challenge uses data of the Archytas Dome taken by the Narrow Angle Camera (NAC) on the Lunar Reconnaissance Orbiter (LRO) mission. This area of the Moon is packed with craters of different ages, boulders, mounds, and a long, sinuous depression, or rille. The wide variety of features in this zone makes it a very interesting area for exploration and the perfect scenario for this Data Challenge."

The Machine Learning Lunar Data Challenge is in three steps: firstly, participants should train and test a model capable of recognising craters and boulders on the lunar surface. Secondly, they should use their model to label craters and boulders in a set of images of the Archytas zone. Finally, they should use the outputs of their models to create a map of an optimal traverse across the lunar surface to visit defined sites of scientific interest and avoid hazards, such as heavily cratered zones.
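
The final step is essentially a path-planning problem. As a rough, illustrative sketch (not the challenge organisers' code), the Python snippet below plots a traverse across a small grid in which cells flagged by a crater or boulder detector are treated as hazards:

```python
# Illustrative traverse planner: a breadth-first search finds a route between
# two sites that avoids grid cells flagged as hazards (craters or boulders).
from collections import deque

hazard = [                      # 1 = cell flagged as hazardous by a detector
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def safe_traverse(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None                 # no hazard-free route exists

print(safe_traverse(hazard, start=(0, 0), goal=(4, 4)))
```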

The public and schools are also invited to use lunar images to identify features and plot a journey for a rover. Prizes for the challenges include vouchers totalling 1500 Euros, as well as pieces of real Moon rock from lunar meteorites.

The EXPLORE project, which is funded through the European Commission's Horizon 2020 Programme, gathers experts from different fields of science and technical expertise to develop new tools that will promote the exploitation of space science data.

"Through the EXPLORE Data Challenges, we aim to raise awareness of the scientific tools that we are developing, improve their accuracy by bringing in expertise from other communities, and involve schools and the public in space science research," said Nick Cox, the Coordinator of the EXPLORE project.

The deadline for entries is 21 November 2022, and winners will be announced in mid-December on the anniversaries of the Apollo 17 mission milestones.

The 2022 EXPLORE Data Challenges can be found at: https://exploredatachallenges.space



Cellarity Releases Novel, Open-Source, Single-Cell Dataset and Invites the Machine Learning and Computational Biology Communities to Develop New…

SOMERVILLE, Mass.--(BUSINESS WIRE)--Cellarity, a life sciences company founded by Flagship Pioneering to transform the way medicines are created, announced today the release of a unique single-cell dataset to accelerate innovation in mapping multimodal genetic information across cell states and over time. This dataset will be used to power a competition hosted by Open Problems in Single-Cell Analysis.

Cells are among the most complex and dynamic systems and are regulated by the interplay of DNA, RNA, and proteins. Recent technological advances have made it possible to measure these cellular features and such data provide, for the first time, a direct and comprehensive view spanning the layers of gene regulation that drive biological systems and give rise to disease.

"Advancements in single-cell technologies now make it possible to decode genetic regulation, and we are excited to generate another first-of-its-kind dataset to support Open Problems in Single Cell Analysis," said Fabrice Chouraqui, PharmD, CEO of Cellarity and a CEO-Partner at Flagship Pioneering. "Developing new machine learning algorithms that can predict how a single-cell genome can drive a diversity of cellular states will provide new insights into how cells and tissues move from health to disease and support informed design of new medicines."

To drive innovation for such data, Cellarity generated a time course profiling in vitro differentiation of blood progenitors, a dataset designed in collaboration with scientists at Yale University, Chan Zuckerberg Biohub, and Helmholtz Munich. This time course will be used to power a competition to develop algorithms that learn the underlying relationships between DNA, RNA, and protein modalities across time. Solving this open problem will help elucidate complex regulatory processes that are the foundation for cell differentiation in health and disease.

"While multimodal single-cell data is increasingly available, methods to analyze these data are still scarce and often treat cells as static snapshots without modeling the underlying dynamics of cell state," said Daniel Burkhardt, Ph.D., cofounder of Open Problems in Single-Cell Analysis and Machine Learning Scientist at Cellarity. "New machine learning algorithms are needed to learn the rules that govern complex cell regulatory processes so we can predict how cell state changes over time. We hope these new algorithms can augment the value of existing or future single-modality datasets, which can be cost effectively generated at higher quality to streamline and accelerate research."

In 2021, Cellarity partnered with Open Problems collaborators to develop the first benchmark competition for multimodal single-cell data integration using a first-of-its-kind multi-omics benchmarking dataset (NeurIPS 2021). This dataset was the largest atlas of the human bone marrow measured across DNA, RNA, and proteins and was used to predict one modality from another and learn representations of multiple modalities measured in the same cells. The 2021 competition saw winning submissions from both computational biologists with deep single-cell expertise and machine learning practitioners for whom this competition marked their first foray into biology. This translation of knowledge across disciplines is expected to drive more powerful algorithms to learn fundamental rules of biology.

For 2022, Cellarity and Open Problems are extending the challenge to drive innovation in modeling temporal single-cell data measured in multiple modalities at multiple time points. For this year's competition, Cellarity generated a 300,000-cell time course dataset of CD34+ hematopoietic stem and progenitor cells (HSPCs) from four human donors at five time points. HSPCs are stem cells that give rise to all other cells in the blood throughout adult life, and a 10-day time course captures important biology in CD34+ HSPCs. Being able to solve the prediction problems over time is expected to yield new insights into how gene regulation influences differentiation.

Entries to the competition will be accepted until November 15, 2022. For more information, visit the competition page on Kaggle.

About Open Problems in Single Cell Analysis

Open Problems in Single-Cell Analysis was founded in 2020 bringing together academic, non-profit, and for-profit institutions to accelerate innovation in single-cell algorithm development. An explosion in single-cell analysis algorithms has resulted in more than 1,200 methods published in the last five years. However, few standard benchmarks exist for single-cell biology, both making it difficult to identify top performing algorithms and hindering collaboration with the machine learning community to accelerate single-cell science. Open Problems is a first-of-its-kind international consortium developing a centralized, open-source, and continuously updated framework for benchmarking single-cell algorithms to drive innovation and alignment in the field. For more information, visit https://openproblems.bio/.

About Cellarity

Cellarity's mission is to fundamentally transform the way medicines are created. Founded by Flagship Pioneering in 2017, Cellarity has developed unique capabilities combining high-resolution data, single-cell technologies, and machine learning to encode biology, predict interventions, and purposefully design breakthrough medicines. By focusing on the cellular changes that underlie disease instead of a single target, Cellarity's approach uncovers new biology and treatments and is applicable to a vast array of disease areas. The company currently has programs underway in metabolic disease, hematology, immuno-oncology, and respiratory disease. For more info, visit http://www.cellarity.com.

About Flagship Pioneering

Flagship Pioneering conceives, creates, resources, and develops first-in-category bioplatform companies to transform human health and sustainability. Since its launch in 2000, the firm has, through its Flagship Labs unit, applied its unique hypothesis-driven innovation process to originate and foster more than 100 scientific ventures, resulting in more than $100 billion in aggregate value. To date, Flagship has deployed over $2.9 billion in capital toward the founding and growth of its pioneering companies alongside more than $19 billion of follow-on investments from other institutions. The current Flagship ecosystem comprises 41 transformative companies, including Denali Therapeutics (NASDAQ: DNLI), Evelo Biosciences (NASDAQ: EVLO), Foghorn Therapeutics (NASDAQ: FHTX), Moderna (NASDAQ: MRNA), Omega Therapeutics (NASDAQ: OMGA), Rubius Therapeutics (NASDAQ: RUBY), Sana Biotechnology (NASDAQ: SANA), and Seres Therapeutics (NASDAQ: MCRB).


Tackling reproducibility and driving machine learning with digitisation – Scientific Computing World

Dr Birthe Nielsen discusses the role of the Methods Database in supporting life sciences research by digitising methods data across different life science functions.

Reproducibility of experiment findings and data interoperability are two of the major barriers facing life sciences R&D today. Independently verifying findings by re-creating experiments and generating the same results is fundamental to progressing research to the next stage in its lifecycle, be it advancing a drug to clinical development, or a product to market. Yet, in the field of biology alone, one study found that 70 per cent of researchers are unable to reproduce the findings of other scientists, and 60 per cent are unable to reproduce their own findings.

This causes delays to the R&D process throughout the life sciences ecosystem. For example, biopharmaceutical companies often use external Contract Research Organisations (CROs) to conduct clinical studies. Without a centralised repository to provide consistent access, analytical methods are often shared with CROs via email or even physical documents, not in a standard format and often with inconsistent terminology. This leads to unnecessary variability and several versions of the same analytical protocol, which makes it very challenging for a CRO to re-establish and revalidate methods without a labour-intensive process that is open to human interpretation and thus error.

To tackle issues like this, the Pistoia Alliance launched the Methods Hub project. The project aims to overcome the issue of reproducibility by digitising methods data across different life science functions, and ensuring data is FAIR (Findable, Accessible, Interoperable, Reusable) from the point of creation. This will enable seamless and secure sharing within the R&D ecosystem, reduce experiment duplication, standardise formatting to make data machine-readable, and increase reproducibility and efficiency. Robust data management is also the building block for machine learning and is the stepping-stone to realising the benefits of AI.

Digitisation of paper-based processes increases the efficiency and quality of methods data management. But it goes beyond manually keying in method parameters on a computer or using an Electronic Lab Notebook: a digital and automated workflow increases efficiency, instrument usage and productivity. Applying shared data standards ensures consistency and interoperability, in addition to fast and secure transfer of information between stakeholders.

One area that organisations need to address to comply with FAIR principles, and a key area in which the Methods Hub project helps, is how analytical methods are shared. This includes replacing free-text data capture with a common data model and standardised ontologies. For example, in a High-Performance Liquid Chromatography (HPLC) experiment, rather than manually typing out the analytical parameters (pump flow, injection volume, column temperature and so on), the scientist will simply download a method which automatically populates the execution parameters in any given Chromatography Data System (CDS). This not only saves time during data entry, but the common format also eliminates room for human interpretation or error.
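
To make the idea concrete, here is a hypothetical Python sketch of such a structured, vendor-neutral method record; the field names are illustrative and are not the actual Methods Hub schema:

```python
# Hypothetical, vendor-neutral HPLC method record: the analytical parameters
# live in one structured object instead of free text, so any system can load
# them without retyping. Field names are illustrative only.
from dataclasses import dataclass, asdict
import json

@dataclass
class HplcMethod:
    pump_flow_ml_min: float
    injection_volume_ul: float
    column_temperature_c: float
    detection_wavelength_nm: float

method = HplcMethod(pump_flow_ml_min=1.0, injection_volume_ul=10.0,
                    column_temperature_c=30.0, detection_wavelength_nm=254.0)

payload = json.dumps(asdict(method))          # what would be shared via the cloud
restored = HplcMethod(**json.loads(payload))  # a CDS-side tool repopulates parameters
print(restored)
```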

Additionally, creating a centralised repository like the Methods Hub in a vendor-neutral format is a step towards greater cyber-resiliency in the industry. When information is stored locally on a PC or an ELN and is not backed up, a single cyberattack can wipe it out instantly. Creating shared spaces for these notes via the cloud protects data and ensures it can be easily restored.

A proof of concept (PoC) via the Methods Hub project was recently completed to demonstrate the value of methods digitisation. The PoC involved the digital transfer of analytical HPLC methods via the cloud, proving it is possible to move analytical methods securely between two different companies and CDS vendors with ease. It has been successfully tested in labs at Merck and GSK, where there has been an effective transfer of HPLC-UV information between different systems. The PoC delivered a series of critical improvements to methods transfer that eliminate the manual keying of data; reduce risk, steps and errors; and increase overall flexibility and interoperability.

The Alliance project team is now working to extend the platform's functionality to connect analytical methods with results data, which would be an industry first. The team will also be adding support for columns and additional hardware, as well as other analytical techniques such as mass spectrometry and nuclear magnetic resonance (NMR) spectroscopy. It also plans to identify new use cases and further develop the cloud platform that enables secure methods transfer.

If industry-wide data standards and approaches to data management are to be agreed on and implemented successfully, organisations must collaborate. The Alliance recognises methods data management is a big challenge for the industry, and the aim is to make Methods Hub an integral part of the system infrastructure in every analytical lab.

Tackling issues such as digitisation of methods data doesn't just benefit individual companies but will have a knock-on effect for the whole life sciences industry. Introducing shared standards accelerates R&D, improves quality, and reduces the cost and time burden on scientists and organisations. Ultimately this ensures that new therapies and breakthroughs reach patients sooner. We are keen to welcome new contributors to the project, so we can continue discussing common barriers to successful data management, and work together to develop new solutions.

Dr Birthe Nielsen is the Pistoia Alliance Methods Database project manager


This Smart Doorbell Responds to Meowing Cats Using Machine Learning and IoT – Hackster.io

Those who own an outdoor cat or even several might run into the occasional problem of having to let them back in. Tired of constantly monitoring for when his cat wanted to come inside the house, GitHub user gamename opted for a more automated system.

The solution gamename came up with involves listening to ambient sounds with a single Raspberry Pi and an attached USB microphone. Whenever the locally-running machine learning model detects a meow, it sends a message to an AWS service over the internet where it can then trigger a text to be sent. This has the advantage of limiting false events while simultaneously providing an easy way for the cat to be recognized at the door.

This project started by installing the AWS command-line interface (CLI) onto the Raspberry Pi 4 and then signing in with an account. From here, gamename registered a new IoT device, downloaded the resulting configuration files, and ran the setup script. After quickly updating some security settings, a new function was created that waits for new messages coming from the MQTT service and causes a text message to be sent with the help of the SNS service.

After this plethora of services and configurations had been set up in the AWS project, gamename moved on to the next step of testing to see if messages are sent at the right time. His test script simply emulates a positive result by sending the certificates, key, topic, and message to the endpoint, where the user can then watch as the text appears on their phone a bit later.

The Raspberry Pi and microSD card were both placed into an off-the-shelf chassis, which sits just inside the house's entrance. After this, the microphone was connected with the help of two RJ45-to-USB cables that allow it to sit outside, inside a waterproof housing, up to 150 feet away.

Running on the Pi is a custom bash script that starts every time the board boots up, and its role is to launch the Python program. This causes the Raspberry Pi to read samples from the microphone and pass them to a TensorFlow audio classifier, which attempts to recognize the sound clip. If the primary noise is a cat, then the AWS API is called in order to publish the message to the MQTT topic. More information about this project can be found here in gamename's GitHub repository.
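
A hedged sketch of that pipeline is shown below. It is not gamename's actual code: it uses the pretrained YAMNet audio classifier from TensorFlow Hub and publishes through the AWS IoT data-plane API via boto3 for brevity, whereas the original project connects over MQTT with device certificates. The topic name and region are placeholders.

```python
# Hedged sketch of the meow-detection loop (not the original project's code).
# YAMNet is a pretrained TensorFlow Hub audio classifier; the IoT topic and
# AWS region below are placeholders.
import csv
import numpy as np
import boto3
import tensorflow as tf
import tensorflow_hub as hub

yamnet = hub.load("https://tfhub.dev/google/yamnet/1")
class_map = yamnet.class_map_path().numpy().decode("utf-8")
class_names = [row["display_name"] for row in csv.DictReader(tf.io.gfile.GFile(class_map))]

def is_cat_sound(waveform_16khz_mono: np.ndarray) -> bool:
    scores, _, _ = yamnet(waveform_16khz_mono)           # per-frame class scores
    top_class = class_names[int(scores.numpy().mean(axis=0).argmax())]
    return top_class in ("Cat", "Meow")

iot = boto3.client("iot-data", region_name="us-east-1")  # placeholder region

waveform = np.zeros(16000, dtype=np.float32)             # stand-in for a 1 s microphone sample
if is_cat_sound(waveform):
    iot.publish(topic="cat/door", qos=1, payload=b"meow detected")  # triggers the text downstream
```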


Multicolored Mars: Researchers Use Machine Learning To Map Source of Ancient Martian Meteorites – Tech Times

Mars was struck by an asteroid between five and ten million years ago. It produced a huge crater and launched a fresh meteorite made of ancient Martian crust into space, which finally plummeted into Africa.

Thanks to a supercomputer-powered technique that enables scientists to study the geology of other planets without leaving Earth, the meteorite's source was located.

(Photo: A. Lagain et al./Nature Commun.) By using a machine learning method on one of the fastest supercomputers in the Southern Hemisphere, located at the Pawsey Supercomputing Research Centre, the team was able to identify around 90 million impact craters. Kosta Servis, a senior data scientist at the Centre, contributed to the algorithm's development.

A global team of researchers discovered around 90 million impact craters on the Red Planet using a machine learning algorithm on one of the fastest supercomputers in the Southern Hemisphere, located at the Pawsey Supercomputing Research Centre, according to Nature's report.

The map was created by researchers looking into the origin of the Black Beauty meteorite, which was discovered in the Sahara Desert in 2011.

The Martian rocks that make up Black Beauty were created roughly 4.5 billion years ago, when the crusts of Earth and Mars were still developing, according to the study.

The researchers eventually determined the precise area of this impact after using the algorithm to eliminate some of the potential outcomes. The 10-kilometer-wide Karratha crater, according to the researchers, may serve as the focal point of a future Mars mission.

The technique underlying the discovery will also be used to locate billions of impact craters on the surfaces of Mercury and the Moon, as well as to determine the origin of other Martian meteorites. There have been over 300 Martian meteorites discovered on Earth thus far.

(Photo: NASA/Jet Propulsion Laboratory/Cornell University via Getty Images) MARS - JANUARY 6: In this handout released by NASA, angular and smooth surfaces of rocks are seen in an image taken by the panoramic camera on the Mars Exploration Rover Spirit, January 6, 2004. The rover landed on Mars January 3 and sent its first high-resolution color image January 6.


The crater was given the name Karratha by researchers in honor of a Western Australian city home to some of the planet's oldest rocks. The team wants NASA to give the area around Karratha Crater top priority as a potential location for a future Mars landing.

Thousands of high-resolution planetary photos from several Mars missions were analyzed by the team in order to identify the origin of the rocks on Mars.

Dr. Anthony Lagain of the Space Science and Technology Centre at Curtin University served as the study's principal investigator, and co-authors included researchers from Paris-Saclay University, the Paris Observatory, the Museum of Natural History, the French National Centre for Scientific Research, the Félix Houphouët-Boigny University in Côte d'Ivoire, Northern Arizona University, and Rutgers University in the US.



Written by Joaquin Victor Tacla



Are You Making These Deadly Mistakes With Your AI Projects? – Forbes

Since data is at the heart of AI, it should come as no surprise that AI and ML systems need enough good quality data to learn. In general, a large volume of good quality data is needed, especially for supervised learning approaches, in order to properly train the AI or ML system. The exact amount of data needed may vary depending on which pattern of AI you're implementing, the algorithm you're using, and other factors such as in-house versus third-party data. For example, neural nets need a lot of data to be trained, while decision trees or Bayesian classifiers don't need as much data to still produce high-quality results.

So you might think more is better, right? Well, think again. Organizations with lots of data, even exabytes, are realizing that having more data is not the solution to their problems as they might expect. Indeed, more data, more problems. The more data you have, the more data you need to clean and prepare. The more data you need to label and manage. The more data you need to secure, protect, mitigate bias, and more. Small projects can rapidly turn into very large projects when you start multiplying the amount of data. In fact, many times, lots of data kills projects.

Clearly the missing step between identifying a business problem and getting the data squared away to solve that problem is determining which data you need and how much of it you really need. You need enough, but not too much. "Goldilocks data" is what people often say: not too much, not too little, but just right. Unfortunately, far too often, organizations are jumping into AI projects without first addressing an understanding of their data. Questions organizations need to answer include figuring out where the data is, how much of it they already have, what condition it is in, what features of that data are most important, use of internal or external data, data access challenges, requirements to augment existing data, and other crucial factors and questions. Without these questions answered, AI projects can quickly die, even drowning in a sea of data.

Getting a better understanding of data

In order to understand just how much data you need, you first need to understand how and where data fits into the structure of AI projects. One visual way of understanding the increasing levels of value we get from data is the DIKUW pyramid (sometimes also referred to as the DIKW pyramid) which shows how a foundation of data helps build greater value with Information, Knowledge, Understanding and Wisdom.

DIKW pyramid

With a solid foundation of data, you can gain additional insight at the next layer, information, which helps you answer basic questions about that data. Once you have made basic connections between pieces of data to gain informational insight, you can find patterns in that information; this is the knowledge layer, an understanding of how various pieces of information are connected. Building on the knowledge layer, organizations get even more value at the understanding layer, which explains why those patterns are happening. Finally, the wisdom layer is where you can gain the most value, providing insight into the cause and effect of information-driven decision making.

This latest wave of AI focuses most on the knowledge layer, since machine learning provides the insight on top of the information layer to identify patterns. Unfortunately, machine learning reaches its limits in the understanding layer, since finding patterns isn't sufficient for reasoning. We have machine learning, but not the machine reasoning required to understand why the patterns are happening. You can see this limitation in effect any time you interact with a chatbot. While machine learning-enabled NLP is really good at understanding your speech and deriving intent, it runs into limitations trying to understand and reason. For example, if you ask a voice assistant whether you should wear a raincoat tomorrow, it doesn't understand that you're asking about the weather. A human has to provide that insight to the machine because the voice assistant doesn't know what rain actually is.

Avoiding Failure by Staying Data Aware

Big data has taught us how to deal with large quantities of data. Not just how it's stored but how to process, manipulate, and analyze all that data. Machine learning has added more value by being able to deal with the wide range of different types of unstructured, semi-structured or structured data collected by organizations. Indeed, this latest wave of AI is really the big data-powered analytics wave.

But it's exactly for this reason that some organizations are failing so hard at AI. Rather than run AI projects with a data-centric perspective, they are focusing on the functional aspects. To get a handle on their AI projects and avoid deadly mistakes, organizations need a better understanding not only of AI and machine learning but also of the Vs of big data. It's not just about how much data you have, but also the nature of that data. Some of those Vs of big data include volume, velocity, variety and veracity.

With decades of experience managing big data projects, organizations that are successful with AI are primarily successful with big data. The ones that are seeing their AI projects die are the ones who are coming at their AI problems with application development mindsets.

Too Much of the Wrong Data, and Not Enough of the Right Data is Killing AI Projects

While AI projects often start off on the right foot, a lack of the necessary data, and a failure to understand and then solve real problems, are killing AI projects. Organizations are powering forward without actually having a real understanding of the data that they need or the quality of that data. This poses real challenges.

One of the reasons why organizations are making this data mistake is that they are running their AI projects without any real approach to doing so, other than using Agile or app dev methods. However, successful organizations have realized that data-centric approaches treat data understanding as one of the first phases of the project. The CRISP-DM methodology, which has been around for over two decades, specifies data understanding as the very next thing to do once you determine your business needs. Building on CRISP-DM and adding Agile methods, the Cognitive Project Management for AI (CPMAI) methodology requires data understanding in its Phase II. Other successful approaches likewise require data understanding early in the project, because after all, AI projects are data projects. And how can you build a successful project on a foundation of data without running your projects with an understanding of that data? That's surely a deadly mistake you want to avoid.


The ABCs of AI, algorithms and machine learning – Marketplace

Advanced computer programs influence, and can even dictate, meaningful parts of our lives. Think of streaming services, credit scores, facial recognition software.

As this technology becomes more sophisticated and more pervasive, it's important to understand the basic terminology.

People often use "algorithm," "machine learning" and "artificial intelligence" interchangeably. There is some overlap, but they're not the same things.

We decided to call up a few experts to help us get a firm grasp on these concepts, starting with a basic definition of algorithm. The following is an edited transcript of the episode.

Melanie Mitchell, Davis professor of complexity at the Santa Fe Institute, offered a simple explanation of a computer algorithm.

"An algorithm is a set of steps for solving a problem or accomplishing a goal," she said.

The next step up is machine learning, which uses algorithms.

"Rather than a person programming in the rules, the system itself has learned," Mitchell said.

For example, speech recognition software, which uses data to learn which sounds combine to become words and sentences. And this kind of machine learning is a key component of artificial intelligence.
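
A toy contrast between the two approaches, with a made-up spam example (illustrative only): the first function encodes a rule by hand, while the model below it infers a similar rule from labeled examples.

```python
# A hand-written algorithm encodes the rule directly; a machine-learning model
# infers a similar rule from labeled examples. The "spam" data is made up.
from sklearn.linear_model import LogisticRegression

def spam_rule(message: str) -> bool:           # explicit algorithm: the rule is programmed
    return "free money" in message.lower()

messages = ["Free money inside!!", "Lunch at noon?", "FREE MONEY now", "Meeting notes"]
labels = [1, 0, 1, 0]                           # 1 = spam, labeled by a person
features = [[m.lower().count("free"), m.lower().count("money")] for m in messages]

model = LogisticRegression().fit(features, labels)   # machine learning: the rule is learned
print(spam_rule("free money now"), model.predict([[1, 1], [0, 0]]))   # True, [1 0]
```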

"Artificial intelligence is basically capabilities of computers to mimic human cognitive functions," said Anjana Susarla, who teaches responsible AI at Michigan State University's Broad College of Business.

She said we should think of AI as an umbrella term.

"AI is much more broader, all-encompassing, compared to only machine learning or algorithms," Susarla said.

That's why you might hear AI used as a loose description for a range of things that show some level of intelligence, from software that examines the photos on your phone to sort out the ones with cats, to advanced spelunking robots that explore caves.

Here's another way to think of the differences among these tools: cooking.

Bethany Edmunds, professor and director of computing programs at Northeastern University, compares it to cooking.

She says an algorithm is basically a recipe: step-by-step instructions on how to prepare something to solve the problem of being hungry.

If you took the machine learning approach, you would show a computer the ingredients you have and what you want for the end result. Let's say, a cake.

"So maybe it would take every combination of every type of food and put them all together to try and replicate the cake that was provided for it," she said.

AI would turn the whole problem of being hungry over to the computer program, determining or even buying ingredients, choosing a recipe or creating a new one. Just like a human would.

So why do these distinctions matter? Well, for one thing, these tools sometimes produce results with biased outcomes.

"It's really important to be able to articulate what those concerns are," Edmunds said, "so that you can really dissect where the problem is and how we go about solving it."

Because algorithms, machine learning and AI are pretty much baked into our lives at this point.

Columbia University's engineering school has a further explanation of artificial intelligence and machine learning, and it lists other tools besides machine learning that can be part of AI. Like deep learning, neural networks, computer vision and natural language processing.

Over at the Massachusetts Institute of Technology, they point out that machine learning and AI are often used interchangeably because these days, most AI includes some amount of machine learning. A piece from MIT's Sloan School of Management also gets into the different subcategories of machine learning: supervised, unsupervised and reinforcement learning, which works like trial and error with a kind of digital reward. For example, an autonomous vehicle can be taught to drive by letting the system know when it made the right decision, like not hitting a pedestrian.

That piece also points to a 2020 survey from Deloitte, which found that 67% of companies are already using machine learning, and 97% were planning to in the future.

IBM has a helpful graphic to explain the relationship among AI, machine learning, neural networks and deep learning, presenting them as Russian nesting dolls with the broad category of AI as the biggest one.

And finally, with so many businesses using these tools, the Federal Trade Commission has a blog laying out some of the consumer risks associated with AI and the agency's expectations of how companies should deploy it.


Covision Quality joins NVIDIA Metropolis to scale its industrial visual inspection software leveraging unsupervised machine learning – GlobeNewswire

BRESSANONE, Italy, July 25, 2022 (GLOBE NEWSWIRE) -- Covision Quality, a leading provider of visual inspection software based on unsupervised machine learning technology, today announced it has joined NVIDIA Metropolis, a partner program, application framework, and set of developer tools that bring to market a new generation of vision AI applications that make the world's most important spaces and operations safer and more efficient.

Covision Quality's interface from the perspective of the end-of-line quality control operator. In this case, the red border on the image of the manufactured part indicates that the part is not OK and thus cannot be sent to the end customer and needs to be discarded.

Thanks to its unsupervised machine learning technology, the Covision Quality software can be trained in an hour on average and reduces pseudo-scrap rates by up to 90% for its customers. Workstations deployed at customer sites harness the power of NVIDIA RTX A5000 GPU-accelerated computing, which allows the software to run in real time, processing images, inspecting components, and communicating decisions to the PLC. In addition, Covision Quality leverages NVIDIA Metropolis, the TensorRT SDK, and CUDA software.
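
As a generic illustration of unsupervised visual anomaly detection (not Covision's actual software), the sketch below learns a model of "normal" parts from defect-free images and flags parts that reconstruct poorly:

```python
# Generic unsupervised anomaly-detection sketch: PCA learns the appearance of
# good parts; images that reconstruct poorly are flagged as potential defects.
# The "images" here are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
good_parts = rng.normal(0.5, 0.05, size=(200, 32 * 32))   # flattened images of good parts
pca = PCA(n_components=20).fit(good_parts)                 # model of "normal" appearance

def reconstruction_error(images):
    restored = pca.inverse_transform(pca.transform(images))
    return np.mean((images - restored) ** 2, axis=1)

threshold = np.percentile(reconstruction_error(good_parts), 99)

suspect = good_parts[:1].copy()
suspect[0, 100:140] = 1.0                                  # simulated surface defect
print(reconstruction_error(suspect) > threshold)           # [ True ] -> flag for the operator
```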

NVIDIA Metropolis makes it easier and more cost effective for enterprises, governments, and integration partners to use world-class AI-enabled solutions to improve critical operational efficiency and solve safety problems. The NVIDIA Metropolis ecosystem contains a large and growing breadth of members who are investing in the most advanced AI techniques and most efficient deployment platforms, and using an enterprise-class approach to their solutions. Members have the opportunity to gain early access to NVIDIA platform updates to further enhance and accelerate their AI application development efforts. The program also offers the opportunity for members to collaborate with industry-leading experts and other AI-driven organizations.

Covision Quality is a spin-off of Covision Lab, a leading European computer vision and machine learning application center and company builder. Covision Quality licenses its visual inspection software product to manufacturing companies in several industries, ranging from metal manufacturing to packaging. Customers of Covision Quality include GKN Sinter Metals, a global market leader for sinter metal components, and Aluflexpack Group, a leading international manufacturer of flexible packaging.

Franz Tschimben, CEO of Covision Quality, sees an important value-add in joining the NVIDIA Metropolis program: "Joining NVIDIA Metropolis marks yet another milestone in our company's young history and in our relationship with NVIDIA, which started with our company joining the NVIDIA Inception program last year. It is a testament to the great work the team is doing in providing a scalable visual inspection software product to our customers, drastically reducing time to deployment of visual inspection systems and pseudo-scrap rates. We expect that NVIDIA Metropolis, which sits at the heart of many developments that are happening in the industry today, will give us a boost in our go-to-market efforts and support us in connecting to customers and system integrators."

About Covision Quality

Covision Quality licenses its visual inspection software product to manufacturing companies in several industries, ranging from metal manufacturing to packaging. Thanks to its unsupervised machine learning technology, the Covision Quality software can be trained in an hour on average and reduces pseudo-scrap rates for its customers by up to 90%. Covision Quality is the recipient of the Cowen Startup award at Automate Show 2022 in Detroit, United States.

Covision Quality is a spin-off of Covision Lab, a leading European computer vision and machine learning application center and company builder. For more information, visit http://www.covisionquality.com

Contact information: Covision Quality, https://www.covisionquality.com/en, 39042 Bressanone, Italy, +39 333 4421494, info@covisionlab.com

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/19998b6c-83b8-41df-8e60-c5d558e3e408


Explained: How to tell if artificial intelligence is working the way we want it to – MIT News

About a decade ago, deep-learning models started achieving superhuman results on all sorts of tasks, from beating world-champion board game players to outperforming doctors at diagnosing breast cancer.

These powerful deep-learning models are usually based on artificial neural networks, which were first proposed in the 1940s and have become a popular type of machine learning. A computer learns to process data using layers of interconnected nodes, or neurons, that mimic the human brain.

As the field of machine learning has grown, artificial neural networks have grown along with it.

Deep-learning models are now often composed of millions or billions of interconnected nodes in many layers that are trained to perform detection or classification tasks using vast amounts of data. But because the models are so enormously complex, even the researchers who design them don't fully understand how they work. This makes it hard to know whether they are working correctly.

For instance, maybe a model designed to help physicians diagnose patients correctly predicted that a skin lesion was cancerous, but it did so by focusing on an unrelated mark that happens to frequently occur when there is cancerous tissue in a photo, rather than on the cancerous tissue itself. This is known as a spurious correlation. The model gets the prediction right, but it does so for the wrong reason. In a real clinical setting where the mark does not appear on cancer-positive images, it could result in missed diagnoses.

With so much uncertainty swirling around these so-called black-box models, how can one unravel what's going on inside the box?

This puzzle has led to a new and rapidly growing area of study in which researchers develop and test explanation methods (also called interpretability methods) that seek to shed some light on how black-box machine-learning models make predictions.

What are explanation methods?

At their most basic level, explanation methods are either global or local. A local explanation method focuses on explaining how the model made one specific prediction, while global explanations seek to describe the overall behavior of an entire model. This is often done by developing a separate, simpler (and hopefully understandable) model that mimics the larger, black-box model.

But because deep learning models work in fundamentally complex and nonlinear ways, developing an effective global explanation model is particularly challenging. This has led researchers to turn much of their recent focus onto local explanation methods instead, explains Yilun Zhou, a graduate student in the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) who studies models, algorithms, and evaluations in interpretable machine learning.

The most popular types of local explanation methods fall into three broad categories.

The first and most widely used type of explanation method is known as feature attribution. Feature attribution methods show which features were most important when the model made a specific decision.

Features are the input variables that are fed to a machine-learning model and used in its prediction. When the data are tabular, features are drawn from the columns in a dataset (they are transformed using a variety of techniques so the model can process the raw data). For image-processing tasks, on the other hand, every pixel in an image is a feature. If a model predicts that an X-ray image shows cancer, for instance, the feature attribution method would highlight the pixels in that specific X-ray that were most important for the model's prediction.

Essentially, feature attribution methods show what the model pays the most attention to when it makes a prediction.

Using this feature attribution explanation, you can check to see whether a spurious correlation is a concern. "For instance, it will show if the pixels in a watermark are highlighted or if the pixels in an actual tumor are highlighted," says Zhou.
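To make the idea concrete, here is a minimal sketch of one simple local feature-attribution technique (occlusion, or "replace with a baseline"): for a single input, each feature is swapped for a reference value and the resulting shift in the predicted probability is recorded. The dataset, model, and baseline choice are illustrative assumptions, not the specific methods discussed in the article.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0:1]                      # one prediction we want to explain
baseline = X.mean(axis=0)       # "uninformative" reference values
p_original = model.predict_proba(x)[0, 1]

attributions = []
for j in range(X.shape[1]):
    x_perturbed = x.copy()
    x_perturbed[0, j] = baseline[j]                 # occlude feature j
    p_perturbed = model.predict_proba(x_perturbed)[0, 1]
    attributions.append(p_original - p_perturbed)   # how much feature j moved the score

top = np.argsort(np.abs(attributions))[::-1][:5]
for j in top:
    print(f"feature {j}: attribution {attributions[j]:+.3f}")
```

Popular methods such as SHAP, LIME, or gradient-based saliency maps refine this basic perturb-and-measure idea, but the output is the same in spirit: a ranking of which inputs mattered most for one prediction.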

A second type of explanation method is known as a counterfactual explanation. Given an input and a model's prediction, these methods show how to change that input so it falls into another class. For instance, if a machine-learning model predicts that a borrower would be denied a loan, the counterfactual explanation shows what factors need to change so her loan application is accepted. Perhaps her credit score or income, both features used in the model's prediction, need to be higher for her to be approved.

"The good thing about this explanation method is it tells you exactly how you need to change the input to flip the decision, which could have practical usage. For someone who is applying for a mortgage and didn't get it, this explanation would tell them what they need to do to achieve their desired outcome," he says.
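A minimal sketch of a counterfactual search, under simplifying assumptions: a synthetic "loan" dataset with hypothetical features, a logistic-regression scorer, and a greedy loop that nudges one mutable feature at a time until the decision flips. Real counterfactual methods add constraints such as plausibility, sparsity, and immutability of certain attributes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# columns: [credit_score, income_k, debt_k]  -- hypothetical features
X = rng.normal([650, 60, 20], [80, 25, 10], size=(500, 3))
y = (0.01 * X[:, 0] + 0.03 * X[:, 1] - 0.05 * X[:, 2]
     + rng.normal(0, 0.5, 500) > 7.5).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([[580.0, 45.0, 30.0]])   # currently denied
steps = {0: +10.0, 1: +2.0, 2: -2.0}          # allowed per-iteration changes

x_cf = applicant.copy()
for _ in range(200):
    if model.predict(x_cf)[0] == 1:           # decision flipped to "approve"
        break
    # greedily apply the single step that raises the approval probability most
    gains = []
    for j, delta in steps.items():
        trial = x_cf.copy()
        trial[0, j] += delta
        gains.append((model.predict_proba(trial)[0, 1], j, delta))
    _, j, delta = max(gains)
    x_cf[0, j] += delta

print("original:", applicant[0], "counterfactual:", x_cf[0])
```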

The third category of explanation methods is known as sample importance explanations. Unlike the others, this method requires access to the data that were used to train the model.

A sample importance explanation will show which training sample a model relied on most when it made a specific prediction; ideally, this is the most similar sample to the input data. This type of explanation is particularly useful if one observes a seemingly irrational prediction. There may have been a data entry error that affected a particular sample that was used to train the model. With this knowledge, one could fix that sample and retrain the model to improve its accuracy.
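A rough proxy for sample-importance explanations, under stated assumptions: instead of true influence functions, this sketch simply retrieves the training examples closest to the query in input space, which is one cheap way to inspect which training data a prediction most resembles. The dataset and model are illustrative stand-ins.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import euclidean_distances

X, y = load_digits(return_X_y=True)
X_train, y_train, x_query = X[:1500], y[:1500], X[1500:1501]

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("prediction for query:", model.predict(x_query)[0])

# rank training samples by proximity to the query
dists = euclidean_distances(x_query, X_train)[0]
nearest = np.argsort(dists)[:3]
for idx in nearest:
    print(f"training sample {idx}: label={y_train[idx]}, distance={dists[idx]:.1f}")
```

If one of the retrieved samples turns out to be mislabeled or corrupted, as described above, it can be fixed and the model retrained.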

How are explanation methods used?

One motivation for developing these explanations is to perform quality assurance and debug the model. With more understanding of how features impact a model's decision, for instance, one could identify that a model is working incorrectly and intervene to fix the problem, or toss the model out and start over.

Another, more recent, area of research is exploring the use of machine-learning models to discover scientific patterns that humans haven't uncovered before. For instance, a cancer-diagnosing model that outperforms clinicians could be faulty, or it could actually be picking up on some hidden patterns in an X-ray image that represent an early pathological pathway for cancer that were either unknown to human doctors or thought to be irrelevant, Zhou says.

It's still very early days for that area of research, however.

Words of warning

While explanation methods can sometimes be useful for machine-learning practitioners when they are trying to catch bugs in their models or understand the inner workings of a system, end users should proceed with caution when trying to use them in practice, says Marzyeh Ghassemi, an assistant professor and head of the Healthy ML Group in CSAIL.

As machine learning has been adopted in more disciplines, from health care to education, explanation methods are being used to help decision makers better understand a model's predictions so they know when to trust the model and use its guidance in practice. But Ghassemi warns against using these methods in that way.

"We have found that explanations make people, both experts and nonexperts, overconfident in the ability or the advice of a specific recommendation system. I think it is very important for humans not to turn off that internal circuitry asking, 'let me question the advice that I am given,'" she says.

Scientists know explanations make people over-confident based on other recent work, she adds, citing some recent studies by Microsoft researchers.

Far from a silver bullet, explanation methods have their share of problems. For one, Ghassemi's recent research has shown that explanation methods can perpetuate biases and lead to worse outcomes for people from disadvantaged groups.

Another pitfall of explanation methods is that it is often impossible to tell if the explanation method is correct in the first place. "One would need to compare the explanations to the actual model, but since the user doesn't know how the model works, this is circular logic," Zhou says.

He and other researchers are working on improving explanation methods so they are more faithful to the actual model's predictions, but Zhou cautions that even the best explanation "should be taken with a grain of salt."

"In addition, people generally perceive these models to be human-like decision makers, and we are prone to overgeneralization. We need to calm people down and hold them back to really make sure that the generalized model understanding they build from these local explanations is balanced," he adds.

Zhou's most recent research seeks to do just that.

What's next for machine-learning explanation methods?

Rather than focusing on providing explanations, Ghassemi argues that more work needs to be done by the research community to study how information is presented to decision makers so they understand it, and more regulation needs to be put in place to ensure machine-learning models are used responsibly in practice. Better explanation methods alone aren't the answer.

"I have been excited to see that there is a lot more recognition, even in industry, that we can't just take this information and make a pretty dashboard and assume people will perform better with that. You need to have measurable improvements in action, and I'm hoping that leads to real guidelines about improving the way we display information in these deeply technical fields, like medicine," she says.

And in addition to new work focused on improving explanations, Zhou expects to see more research related to explanation methods for specific use cases, such as model debugging, scientific discovery, fairness auditing, and safety assurance. By identifying fine-grained characteristics of explanation methods and the requirements of different use cases, researchers could establish a theory that would match explanations with specific scenarios, which could help overcome some of the pitfalls that come from using them in real-world scenarios.

Link:
Explained: How to tell if artificial intelligence is working the way we want it to - MIT News

Global Machine Learning Market is Expected to Grow at a CAGR of 39.2 % by 2028 – Digital Journal

According to the latest research by SkyQuest Technology, the global machine learning market was valued at US$ 16.2 billion in 2021, and it is expected to reach a market size of US$ 164.05 billion by 2028, at a CAGR of 39.2% over the forecast period 2022-2028. The research provides an up-to-date analysis of the current machine learning market landscape, latest trends, drivers, and overall market environment.

Machine learning (ML), a type of artificial intelligence (AI), lets software systems forecast outcomes more accurately without being explicitly programmed to do so; ML algorithms use historical data as input to anticipate new output values. As organizations adopt more advanced security frameworks, the global machine learning market is anticipated to grow, with machine learning becoming a prominent trend in security analytics. Due to the massive amount of data being generated and communicated over several networks, cyber professionals struggle considerably to identify and assess potential cyber threats and attacks.

Machine-learning algorithms can assist businesses and security teams in anticipating, detecting, and recognising cyber-attacks more quickly as these risks become more widespread and sophisticated. For example, supply chain attacks increased by 42% in the first quarter of 2021 in the US, affecting up to 7,000,000 people. Separately, AT&T and IBM claim that the promise of edge computing and 5G wireless networking for the digital revolution will be proven: they have created virtual environments that, when paired with IBM hybrid cloud and AI technologies, allow business clients to truly experience the possibilities of an AT&T connection.

Computer vision is a cutting-edge technique that combines machine learning and deep learning for medical imaging diagnosis. It has been adopted by the Microsoft InnerEye programme, which focuses on diagnostic tools for image analysis. In another example, using small samples of linguistic data obtained via clinical verbal cognition tests, an AI model created by a team of researchers from IBM and Pfizer can forecast the eventual onset of Alzheimer's disease in healthy people with 71 percent accuracy.

Read the market research report: Global Machine Learning Market by Component (Solutions and Services), Enterprise Size (SMEs and Large Enterprises), Deployment (Cloud, On-Premise), End-User [Healthcare, Retail, IT and Telecommunications, Banking, Financial Services and Insurance (BFSI), Automotive & Transportation, Advertising & Media, Manufacturing, Others (Energy & Utilities, etc.)], and Region, Forecast and Analysis 2022-2028, by SkyQuest

Get Sample PDF : https://skyquestt.com/sample-request/machine-learning-market

The large enterprises segment dominated the machine learning market in 2021. This is because data science and artificial intelligence technologies are being used more often to incorporate quantitative insights into business operations. For instance, under a contract between Pitney Bowes and IBM, IBM will offer managed infrastructure, IT automation, and machine learning services to help Pitney Bowes convert and adopt hybrid cloud computing to support its global business strategy and goals.

Small and midsized firms are expected to grow considerably throughout the forecast period. It is projected that AI and ML will be the main technologies allowing SMEs to reduce ICT investments and access digital resources. For instance, the IPwe Platform, IPwe Registry, and Global Patent Marketplace are reportedly already in use by small- and medium-sized enterprises (SMEs) and other organizations adopting IPwe's technology.

The healthcare sector had the biggest share of the global machine learning market in 2021, owing to the industry's leading market players pursuing rapid research and development, as well as the partnerships formed in an effort to increase market share. For instance, under a signed definitive agreement between the two businesses, Francisco Partners would buy IBM's healthcare data and analytics assets that are presently part of the Watson Health business. Francisco Partners is an established worldwide investment company focused on working with IT companies; it acquired a wide range of assets, including Health Insights, MarketScan, Clinical Development, Social Program Management, Micromedex, and imaging software services.

The prominent market players are constantly adopting various innovation and growth strategies to capture more market share. The key market players are IBM Corporation, SAP SE, Oracle Corporation, Hewlett Packard Enterprise Company, Microsoft Corporation, Amazon Inc., Intel Corporation, Fair Isaac Corporation, SAS Institute Inc., BigML, Inc., among others.

The report published by SkyQuest Technology Consulting provides in-depth qualitative insights, historical data, and verifiable projections about Machine Learning Market Revenue. The projections featured in the report have been derived using proven research methodologies and assumptions.

Speak With Our Analyst : https://skyquestt.com/speak-with-analyst/machine-learning-market

Report Findings

What does this Report Deliver?

SkyQuest has Segmented the Global Machine Learning Market based on Component, Enterprise Size, Deployment, End-User, and Region:

Read Full Report : https://skyquestt.com/report/machine-learning-market

Key Players in the Global Machine Learning Market

About Us: SkyQuest Technology Group is a Global Market Intelligence, Innovation Management & Commercialization organization that connects innovation to new markets, networks & collaborators for achieving Sustainable Development Goals.

Find Insightful Blogs/Case Studies on Our Website: Market Research Case Studies

Go here to see the original:
Global Machine Learning Market is Expected to Grow at a CAGR of 39.2 % by 2028 - Digital Journal

Google Is Selling Advanced AI to Israel, Documents Reveal – The Intercept

Training materials reviewed by The Intercept confirm that Google is offering advanced artificial intelligence and machine-learning capabilities to the Israeli government through its controversial Project Nimbus contract. The Israeli Finance Ministry announced the contract in April 2021 for a $1.2 billion cloud computing system jointly built by Google and Amazon. The project is intended to provide the government, the defense establishment and others with "an all-encompassing cloud solution," the ministry said in its announcement.

Google engineers have spent the time since worrying whether their efforts would inadvertently bolster the ongoing Israeli military occupation of Palestine. In 2021, both Human Rights Watch and Amnesty International formally accused Israel of committing crimes against humanity by maintaining an apartheid system against Palestinians. While the Israeli military and security services already rely on a sophisticated system of computerized surveillance, the sophistication of Googles data analysis offerings could worsen the increasingly data-driven military occupation.

According to a trove of training documents and videos obtained by The Intercept through a publicly accessible educational portal intended for Nimbus users, Google is providing the Israeli government with the full suite of machine-learning and AI tools available through Google Cloud Platform. While they provide no specifics as to how Nimbus will be used, the documents indicate that the new cloud would give Israel capabilities for facial detection, automated image categorization, object tracking, and even sentiment analysis that claims to assess the emotional content of pictures, speech, and writing. The Nimbus materials referenced agency-specific trainings available to government personnel through the online learning service Coursera, citing the Ministry of Defense as an example.

A slide presented to Nimbus users illustrating Google image recognition technology.

Credit: Google

"The former head of security for Google Enterprise, who now heads Oracle's Israel branch, has publicly argued that one of the goals of Nimbus is preventing the German government from requesting data relating to the Israel Defence Forces for the International Criminal Court," said Poulson, who resigned in protest from his job as a research scientist at Google in 2018, in a message. "Given Human Rights Watch's conclusion that the Israeli government is committing crimes against humanity of apartheid and persecution against Palestinians, it is critical that Google and Amazon's AI surveillance support to the IDF be documented to the fullest."

Though some of the documents bear a hybridized symbol of the Google logo and Israeli flag, for the most part they are not unique to Nimbus. Rather, the documents appear to be standard educational materials distributed to Google Cloud customers and presented in prior training contexts elsewhere.

Google did not respond to a request for comment.

The documents obtained by The Intercept detail for the first time the Google Cloud features provided through the Nimbus contract. With virtually nothing publicly disclosed about Nimbus beyond its existence, the system's specific functionality had remained a mystery even to most of those working at the company that built it. In 2020, citing the same AI tools, U.S. Customs and Border Protection tapped Google Cloud to process imagery from its network of border surveillance towers.

Many of the capabilities outlined in the documents obtained by The Intercept could easily augment Israel's ability to surveil people and process vast stores of data, both already prominent features of the Israeli occupation.

"Data collection over the entire Palestinian population was and is an integral part of the occupation," Ori Givati of Breaking the Silence, an anti-occupation advocacy group of Israeli military veterans, told The Intercept in an email. "Generally, the different technological developments we are seeing in the Occupied Territories all direct to one central element, which is more control."

The Israeli security state has for decades benefited from the country's thriving research and development sector, and its interest in using AI to police and control Palestinians isn't hypothetical. In 2021, the Washington Post reported on the existence of Blue Wolf, a secret military program aimed at monitoring Palestinians through a network of facial recognition-enabled smartphones and cameras.

"Living under a surveillance state for years taught us that all the collected information in the Israeli/Palestinian context could be securitized and militarized," said Mona Shtaya, a Palestinian digital rights advocate at 7amleh - The Arab Center for Social Media Advancement, in a message. "Image recognition, facial recognition, emotional analysis, among other things, will increase the power of the surveillance state to violate Palestinian right to privacy and to serve their main goal, which is to create the panopticon feeling among Palestinians that we are being watched all the time, which would make the Palestinian population control easier."

The educational materials obtained by The Intercept show that Google briefed the Israeli government on using what's known as sentiment detection, an increasingly controversial and discredited form of machine learning. Google claims that its systems can discern inner feelings from one's face and statements, a technique commonly rejected as invasive and pseudoscientific, regarded as being little better than phrenology. In June, Microsoft announced that it would no longer offer emotion-detection features through its Azure cloud computing platform, a technology suite comparable to what Google provides with Nimbus, citing the lack of scientific basis.

Google does not appear to share Microsoft's concerns. One Nimbus presentation touted the "faces, facial landmarks, emotions" detection capabilities of Google's Cloud Vision API, an image analysis toolset. The presentation then offered a demonstration using the enormous grinning face sculpture at the entrance of Sydney's Luna Park. An included screenshot of the feature ostensibly in action indicates that the massive smiling grin is "very unlikely" to exhibit any of the example emotions. And Google was only able to assess that the famous amusement park is an amusement park with 64 percent certainty, while it guessed that the landmark was a "place of worship" or "Hindu temple" with 83 percent and 74 percent confidence, respectively.

A slide presented to Nimbus users illustrating Google AI's ability to detect image traits.

Credit: Google
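The capabilities named in the slide correspond to calls in Google's publicly documented Cloud Vision client library. A minimal sketch, assuming the google-cloud-vision Python package and valid credentials are available; the image path is hypothetical, and this is not the Nimbus training material itself.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("luna_park.jpg", "rb") as f:          # hypothetical local image
    image = vision.Image(content=f.read())

# face detection returns coarse emotion "likelihoods" per detected face
faces = client.face_detection(image=image).face_annotations
for face in faces:
    print("joy:", face.joy_likelihood, "anger:", face.anger_likelihood,
          "sorrow:", face.sorrow_likelihood, "surprise:", face.surprise_likelihood)

# label detection returns scene/object guesses with confidence scores
labels = client.label_detection(image=image).label_annotations
for label in labels[:5]:
    print(f"{label.description}: {label.score:.0%}")
```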

"Vision API is a primary concern to me because it's so useful for surveillance," said one worker, who explained that the image analysis would be a natural fit for military and security applications. "Object recognition is useful for targeting, it's useful for data analysis and data labeling. An AI can comb through collected surveillance feeds in a way a human cannot to find specific people and to identify people, with some error, who look like someone. That's why these systems are really dangerous."

A slide presented to Nimbus users outlining various AI features available through the company's Cloud Vision API.

Credit: Google

Training an effective model from scratch is often resource intensive, both financially and computationally. This is not so much of a problem for a world-spanning company like Google, with an unfathomable volume of both money and computing hardware at the ready. Part of Google's appeal to customers is the option of using a pre-trained model, essentially getting this prediction-making education out of the way and letting customers access a well-trained program that's benefited from the company's limitless resources.

Custom models generated through AutoML, one presentation noted, can be downloaded for "offline edge use," unplugged from the cloud and deployed in the field.

That Nimbus lets Google clients use advanced data analysis and prediction in places and ways that Google has no visibility into creates a risk of abuse, according to Liz O'Sullivan, CEO of the AI auditing startup Parity and a member of the U.S. National Artificial Intelligence Advisory Committee. "Countries can absolutely use AutoML to deploy shoddy surveillance systems that only seem like they work," O'Sullivan said in a message. "On edge, it's even worse: think bodycams, traffic cameras, even a handheld device like a phone can become a surveillance machine, and Google may not even know it's happening."

In one Nimbus webinar reviewed by The Intercept, the potential use and misuse of AutoML was exemplified in a Q&A session following a presentation. An unnamed member of the audience asked the Google Cloud engineers present on the call if it would be possible to process data through Nimbus in order to determine if someone is lying.

"I'm a bit scared to answer that question," said the engineer conducting the seminar, in an apparent joke. "In principle: Yes. I will expand on it, but the short answer is yes." Another Google representative then jumped in: "It is possible, assuming that you have the right data, to use the Google infrastructure to train a model to identify how likely it is that a certain person is lying, given the sound of their own voice." Noting that such a capability would take a tremendous amount of data for the model, the second presenter added that one of the advantages of Nimbus is the ability to tap into Google's vast computing power to train such a model.

A broad body of research, however, has shown that the very notion of a lie detector, whether the simple polygraph or AI-based analysis of vocal changes or facial cues, is junk science. While Google's reps appeared confident that the company could make such a thing possible through sheer computing power, experts in the field say that any attempts to use computers to assess things as profound and intangible as truth and emotion are faulty to the point of danger.

One Google worker who reviewed the documents said they were concerned that the company would even hint at such a scientifically dubious technique. "The answer should have been no, because that does not exist," the worker said. "It seems like it was meant to promote Google technology as powerful, and it's ultimately really irresponsible to say that when it's not possible."

Andrew McStay, a professor of digital media at Bangor University in Wales and head of the Emotional AI Lab, told The Intercept that the lie detector Q&A exchange was disturbing, as is Google's willingness to pitch pseudoscientific AI tools to a national government. "It is [a] wildly divergent field, so any technology built on this is going to automate unreliability," he said. "Again, those subjected to them will suffer, but I'd be very skeptical for the citizens it is meant to protect that these systems can do what is claimed."

According to some critics, whether these tools work might be of secondary importance to a company like Google that is eager to tap the ever-lucrative flow of military contract money. Governmental customers, too, may be willing to suspend disbelief when it comes to promises of vast new techno-powers. "It's extremely telling that in the webinar PDF they constantly referred to this as 'magical AI goodness,'" said Jathan Sadowski, a scholar of automation technologies and research fellow at Monash University, in an interview with The Intercept. "It shows that they're bullshitting."

Google CEO Sundar Pichai speaks at the Google I/O conference in Mountain View, Calif. Google pledges that it will not use artificial intelligence in applications related to weapons or surveillance, part of a new set of principles designed to govern how it uses AI. Those principles, released by Pichai, commit Google to building AI applications that are socially beneficial, that avoid creating or reinforcing bias and that are accountable to people.

Photo: Jeff Chiu/AP

Israel, though, has set up its relationship with Google to shield it from both the company's principles and any outside scrutiny. Perhaps fearing the fate of the Pentagon's Project Maven, a Google AI contract felled by intense employee protests, the data centers that power Nimbus will reside on Israeli territory, subject to Israeli law and insulated from political pressures. Last year, the Times of Israel reported that Google would be contractually barred from shutting down Nimbus services or denying access to a particular government office, even in response to boycott campaigns.

Google employees interviewed by The Intercept lamented that the company's AI principles are at best a superficial gesture. "I don't believe it's hugely meaningful," one employee told The Intercept, explaining that the company has interpreted its AI charter so narrowly that it doesn't apply to companies or governments that buy Google Cloud services. Asked how the AI principles are compatible with the company's Pentagon work, a Google spokesperson told Defense One, "It means that our technology can be used fairly broadly by the military."

Moreover, this employee added that Google lacks both the ability to tell if its principles are being violated and any means of thwarting violations. "Once Google offers these services, we have no technical capacity to monitor what our customers are doing with these services," the employee said. "They could be doing anything." Another Google worker told The Intercept, "At a time when already vulnerable populations are facing unprecedented and escalating levels of repression, Google is backsliding on its commitments to protect people from this kind of misuse of our technology. I am truly afraid for the future of Google and the world."

Ariel Koren, a Google employee who claimed earlier this year that she faced retaliation for raising concerns about Nimbus, said the company's internal silence on the program continues. "I am deeply concerned that Google has not provided us with any details at all about the scope of the Project Nimbus contract, let alone assuage my concerns of how Google can provide technology to the Israeli government and military (both committing grave human rights abuses against Palestinians daily) while upholding the ethical commitments the company has made to its employees and the public," she told The Intercept in an email. "I joined Google to promote technology that brings communities together and improves people's lives, not service a government accused of the crime of apartheid by the world's two leading human rights organizations."

Sprawling tech companies have published ethical AI charters to rebut critics who say that their increasingly powerful products are sold unchecked and unsupervised. The same critics often counter that the documents are a form of "ethics washing": essentially toothless self-regulatory pledges that provide only the appearance of scruples, pointing to examples like the provisions in Israel's contract with Google that prevent the company from shutting down its products. "The way that Israel is locking in their service providers through this tender and this contract," said Sadowski, the Monash University scholar, "I do feel like that is a real innovation in technology procurement."

To Sadowski, it matters little whether Google believes what it peddles about AI or any other technology. What the company is selling, ultimately, isn't just software, but power. And whether it's Israel and the U.S. today or another government tomorrow, Sadowski says that some technologies amplify the exercise of power to such an extent that even their use by a country with a spotless human rights record would provide little reassurance. "Give them these technologies, and see if they don't get tempted to use them in really evil and awful ways," he said. "These are not technologies that are just neutral intelligence systems, these are technologies that are ultimately about surveillance, analysis, and control."

Read more here:
Google Is Selling Advanced AI to Israel, Documents Reveal - The Intercept

Biologists train AI to generate medicines and vaccines – UW Medicine Newsroom

Scientists have developed artificial intelligence software that can create proteins that may be useful as vaccines, cancer treatments, or even tools for pulling carbon pollution out of the air.

This research, reported today in the journal Science, was led by the University of Washington School of Medicine and Harvard University. The article is titled "Scaffolding protein functional sites using deep learning."

"The proteins we find in nature are amazing molecules, but designed proteins can do so much more," said senior author David Baker, an HHMI Investigator and professor of biochemistry at UW Medicine. "In this work, we show that machine learning can be used to design proteins with a wide variety of functions."

For decades, scientists have used computers to try to engineer proteins. Some proteins, such as antibodies and synthetic binding proteins, have been adapted into medicines to combat COVID-19. Others, such as enzymes, aid in industrial manufacturing. But a single protein molecule often contains thousands of bonded atoms; even with specialized scientific software, they are difficult to study and engineer.

Inspired by how machine learning algorithms can generate stories or even images from prompts, the team set out to build similar software for designing new proteins. "The idea is the same: neural networks can be trained to see patterns in data. Once trained, you can give it a prompt and see if it can generate an elegant solution. Often the results are compelling or even beautiful," said lead author Joseph Watson, a postdoctoral scholar at UW Medicine.

The team trained multiple neural networks using information from the Protein Data Bank, which is a public repository of hundreds of thousands of protein structures from across all kingdoms of life. The neural networks that resulted have surprised even the scientists who created them.

The team developed two approaches for designing proteins with new functions. The first, dubbed "hallucination," is akin to DALL-E or other generative A.I. tools that produce new output based on simple prompts. The second, dubbed "inpainting," is analogous to the autocomplete feature found in modern search bars and email clients.

"Most people can come up with new images of cats or write a paragraph from a prompt if asked, but with protein design, the human brain cannot do what computers now can," said lead author Jue Wang, a postdoctoral scholar at UW Medicine. "Humans just cannot imagine what the solution might look like, but we have set up machines that do."

To explain how the neural networks hallucinate a new protein, the team compares it to how it might write a book: "You start with a random assortment of words, total gibberish. Then you impose a requirement, such as that in the opening paragraph, it needs to be a dark and stormy night. Then the computer will change the words one at a time and ask itself, 'Does this make my story make more sense?' If it does, it keeps the changes until a complete story is written," explains Wang.

Both books and proteins can be understood as long sequences of letters. In the case of proteins, each letter corresponds to a chemical building block called an amino acid. Beginning with a random chain of amino acids, the software mutates the sequence over and over until a final sequence that encodes the desired function is generated. These final amino acid sequences encode proteins that can then be manufactured and studied in the laboratory.
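The optimization pattern described above can be illustrated with a toy analogue: start from a random amino-acid sequence, propose single-letter mutations, and keep a mutation whenever a scoring function improves. In the real pipeline the score comes from a deep structure-prediction network; here the score is a stand-in (matches to a hypothetical target motif) purely to show the loop, not the team's actual method.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TARGET_MOTIF = "MKTAYIAKQR"            # hypothetical desired functional site

def score(seq: str) -> int:
    # stand-in for a learned model's assessment of the candidate design
    return sum(a == b for a, b in zip(seq, TARGET_MOTIF))

random.seed(0)
seq = "".join(random.choice(AMINO_ACIDS) for _ in range(len(TARGET_MOTIF)))

for step in range(2000):
    pos = random.randrange(len(seq))
    mutant = seq[:pos] + random.choice(AMINO_ACIDS) + seq[pos + 1:]
    if score(mutant) >= score(seq):    # keep changes that don't hurt the score
        seq = mutant
    if score(seq) == len(TARGET_MOTIF):
        break

print(f"final sequence after {step + 1} steps: {seq} (score {score(seq)})")
```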

The team also showed that neural networks can fill in missing pieces of a protein structure in only a few seconds. Such software could aid in the development of new medicines.

"With autocomplete, or protein inpainting, we start with the key features we want to see in a new protein, then let the software come up with the rest. Those features can be known binding motifs or even enzyme active sites," explains Watson.

Laboratory testing revealed that many proteins generated through hallucination and inpainting functioned as intended. This included novel proteins that can bind metals as well as those that bind the anti-cancer receptor PD-1.

The new neural networks can generate several different kinds of proteins in as little as one second. Some include potential vaccines for the deadly respiratory syncytial virus, or RSV.

All vaccines work by presenting a piece of a pathogen to the immune system. Scientists often know which piece would work best, but creating a vaccine that achieves a desired molecular shape can be challenging. Using the new neural networks, the team prompted a computer to create new proteins that included the necessary pathogen fragment as part of their final structure. The software was free to create any supporting structures around the key fragment, yielding several potential vaccines with diverse molecular shapes.

When tested in the lab, the team found that known antibodies against RSV stuck to three of their hallucinated proteins. This confirms that the new proteins adopted their intended shapes and suggests they may be viable vaccine candidates that could prompt the body to generate its own highly specific antibodies. Additional testing, including in animals, is still needed.

"I started working on the vaccine stuff just as a way to test our new methods, but in the middle of working on the project, my two-year-old son got infected by RSV and spent an evening in the ER to have his lungs cleared. It made me realize that even the test problems we were working on were actually quite meaningful," said Wang.

"These are very powerful new approaches, but there is still much room for improvement," said Baker, who was a recipient of the 2021 Breakthrough Prize in Life Sciences. "Designing high-activity enzymes, for example, is still very challenging. But every month our methods just keep getting better! Deep learning transformed protein structure prediction in the past two years; we are now in the midst of a similar transformation of protein design."

This project was led by Jue Wang, Doug Tischer, and Joseph L. Watson, who are postdoctoral scholars at UW Medicine, as well as Sidney Lisanza and David Juergens, who are graduate students at UW Medicine. Senior authors include Sergey Ovchinnikov, a John Harvard Distinguished Science Fellow at Harvard University, and David Baker, professor of biochemistry at UW Medicine.

Compute resources for this work were donated by Microsoft and Amazon Web Services.

Funding was provided by the Audacious Project at the Institute for Protein Design; Microsoft; Eric and Wendy Schmidt by recommendation of the Schmidt Futures; the DARPA Synergistic Discovery and Design project (HR001117S0003 contract FA8750-17-C-0219); the DARPA Harnessing Enzymatic Activity for Lifesaving Remedies project (HR001120S0052 contract HR0011-21-2-0012); the Washington Research Foundation; the Open Philanthropy Project Improving Protein Design Fund; Amgen; the Human Frontier Science Program Cross Disciplinary Fellowship (LT000395/2020-C) and EMBO Non-Stipendiary Fellowship (ALTF 1047-2019); the EMBO Fellowship (ALTF 191-2021); the European Molecular Biology Organization (ALTF 139-2018); the la Caixa Foundation; the National Institute of Allergy and Infectious Diseases (HHSN272201700059C); the National Institutes of Health (DP5OD026389); the National Science Foundation (MCB 2032259); the Howard Hughes Medical Institute; the National Institute on Aging (5U19AG065156); the National Cancer Institute (R01CA240339); the Swiss National Science Foundation; the Swiss National Center of Competence for Molecular Systems Engineering; the Swiss National Center of Competence in Chemical Biology; and the European Research Council (716058).

Written by Ian Haydon, UW Medicine Institute for Protein Design

Read more:
Biologists train AI to generate medicines and vaccines - UW Medicine Newsroom

Very Slow Movie Player Avoids E-Ink Ghosting With Machine Learning – Hackaday

[mat kelcey] was so impressed and inspired by the concept of a very slow movie player (which is the playing of a movie at a slow rate on a kind of DIY photo frame) that he created his own with a high-resolution e-ink display. It shows high definition frames from Alien (1979) at a rate of about one frame every 200 seconds, but a surprising amount of work went into getting a color film intended to look good on a movie screen also look good when displayed on black & white e-ink.

The usual way to display images on a screen that is limited to black or white pixels is dithering, or manipulating relative densities of white and black to give the impression of a much richer image than one might otherwise expect. By itself, a dithering algorithm isn't a cure-all, and [mat] does an excellent job of explaining why, complete with loads of visual examples.

One consideration is the e-ink display itself. With these displays, changing the screen contents is where all the work happens, and it can be a visually imperfect process when it does. A very slow movie player aims to present each frame as cleanly as possible in an artful and stylish way, so rewriting the entire screen for every frame would mean uglier transitions, and that just wouldn't do.

So the overall challenge [mat] faced was twofold: how to dither a frame in a way that looks great while also minimizing the number of pixels changed from the previous frame? All of a sudden, he had an interesting problem to solve and chose to solve it in an interesting way: training a GAN to generate the dithers, aiming to balance best image quality with minimal pixel change from the previous frame. The results do a great job of delivering quality visuals even when there are sharp changes in scene contrast to deal with. Curious about the code? Here's the GitHub repository.
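To see why a plain dither struggles with this tradeoff, here is a minimal sketch (not [mat]'s GAN) of classic Floyd-Steinberg error diffusion applied to two slightly different synthetic frames, followed by a count of how many e-ink pixels would have to flip between them. The frames are random placeholders purely for illustration.

```python
import numpy as np

def floyd_steinberg(gray: np.ndarray) -> np.ndarray:
    """Classic error-diffusion dither: gray values in [0, 1] -> binary {0, 1}."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new                       # diffuse quantization error forward
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

# Two consecutive (synthetic) frames; the quantity the project tries to keep
# small is the fraction of pixels that must flip between their dithers.
rng = np.random.default_rng(0)
frame_a = rng.random((120, 160))
frame_b = np.clip(frame_a + rng.normal(0, 0.02, frame_a.shape), 0, 1)

dither_a, dither_b = floyd_steinberg(frame_a), floyd_steinberg(frame_b)
print(f"pixels flipped between frames: {np.mean(dither_a != dither_b):.1%}")
```

Because error diffusion propagates tiny input changes across the whole image, even a near-identical next frame can flip a large share of pixels, which is exactly the ghosting-prone behavior the learned dither is trained to avoid.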

Here's the original Very Slow Movie Player that so inspired [mat], and here's a color version that helps make every frame a work of art. And as for dithering? It's been around for ages, but that doesn't mean there aren't new problems to solve in that space. For example, making dithering look good in the game Return of the Obra Dinn required a custom algorithm.

View original post here:
Very Slow Movie Player Avoids E-Ink Ghosting With Machine Learning - Hackaday

Machine learning innovation among power industry companies dropped off in the last quarter – Power Technology

Research and innovation in machine learning in the power industry operations and technologies sector has declined in the last quarter but remains higher than it was a year ago.

The most recent figures show that the number of related patent applications in the industry stood at 108 in the three months ending March, up from 103 over the same period in 2021.

Figures for patent grants related to machine learning followed a similar pattern to filings, growing from 15 in the three months ending March 2021 to 19 in the same period in 2022.

The figures are compiled by GlobalData, which tracks patent filings and grants from official offices around the world. Using textual analysis, as well as official patent classifications, these patents are grouped into key thematic areas and linked to key companies across various industries.

Machine learning is one of the key areas tracked by GlobalData. It has been identified as being a key disruptive force facing companies in the coming years, and is one of the areas that companies investing resources in now are expected to reap rewards from. The figures also provide an insight into the largest innovators in the sector.

Siemens was the top innovator in the power industry operations and technologies sector in the latest quarter. The company, which has its headquarters in Germany, filed 83 related patents in the three months ending March. That was up from 77 over the same period in 2021.

It was followed by the Switzerland-based ABB with 11 patent applications, South Korea-based Korea Electric Power Corp (9 applications), and the US-based Honeywell International Inc (9 applications).

ABB has recently ramped up R&D in machine learning. It saw growth of 36.4% in related patent applications in the three months ending March compared to the same period in 2021, the highest percentage growth out of all companies tracked with more than 10 quarterly patents in the power industry operations and technologies sector.

Excerpt from:
Machine learning innovation among power industry companies dropped off in the last quarter - Power Technology

Artificial Intelligence Computing Software Market Analysis Report 2022: Complete Information of the AI-related Processors Specifications and…

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence Computing Software: Market Analysis" report has been added to ResearchAndMarkets.com's offering.

The market is predicted to grow from $6.9B in 2021 to $37.6B in 2026 and may become a new sector of the economy.

This research contains complete information on the specifications and capabilities of the AI-related processors produced by the key market players and start-ups.

This comprehensive analysis can aid you in your technology acquisitions or investment decisions related to the fast-growing AI processors market.

After the main breakthrough at the turn of the century AI started to incorporate more and more artificial neural networks, connected in an ever-growing number of layers, now known as Deep Learning (DL). They can compete and outperform classical ML techniques like clustering but are more flexible and can work with much more complex datasets, including images and audio.

As machine learning entered exponential growth, it expanded into areas usually dominated by high-performance computing - such as protein folding and many-particle interactions. At the same time, our lives become increasingly dependent on its availability and reliability. This poses a number of new technical challenges but at the same time opens a road to novel solutions and technologies, in a similar way as space exploration or fundamental physics does.

More so, the commercial success of AI-enabled systems (autopilots, image processing, speech recognition and translation, to name just a few) ensures that no shortage of funds could hinder this growth. It has clearly become a new industry, if not a sector of the economy, one that is gaining importance with every passing year.

Like any industry, it depends on several factors to prosper. Rising consumer demand has led major forecasters to a consensus on the sector's rapid growth - around 40% yearly in the near future - so a shortage of funds is not an issue. Instead, we must concentrate on other requirements for the efficient functioning of the industry.

The three main components are the availability of processing tools, the abundance of raw materials, and the workforce. Raw materials in this case are represented by big data, and there is often more of it than our current systems can make sense of. The workforce also seems to grow sufficiently fast, as ML cements its place in the university curriculum. So the processing tools, as well as the available energy to run them are clear bottlenecks in the exponential growth.

The end of Moore's law, due to effects such as quantum tunnelling that become increasingly important as transistor sizes shrink, sets clear bounds on where we can go. To ensure long-term investment in the industry, a clear strategy must be developed to offset what will happen in 10 years.

Key Highlights

Key Topics Covered:

1. Deep learning challenges

1.1 Architectural limitations

1.2 Brief introduction to deep learning

1.3 Cutting corners

1.4 Processing tools

2. Market analysis

2.1 Market overview

2.2 CPU

2.3 Edge and Mobile

2.4 GPU

2.5 FPGA

2.6 ASIC

2.6.1 Tech giants

2.6.2 Startups

2.7 Neuromorphic processors

2.8 Photonic computing

3. Glossary

4. Infographics

For more information about this report visit https://www.researchandmarkets.com/r/5wsx87

Originally posted here:
Artificial Intelligence Computing Software Market Analysis Report 2022: Complete Information of the AI-related Processors Specifications and...

Prediction of mortality risk of health checkup participants using machine learning-based models: the J-SHC study | Scientific Reports – Nature.com

Participants

This study was conducted as part of the ongoing Study on the Design of a Comprehensive Medical System for Chronic Kidney Disease (CKD) Based on Individual Risk Assessment by Specific Health Examination (J-SHC Study). A specific health checkup is conducted annually for all residents aged 40-74 years, covered by the National Health Insurance in Japan. In this study, a baseline survey was conducted in 685,889 people (42.7% males, aged 40-74 years) who participated in specific health checkups from 2008 to 2014 in eight regions (Yamagata, Fukushima, Niigata, Ibaraki, Toyonaka, Fukuoka, Miyazaki, and Okinawa prefectures). The details of this study have been described elsewhere [11]. Of the 685,889 baseline participants, 169,910 were excluded from the study because baseline data on lifestyle information or blood tests were not available. In addition, 399,230 participants with a survival follow-up of fewer than 5 years from the baseline survey were excluded. Therefore, 116,749 patients (42.4% men) with a known 5-year survival or mortality status were included in this study.

This study was conducted in accordance with the Declaration of Helsinki guidelines. This study was approved by the Ethics Committee of Yamagata University (Approval No. 2008103). All data were anonymized before analysis; therefore, the ethics committee of Yamagata University waived the need for informed consent from study participants.

For validating a predictive model, the most desirable approach is a prospective study on unknown data. In this study, the data on health checkup dates were available. Therefore, we divided the total data into training and test datasets to build and test predictive models based on health checkup dates. The training dataset consisted of 85,361 participants who participated in the study in 2008. The test dataset consisted of 31,388 participants who participated in this study from 2009 to 2014. These datasets were temporally separated, and there were no overlapping participants. This method evaluates the model in a manner similar to a prospective study and has the advantage of demonstrating temporal generalizability. Clipping was performed for 0.01% outliers for preprocessing, and normalization was performed.

Information on 38 variables was obtained during the baseline survey of the health checkups. When there were highly correlated variables (correlation coefficient greater than 0.75), only one of these variables was included in the analysis. High correlations were found between body weight, abdominal circumference, body mass index, hemoglobin A1c (HbA1c), fasting blood sugar, and AST and alanine aminotransferase (ALT) levels. We then used body weight, HbA1c level, and AST level as explanatory variables. Finally, we used the following 34 variables to build the prediction models: age, sex, height, weight, systolic blood pressure, diastolic blood pressure, urine glucose, urine protein, urine occult blood, uric acid, triglycerides, high-density lipoprotein cholesterol (HDL-C), LDL-C, AST, γ-glutamyl transpeptidase (γ-GTP), estimated glomerular filtration rate (eGFR), HbA1c, smoking, alcohol consumption, medication (for hypertension, diabetes, and dyslipidemia), history of stroke, heart disease, and renal failure, weight gain (more than 10 kg since age 20), exercise (more than 30 min per session, more than 2 days per week), walking (more than 1 h per day), walking speed, eating speed, supper within 2 h before bedtime, skipping breakfast, late-night snacks, and sleep status.

The values of each item in the training dataset for the alive/dead groups were compared using the chi-square test, Student's t-test, and Mann-Whitney U test, and significant differences (P < 0.05) were marked with an asterisk (*) (Supplementary Tables S1 and S2).

We used two machine learning-based methods (gradient boosting decision tree [XGBoost], neural network) and one conventional method (logistic regression) to build the prediction models. All the models were built using Python 3.7. We used the XGBoost library for GBDT, TensorFlow for neural network, and Scikit-learn for logistic regression.

The data obtained in this study contained missing values. XGBoost can be trained to predict even with missing values because of its nature; however, neural network and logistic regression cannot be trained to predict with missing values. Therefore, we complemented the missing values using the k-nearest neighbor method (k=5), and the test data were complemented using an imputer trained using only the training data.

The parameters required for each model were determined for the training data using the RandomizedSearchCV class of the Scikit-learn library and repeating fivefold cross-validation 5000 times.

The performance of each prediction model was evaluated by predicting the test dataset, drawing a ROC curve, and using the AUC. In addition, the accuracy, precision, recall, F1 scores (the harmonic mean of precision and recall), and confusion matrix were calculated for each model. To assess the importance of explanatory variables for the predictive models, we used SHAP and obtained SHAP values that express the influence of each explanatory variable on the output of the model [4,12]. The workflow diagram of this study is shown in Fig. 5.

Workflow diagram of development and performance evaluation of predictive models.
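A condensed sketch of the kind of pipeline described above, with synthetic data standing in for the 34 checkup variables: k-nearest-neighbor imputation (k = 5) fitted on the training split only, an XGBoost classifier, AUC evaluation on a held-out split, and SHAP values for variable importance. The hyperparameter search (RandomizedSearchCV with fivefold cross-validation) is omitted, and imputation is shown here for brevity even though XGBoost itself tolerates missing values, as the study notes.

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.impute import KNNImputer
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 34))                     # stand-in for 34 checkup variables
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 5000) > 2).astype(int)
X[rng.random(X.shape) < 0.05] = np.nan              # simulate missing values

X_train, X_test, y_train, y_test = X[:4000], X[4000:], y[:4000], y[4000:]

imputer = KNNImputer(n_neighbors=5).fit(X_train)    # fit on training data only
X_train_imp, X_test_imp = imputer.transform(X_train), imputer.transform(X_test)

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_train_imp, y_train)

pred = model.predict_proba(X_test_imp)[:, 1]
print("test AUC:", roc_auc_score(y_test, pred))

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test_imp)     # per-variable contributions
print("mean |SHAP| of first 5 variables:", np.abs(shap_values).mean(axis=0)[:5])
```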

See the rest here:
Prediction of mortality risk of health checkup participants using machine learning-based models: the J-SHC study | Scientific Reports - Nature.com

AI and Machine Learning in Finance: How Bots are Helping the Industry – ReadWrite

Artificial intelligence and ML are making considerable inroads in finance. They are a critical aspect of various financial applications, including evaluating risks, managing assets, calculating credit scores, and approving loans.

Businesses use AI and ML:

Taking the above points into account, it's no wonder that companies like Forbes and VentureBeat are using AI to predict cash flow and detect fraud.

In this article, we present the financial domain areas in which AI and ML have the most significant impact. We'll also discuss why financial companies should care about and implement these technologies.

Machine learning is a branch of artificial intelligence that allows systems to learn and improve without explicit programming. Simply put, data scientists train the ML model with existing data sets, and it automatically adjusts its parameters to improve the outcome.

According to Statista, digital payments are expected to show an annual growth rate of 12.77% and grow to 20% by 2026. This vast volume of global revenue transacted online requires an intelligent fraud-detection system.

Source: Mordor Intelligence

Traditionally, to check the authenticity of users, fraud-detection systems analyze websites through factors like location, merchant ID, the amount spent, etc. However, while this method is appropriate for a few transactions, it would not cope with the increased transactional amount.

Given the surge in digital payments, businesses can't rely on traditional fraud-detection methods to process payments. This gives rise to AI-based systems with advanced features.

An AI and ML-powered payment gateway will look at various factors to evaluate the risk score. These technologies consider a large volume of data (location of the merchant, time zone, IP address, etc.) to detect unexpected anomalies, and verify the authenticity of the customer.

Additionally, the finance industry, through AI, can process transactions in real-time, allowing the payment industry to process large transactions with high accuracy and low error rates.
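An illustrative sketch of the risk-scoring idea described above: a classifier trained on transaction features (amount, local hour, whether the IP country matches the card country, device novelty) that returns a fraud-risk score for each new payment. The features and data are synthetic assumptions; real gateways use far richer signals and stricter latency budgets.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 20000
amount = rng.lognormal(3.5, 1.0, n)                 # transaction amount
hour = rng.integers(0, 24, n)                       # local hour of day
country_mismatch = rng.random(n) < 0.08             # IP country != card country
new_device = rng.random(n) < 0.15                   # first time seeing this device

# synthetic ground truth: mismatches, odd hours, and large amounts raise fraud odds
logit = (-5 + 2.5 * country_mismatch + 1.5 * new_device
         + 0.4 * (amount > 500) + 0.6 * ((hour < 5) | (hour > 22)))
is_fraud = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([amount, hour, country_mismatch, new_device])
model = GradientBoostingClassifier().fit(X, is_fraud)

incoming = np.array([[950.0, 3, True, True]])       # a suspicious-looking payment
print("risk score:", model.predict_proba(incoming)[0, 1])
```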

The financial sector, including banks, trading firms, and other fintech companies, is using AI to reduce operational costs, improve productivity, enhance user experience, and improve security.

The benefits of AI and ML revolve around their ability to work with various datasets. So let's have a quick look at some other ways AI and ML are making inroads into this industry:

Considering how heavily people invest in automation, AI significantly impacts the payment landscape. It improves efficiency and helps businesses rethink and reconstruct their processes. For example, businesses can use AI to decrease credit card processing time, increase automation, and seamlessly improve cash flow.

You can apply AI and machine learning to credit, lending, security, trading, banking, and process optimization.

Human error has always been a huge problem; machine learning models can reduce the errors that creep in when humans perform repetitive tasks.

Incorporating security and ease of use is a challenge that AI can help the payment industry overcome. Merchants and clients want a payment system that is easy to use and authentic.

Until now, customers have had to perform various actions to authenticate themselves to complete a transaction. With AI, however, payment providers can streamline transactions while keeping risk low for customers.

AI can efficiently perform high-volume, labor-intensive tasks like scraping and formatting data. Also, AI-based systems are focused and efficient; they have minimal operational cost and can be used in areas like:

Creating more Value:

AI and machine learning models can generate more value for their customers. For instance:

Improved customer experience: Using bots, financial institutions like banks can eliminate the need to stand in long queues. Payment gateways can automatically reach new customers by gathering their historical data and predicting user behavior. Besides, AI used in credit scoring helps detect fraudulent activity.

There are various ways in which machine learning and artificial intelligence are being employed in the finance industry. Some of them are:

Process Automation:

Process automation is one of the most common applications as the technology helps automate manual and repetitive work, thereby increasing productivity.

Moreover, AI and ML can easily access data, follow and recognize patterns and interpret the behavior of customers. This could be used for the customer support system.

Minimizing Debit and Credit Card Frauds:

Machine learning algorithms help detect transactional fraud by analyzing various data points that mostly go unnoticed by humans. ML also reduces the number of false rejections and improves real-time approvals by gauging the client's behavior on the Internet.

Apart from spotting fraudulent activity, AI-powered technology is used to identify suspicious account behavior and fraudulent activity in real-time. Today, banks already have a monitoring system trained to catch the historical payment data.

Reducing False Card Declines:

Payment transactions declined at checkout can be frustrating for customers, with huge repercussions for banks and their reputations. Card transactions are declined when the transaction is flagged as fraud or the payment amount crosses a limit. AI-based systems are used to identify such transaction issues.

The influx of AI into the financial sector has raised new concerns about transparency and data security. Companies must be aware of these challenges and put safeguards in place:

One of the main challenges of AI in finance is the volume of confidential and sensitive data it gathers. The right data partner will offer a range of security options and standards and will protect data in line with the relevant certifications and regulations.

AI models in finance that produce accurate predictions are only successful if they can be explained to, and understood by, clients. In addition, since customer information is used to build these models, customers want assurance that their personal data is collected, stored and handled securely.

So, it is essential to maintain transparency and trust in the finance industry to make customers feel safe with their transactions.

Beyond simply implementing AI, leaders in the online finance industry must also be able to adapt their operations to new ways of working.

Financial institutions often work with large, unorganized data sets locked in vertical silos. Connecting dozens of data pipeline components and APIs, while maintaining security, just to make use of a single silo is not easy. Financial institutions therefore need to ensure that the data they gather is appropriately structured.

AI and ML are undoubtedly the future of the financial sector; the vast volume of processes, transactions, data and interactions the industry handles makes it ideal for a wide range of applications. By incorporating AI, the finance sector gains vast data-processing capabilities at a competitive cost, while clients enjoy an enhanced customer experience and improved security.

Of course, how much of AI's power is realized in transaction banking depends on how each organization uses it. AI is still very much a work in progress, but its challenges can be overcome as organizations adopt the technology. Ultimately, AI will be the future of finance, and you must be ready to embrace that shift.

See more here:
AI and Machine Learning in Finance: How Bots are Helping the Industry - ReadWrite

Artificial intelligence was supposed to transform health care. It hasn’t. – POLITICO

"Companies come in promising the world and often don't deliver," said Bob Wachter, head of the department of medicine at the University of California, San Francisco. "When I look for examples of true AI and machine learning that's really making a difference, they're pretty few and far between. It's pretty underwhelming."

Administrators say algorithms (the software that processes data) from outside companies don't always work as advertised because each health system has its own technological framework. So hospitals are building out engineering teams and developing artificial intelligence and other technology tailored to their own needs.

But it's slow going. Research based on job postings shows health care behind every industry except construction in adopting AI.

The Food and Drug Administration has taken steps to develop a model for evaluating AI, but it is still in its early days. There are questions about how regulators can monitor algorithms as they evolve and rein in the technology's detrimental aspects, such as bias that threatens to exacerbate health care inequities.

"Sometimes there's an assumption that AI is working, and it's just a matter of adopting it, which is not necessarily true," said Florenta Teodoridis, a professor at the University of Southern California's business school whose research focuses on AI. She added that being unable to understand why an algorithm came to a certain result is fine for things like predicting the weather. But in health care, its impact is potentially life-changing.

Despite the obstacles, the tech industry is still enthusiastic about AI's potential to transform health care.

"The transition is slightly slower than I hoped but well on track for AI to be better than most radiologists at interpreting many different types of medical images by 2026," Hinton told POLITICO via email. He said he never suggested that we should get rid of radiologists, but that we should let AI read scans for them.

If he's right, artificial intelligence will start taking on more of the rote tasks in medicine, giving doctors more time to spend with patients to reach the right diagnosis or develop a comprehensive treatment plan.

"I see us moving as a medical community to a better understanding of what it can and cannot do," said Lara Jehi, chief research information officer for the Cleveland Clinic. "It is not going to replace radiologists, and it shouldn't replace radiologists."

Radiology is one of the most promising use cases for AI. The Mayo Clinic has a clinical trial evaluating an algorithm that aims to reduce the hours-long process oncologists and physicists undertake to map out a surgical plan for removing complicated head and neck tumors.

An algorithm can do the job in an hour, said John D. Halamka, president of Mayo Clinic Platform: "We've taken 80 percent of the human effort out of it." The technology gives doctors a blueprint they can review and tweak without having to do the basic physics themselves, he said.

NYU Langone Health has also experimented with using AI in radiology. The health system has collaborated with Facebook's Artificial Intelligence Research group to reduce the time it takes to get an MRI from one hour to 15 minutes. Daniel Sodickson, a radiological imaging expert at NYU Langone who worked on the research, sees opportunity in AI's ability to downsize the amount of data doctors need to review.
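
For intuition about what shortening an MRI involves technically: scanners acquire data in k-space (the Fourier transform of the image), and sampling fewer k-space lines is faster but leaves gaps that a naive reconstruction turns into artifacts. The sketch below shows that zero-filled baseline on a synthetic phantom; it is an illustration under simplified assumptions, while the NYU Langone and Facebook research uses learned reconstruction models on real scanner data to fill in what undersampling leaves out.

```python
# Minimal sketch of undersampled MRI and naive reconstruction. The phantom
# image and the 4x undersampling factor are illustrative assumptions.
import numpy as np

# Simple synthetic phantom: a bright disc on a dark background.
size = 256
y, x = np.mgrid[:size, :size]
phantom = ((x - size / 2) ** 2 + (y - size / 2) ** 2 < (size / 4) ** 2).astype(float)

# MRI acquires data in k-space (the 2D Fourier transform of the image).
k_space = np.fft.fft2(phantom)

# Scanning fewer k-space rows takes less time; keeping every 4th row
# corresponds roughly to a 4x faster acquisition.
mask = np.zeros(size, dtype=bool)
mask[::4] = True
undersampled = k_space * mask[:, None]

# Zero-filled reconstruction: invert the FFT with the missing rows left as
# zeros. The result is aliased; a learned model's job is to fill in the gaps.
recon = np.abs(np.fft.ifft2(undersampled))

error = np.linalg.norm(recon - phantom) / np.linalg.norm(phantom)
print(f"relative reconstruction error with 4x undersampling: {error:.2f}")
```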

Covid has accelerated AI's development. Throughout the pandemic, health providers and researchers shared data on the disease and anonymized patient data to crowdsource treatments.

Microsoft and Adaptive Biotechnologies, which partner on machine learning to better understand the immune system, put their technology to work on patient data to see how the virus affected the immune system.

"The amount of knowledge that's been obtained and the amount of progress has just been really exciting," said Peter Lee, corporate vice president of research and incubations at Microsoft.

There are other success stories. For example, Ochsner Health in Louisiana built an AI model for detecting early signs of sepsis, a life-threatening response to infection. To convince nurses to adopt it, the health system created a response team to monitor the technology for alerts and take action when needed.

"I'm calling it our care traffic control," said Denise Basow, chief digital officer at Ochsner Health. Since implementation, she said, death from sepsis is declining.

The biggest barrier to the use of artificial intelligence in health care has to do with infrastructure.

Health systems need to enable algorithms to access patient data. Over the last several years, large, well-funded systems have invested in moving their data into the cloud, creating vast data lakes ready to be consumed by artificial intelligence. But that's not as easy for smaller players.

Another problem is that every health system is unique in its technology and the way it treats patients. That means an algorithm may not work as well everywhere.

Over the last year, an independent study on a widely used sepsis detection algorithm from EHR giant Epic showed poor results in real-world settings, suggesting where and how hospitals used the AI mattered.
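
The practical lesson many hospitals drew from that result is to validate a vendor algorithm on local data before relying on it. The sketch below shows what such a check might look like, assuming a hypothetical vendor_score column and synthetic per-site data; it is not Epic's model or the study's actual methodology.

```python
# Minimal sketch of per-site validation of a vendor risk score.
# All data and column names here are synthetic and hypothetical.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_site(n, signal):
    """Simulate one hospital site; `signal` controls how informative the score is."""
    sepsis = rng.random(n) < 0.08
    score = np.clip(signal * sepsis + rng.normal(0.3, 0.2, n), 0, 1)
    return pd.DataFrame({"sepsis": sepsis.astype(int), "vendor_score": score})

# The same "model" looks strong at one site and mediocre at another,
# mirroring the non-uniform performance described in the article.
sites = {"site_A": make_site(5000, 0.5), "site_B": make_site(5000, 0.15)}

for name, df in sites.items():
    auc = roc_auc_score(df["sepsis"], df["vendor_score"])
    print(f"{name}: AUC = {auc:.2f}")
```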

This quandary has led top health systems to build out their own engineering teams and develop AI in-house.

That could create complications down the road. Unless health systems sell their technology, it's unlikely to undergo the type of vetting that commercial software would. That could allow flaws to go unfixed for longer than they might otherwise. It's not just that the health systems are implementing AI while no one's looking. It's also that the stakeholders in artificial intelligence, in health care, technology and government, haven't agreed upon standards.

A lack of quality data, the raw material algorithms need to work with, is another significant barrier to rolling out the technology in health care settings.

Much data comes from electronic health records but is often siloed among health care systems, making it more difficult to gather sizable data sets. For example, a hospital may have complete data on one visit, but the rest of a patient's medical history is kept elsewhere, making it harder to draw inferences about how to proceed in caring for the patient.

"We have pieces and parts, but not the whole," said Aneesh Chopra, who served as the government's chief technology officer under former President Barack Obama and is now president of data company CareJourney.

While some health systems have invested in pulling data from a variety of sources into a single repository, not all hospitals have the resources to do that.

Health care also has strong privacy protections that limit the amount and type of data tech companies can collect, leaving the sector behind others in terms of algorithmic horsepower.

Importantly, not enough strong data on health outcomes is available, making it more difficult for providers to use AI to improve how they treat patients.

That may be changing. A recent series of studies on a sepsis algorithm included copious details on how to use the technology in practice and documented physician adoption rates. Experts have hailed the studies as a good template for how future AI studies should be conducted.

But working with health care data is also more difficult than in other sectors because it is highly individualized.

"We found that even internally across our different locations and sites, these models don't have a uniform performance," said Jehi of the Cleveland Clinic.

And the stakes are high if things go wrong. "The number of paths that patients can take are very different than the number of paths that I can take when I'm on Amazon trying to order a product," Wachter said.

Health experts also worry that algorithms could amplify bias and health care disparities.

For example, a 2019 study found that a hospital algorithm more often pushed white patients toward programs aiming to provide better care than Black patients, even while controlling for the level of sickness.
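
The core of an audit like that is to compare referral rates across groups at the same level of sickness. Below is a minimal sketch of that comparison on synthetic data with hypothetical column names; it is not the 2019 study's dataset or its full methodology.

```python
# Minimal sketch of a bias audit: referral rates by group within each
# sickness stratum. All data and columns are synthetic and hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 20_000

df = pd.DataFrame({
    "race": rng.choice(["Black", "white"], n),
    # Proxy for level of sickness, e.g. number of active chronic conditions.
    "chronic_conditions": rng.integers(0, 10, n),
})
# Hypothetical referral rule that under-refers one group at equal sickness.
base = df["chronic_conditions"] / 10
penalty = np.where(df["race"] == "Black", 0.15, 0.0)
df["referred"] = (rng.random(n) < np.clip(base - penalty, 0, 1)).astype(int)

# Referral rate by race within each sickness stratum: if the algorithm were
# fair with respect to need, the two columns would be roughly equal per row.
audit = (df.groupby(["chronic_conditions", "race"])["referred"]
           .mean()
           .unstack("race"))
print(audit.round(2))
```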

Last year, the FDA published a set of guidelines for using AI as a medical device, calling for the establishment of good machine learning practices, oversight of how algorithms behave in real-world scenarios and development of research methods for rooting out bias.

The agency subsequently published more specific guidelines on machine learning in radiological devices, requiring companies to outline how the technology is supposed to perform and provide evidence that it works as intended. The FDA has cleared more than 300 AI-enabled devices, largely in radiology, since 1997.

Regulating algorithms is a challenge, particularly given how quickly the technology advances. The FDA is attempting to head that off by requiring companies to institute real-time monitoring and submit plans on future changes.

But in-house AI isn't subject to FDA oversight. Bakul Patel, former head of the FDA's Center for Devices and Radiological Health and now Google's senior director for global digital health strategy and regulatory affairs, said that the FDA is thinking about how it might regulate noncommercial artificial intelligence inside of health systems, but he adds, there's no easy answer.

The FDA has to thread the needle between taking enough action to mitigate flaws in algorithms and not stifling AI's potential, he said.

Some argue that public-private standards for AI would help advance the technology. Groups, including the Coalition for Health AI, whose members include major health systems and universities as well as Google and Microsoft, are working on this approach.

But the standards they envision would be voluntary, which could blunt their impact if not widely adopted.

Original post:
Artificial intelligence was supposed to transform health care. It hasn't. - POLITICO