Why AI and machine learning are drifting away from the cloud – Protocol

"Our government is correct: Companies actually need to pay more attention," said Lou Steinberg, formerly the CTO at TD Ameritrade.

In recent years, threats from Russia have driven much of the cybersecurity attention and investment among businesses in the U.S. and Western Europe, especially after Russia's invasion of Ukraine in February. Understandably, the threat of ransomware and disruption of critical infrastructure tends to provoke a response.

But when it comes to state-sponsored intrusions, China was behind a stunning 67% of the attacks between mid-2020 and mid-2021, compared to just 1% for the Russian government, according to data from CrowdStrike.

Without a doubt, China "stands out as the leading nation in terms of threat relevance, at least for America," said Tom Hegel, a senior threat researcher at SentinelOne.

In July, the FBI and MI5 issued an unprecedented joint warning about the threat of IP theft by China. During an address to business leaders in London, FBI Director Christopher Wray said that China's hacking program is "bigger than that of every other major country combined" and that the Chinese government is "set on stealing your technology, whatever it is that makes your industry tick."

"The Chinese government poses an even more serious threat to Western businesses than even many sophisticated businesspeople realize," Wray said.

During his three years as a researcher at Secureworks, Marc Burnard has seen Chinese government hackers go after customers in chemicals manufacturing, aviation, telecommunications and pharmaceuticals, to name just a few.

"It's quite difficult to point out what the key sectors are for China, because they target so many," Burnard said. "It's a scale that just completely dwarfs anything from the likes of Iran, North Korea and Russia."

One of the most brazen examples was China's release of fighter jets with strikingly similar designs to the F-35 starting in 2011, according to Nicolas Chaillan, former chief software officer for the U.S. Air Force. Documents leaked by former NSA contractor Edward Snowden appeared to confirm that Chinese government hackers stole data on the F-35 Lightning II, which is believed to have been used in the design of Chinese jets including the J-31 and J-20.

Chaillan, who resigned in protest over the slow pace of the military's IT modernization amid the China threat, said the recent FBI warning on China is telling. "It takes a lot for the government to start saying stuff like that," he told Protocol. "That usually gives you a hint that it's really, really bad."

China "stands out as the leading nation in terms of threat relevance, at least for America."

Wray has made a number of public remarks on the China cyber threat this year. In a January speech, he said the FBI had 2,000 open investigations related to attempted theft of technology and information by the Chinese government. The FBI is opening a new case related to Chinese intelligence roughly every 12 hours, he said at the time.

In July 2021, the White House denounced the Chinese government over its "pattern of malicious cyber activity," in tandem with the European Union, the U.K. and NATO. The action made it clear that the Biden administration believes China has been ignoring its 2015 agreement to cease hacking activities meant to steal the IP of U.S. businesses.

Major incidents have included the Chinese government's widespread exploitation of vulnerabilities in Microsoft Exchange in 2021, which led to the compromise of 10,000 U.S. companies' networks, Wray said in January.

In analyzing the Chinese cyber threat, the key is to understand the larger context for why China is targeting Western IP, said Michael Daniel, formerly cybersecurity coordinator and special assistant to the president during the Obama administration.

"China is an expanding power that fundamentally sees itself as challenging the West, and challenging the world order that the Western European system has set up," Daniel said.

A central part of that aspiration is challenging the West economically, but China is prone to taking shortcuts, experts say.

The Chinese government laid out its "Made in China 2025" strategy, which identifies the industries that it considers to be most important going forward, in 2015. The document is extremely helpful when it comes to defending against IP theft by China's government, said Daniel, who is now president and CEO of the Cyber Threat Alliance, an industry group.

"If your company is in one of those industries identified in that strategy, you are a target for Chinese intelligence," he said. "It's that simple, actually."

Some of the industries that now face the biggest threat of IP theft from China, such as energy, aerospace and defense technology, and quantum computing, are already well aware of it, according to Steinberg, now the founder of cybersecurity research lab CTM Insights.

But other industries should be paying closer attention than they are, he said. Those include the AI/robotics, agricultural technology and electric vehicle sectors, which are among the industries mentioned in the "Made in China 2025" plan.

"If you're on their list, they've got an army of skilled people who are trying to figure out how to get your intellectual property," Steinberg said.

"If your company is in one of those industries identified in that strategy, you are a target for Chinese intelligence."

Christian Sorensen, formerly a U.S. Cyber Command official and U.S. Air Force officer, said there's been a clear shift in China's IP theft priorities, from its traditional focus on defense-related technologies, such as the designs for the F-35, into the high-tech and biotech sectors. For instance, in mid-2020, the U.S. accused Chinese government hackers of attempting to steal data from COVID-19 vaccine developer Moderna.

Threats of this sort can be more difficult for perennially overwhelmed security teams to prioritize, however, said Sorensen, who is now founder and CEO of cybersecurity vendor SightGain.

"Everybody pays attention to what's right in their face," he said. "Our intellectual property is just flying out of our borders, which is a serious strategic threat. But it's not always the front-burner threat."

That has been particularly the case in 2022, the year of "Shields Up."

Documents leaked by former NSA contractor Edward Snowden appeared to confirm that Chinese government hackers stole data on the U.S.'s F-35 Lightning II. Photo: Robert Atanasovski/AFP via Getty Images

Following the invasion of Ukraine, there was a widespread expectation that the U.S. and other allies of Ukraine would face disruptive cyberattacks by Russia. So far, major retaliatory attacks from Russia have not materialized, though experts believe a Russian escalation of this sort could still come as soon as later this year, depending on how events play out with Ukraine and sanctions.

America's focus on its cyber adversaries tends to go in cycles, experts say. And even prior to the Ukraine war, Russian threat actors were constantly in the spotlight, from the SolarWinds breach by Russia's intelligence services in 2020 to the Colonial Pipeline and Kaseya ransomware attacks by cybercriminals operating out of the country in 2021.

It's not out of the question that China might pursue similar disruptive cyberattacks against the U.S. and Western Europe in the future, however, if China wants to prevent aid to Taiwan, Daniel said. It's believed that China has been seeking the ability to strike critical infrastructure for a situation such as that, he said.

To date, however, China's cyber activity has been "almost entirely covert cyber espionage campaigns," said Josephine Wolff, associate professor of cybersecurity policy at Tufts University.

Whereas Russian cyberattacks are often meant to create noise and chaos, Wolff said, China's attacks are "meant to happen undercover. They don't want anyone to know it's them."

U.S.-China tensions rose Tuesday as House Speaker Nancy Pelosi visited Taiwan. Mandiant's John Hultquist said in a statement that China is expected to carry out significant cyber espionage against targets in Taiwan and the U.S. related to the situation.

Notably, the Chinese government is very effective at organizing its hacking activities, said SentinelOne's Hegel. "It's a well-oiled machine for mass espionage."

While China's hacking program often does not perform the most technically advanced attacks, its sheer size and persistence allow it to succeed over the longer term, he said.

But because China's motives are different from Russia's, "you've got to defend yourself [in] a completely different way," said CTM Insights' Steinberg.

The go-to technologies in these situations are data-loss prevention, data exfiltration detection and deception technologies such as tripwires, he said. Rather than expecting to prevent an intrusion every time, the key to stopping IP theft is "Can you catch it happening and shut it down?"

Businesses should also concentrate on applying special protections to systems that are hosting IP, said Burnard, who is senior consultant for information security research at Secureworks. That might include network segmentation and enhanced monitoring for those parts of the system, he said.

One way that China's hackers have been evolving can be seen in their methods for gaining initial access to corporate systems, experts say. Recent years have seen Chinese attackers increasingly exploiting vulnerabilities, instead of just relying on phishing, said Kevin Gonzalez, director of security at cybersecurity vendor Anvilogic.

China-based attackers exploited a dozen published vulnerabilities in 2021, up from just two the prior year, CrowdStrike reported, making the Chinese government's hacking operation the "leader in vulnerability exploitation."

The threat actors have shown capabilities for exploiting both previously unknown zero-day vulnerabilities and unpatched known vulnerabilities, Hegel said.

Additionally, China's government hackers are now scanning for vulnerabilities "the second they pop up online," he said. One instance was Log4Shell, a severe vulnerability in the widely used Apache Log4j software that was uncovered in December 2021. The Chinese government reportedly punished China-based tech giant Alibaba for informing the developers behind Log4j about the flaw prior to telling the government.

China has used more innovative techniques as well, such as software supply chain attacks. The compromise of CCleaner in 2017 and the attack on Asus Live Update are among past examples.

Still, while China's focus on IP theft makes some defenses unique from those needed to stop ransomware, there are plenty of countermeasures that can help against both Russia- and China-style threats, experts said.

Placing an emphasis on strong security hygiene, vulnerability and patch management, identity authentication and zero-trust architecture will go a long way toward defending against attacks regardless of what country they're coming from, said Adam Meyers, senior vice president of intelligence at CrowdStrike.

Threat hunting is also a valuable investment, whether you're concerned about threats from Russia, China or anywhere else, Meyers said. "You have to be out there looking for these threats, because the adversary is constantly moving," he said.

But hacking is not the only cyber threat that China poses to the U.S. and the West, experts say. And it may not even be the most challenging, said Samuel Visner, a longtime cybersecurity executive and former NSA official, who currently serves as technical fellow at MITRE.

The harder question, according to Visner, is how to respond to China's initiative to build a "Digital Silk Road" across much of the globe using exported Chinese IT infrastructure. The technology is believed to be capable of facilitating surveillance on citizens. Ultimately, the fear is that the Digital Silk Road could be used to feed information about Americans or Europeans traveling abroad back to the Chinese government, he said.

While meeting a different definition of cybersecurity, Visner said, "that is also a security challenge."

Daily AI Roundup: Biggest Machine Learning, Robotic And Automation Updates – AiThority

This is our AI Daily Roundup for today. We are covering the top updates from around the world. The updates feature state-of-the-art capabilities in artificial intelligence (AI), machine learning, robotic process automation, fintech, and human-system interactions. We cover the role of AI and its applications in various industries and daily life.

Burns & LevinsonrepresentedSyrup Tech, the AI-powered predictive software platform for inventory excellence in commerce, in its$6.3 millionseed funding round led byGradient Ventures, Googles AI-focused venture fund. The round also includedFlybridgeCapital,FirstminuteCapital,RackhouseVentures, as well as Angel investors including (former) executives at Adidas,Bonobos, Salesforce,ASOS,ThredUp, Casper,Zalando, and Stripe. 1984 Ventures, who led the companys pre-seed round last year,continued investing in this round.

InterDigital, Inc., a mobile and video technology research and development company, applauded the appointment of Xiaofei Wang to serve as Chair of the Topic Interest Group for Artificial Intelligence and Machine Learning (AIML) in IEEE 802.11, the IEEE working group dedicated to standards for wireless local area networks (WLAN).

Juniper Networks, a leader in secure, AI-driven networks, announced that Frasers Property Australia and Frasers Property Industrial, together one of Australia's leading diversified property groups and Australian divisions of the multinational Frasers Property Limited, have selected Juniper Networks to upgrade their network infrastructure, enhancing business agility and IT efficiency across Australia. With a modern, AI-driven network, Frasers Property Australia has increased customer satisfaction and loyalty while reducing the time to implement network changes from an average of six weeks to five minutes.

Accenture has acquired Tenbu, a cloud data firm that specializes in solutions for intelligent decision-making and planning through areas such as analytics, big data and machine learning. With more than 150 certifications, Tenbu's team of 170 data specialists will join the Data & AI team within Accenture Cloud First. Terms of the acquisition were not disclosed.

Crowdworks (CEO Park Min-woo), an artificial intelligence (AI) training data platform company, recently announced that it had completed the registration of a U.S. patent for "Technology for AI target training image sampling."

Leading enterprise automation software company UiPath announced it has acquired Re:infer, a London-based natural language processing (NLP) company for unstructured documents and communications. Founded in 2015 by Ph.D. scientists from the AI research lab at University College London, Re:infer uses machine learning (ML) technology to mine context from communication messages and transform them into actionable data with speed and accuracy.

The application to the AIMS African Masters of Machine Intelligence (AMMI) is open! – Uganda

KIGALI, Rwanda, 3 August 2022 -/African Media Agency (AMA)/-

Why AMMI?

The African Masters of Machine Intelligence (AMMI) will prepare well-rounded Machine Intelligence (MI) researchers by focusing on basic research in MI and developing a vast array of applications that respond to both the present and future needs of Africa and the world. AMMI graduates will go on to create and/or join the best industrial and public R&D labs in Africa and beyond, strengthening the African MI community and the scientific community at large, achieving crucial breakthroughs for the global good.

ADMISSION REQUIREMENTS

Basic Requirements

The minimum admission requirements are:

For more detailed information on acceptance documents, please contact the AMMI Admissions Office.

Note: Your original degrees, diplomas, academic certificates and transcripts should be in your possession upon arrival on campus.

Academic History

Required transcripts and records: Upload your official transcripts with your application. These include transcripts from every post-secondary institution attended, including summer sessions and extension programs. All academic records that are not originally in English or French should be issued in their original language and accompanied by certified English translations.

References

At least three references are required. Be sure to inform your recommenders that they will be contacted to provide a recommendation letter on your behalf. Your recommenders are asked to give their impressions of your intellectual ability, aptitude for research or professional skills, character, and the quality of your previous work.

Personal Statement

The personal statement should tell us about yourself, your academic achievements, aspirations and other important accomplishments (e.g., projects, online courses, awards) related to AI and machine learning. It should also paint a picture of your academic aspirations, including post-master's plans. (500 words max)

Background Summary

Note: Prospective students applying for AIMS are welcome to apply for AMMI.

Application deadline: 31 August 2022.

Distributed by African Media Agency (AMA) on behalf of The African Institute for Mathematical Sciences (AIMS).

U.S. Army Research Lab Expands Artificial Intelligence and Machine Learning Contract with Palantir for $99.9M – Business Wire

DENVER--(BUSINESS WIRE)--Palantir Technologies Inc. (NYSE: PLTR) today announced that it will expand its work with the U.S. Army Research Laboratory to implement data and artificial intelligence (AI)/machine learning (ML) capabilities for users across the combatant commands (COCOMs). The contract totals $99.9 million over two years.

Palantir first partnered with the Army Research Lab to provide those on the frontlines with state-of-the-art operational data and AI capabilities in 2018. Palantir's platform has supported the integration, management, and deployment of relevant data and AI model training to all of the Armed Services, COCOMs, and special operators. This extension grows Palantir's operational RDT&E work to more users globally.

"Maintaining a leading edge through technology is foundational to our mission and partnership with the Army Research Laboratory," said Akash Jain, President of Palantir USG. "Our nation's armed forces require best-in-class software to fulfill their missions today while rapidly iterating on the capabilities they will need for tomorrow's fight. We are honored to support this critical work by teaming up to deliver the most advanced operational AI capabilities available with dozens of commercial and public sector partners."

"By working with the U.S. Army Research Lab, integrating with partner vendors, and iterating with users on the front lines, Palantir's software platforms will continue to quickly implement advanced AI capabilities against some of DOD's most pressing problem sets. We're looking forward to fielding our newest ML, Edge, and Space technologies alongside our U.S. military partners," said Shannon Clark, Senior Vice President of Innovation, Federal. "These technologies will enable operators in the field to leverage AI insights to make decisions across many fused domains, from outer space to the sea floor, and everything in between."

About Palantir Technologies Inc.

Foundational software of tomorrow. Delivered today. Additional information is available at https://www.palantir.com.

Forward-Looking Statements

This press release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These statements may relate to, but are not limited to, Palantir's expectations regarding the amount and the terms of the contract and the expected benefits of our software platforms. Forward-looking statements are inherently subject to risks and uncertainties, some of which cannot be predicted or quantified. Forward-looking statements are based on information available at the time those statements are made and were based on current expectations as well as the beliefs and assumptions of management as of that time with respect to future events. These statements are subject to risks and uncertainties, many of which involve factors or circumstances that are beyond our control. These risks and uncertainties include our ability to meet the unique needs of our customer; the failure of our platforms to satisfy our customer or perform as desired; the frequency or severity of any software and implementation errors; our platforms' reliability; and our customer's ability to modify or terminate the contract. Additional information regarding these and other risks and uncertainties is included in the filings we make with the Securities and Exchange Commission from time to time. Except as required by law, we do not undertake any obligation to publicly update or revise any forward-looking statement, whether as a result of new information, future developments, or otherwise.

Researchers Partner With NIH and Google to Develop AI Learning Modules – University of Arkansas Newswire

Photo by University Relations

Data science researchers will build cloud-based learning modules for biomedical research.

FAYETTEVILLE, Ark. - With supplemental funding from the National Institutes of Health, a team of researchers led by Justin Zhan, professor of data science at the University of Arkansas, will collaborate with NIH and Google software engineers to build cloud-based learning modules for biomedical research.

These modules will help educate biomedical researchers on the ways that artificial intelligence and machine learning, both rapidly becoming important tools in biomedical research, can enhance and streamline data analysis for different types of medical and scientific images.

The new funding, $140,135, has been awarded through the National Institute of General Medical Sciences Institutional Development Award Program. Zhan partnered with Kyle Quinn, associate professor of biomedical engineering, and Larry Cornett, director of the Arkansas IDeA Network of Biomedical Research Excellence at the University of Arkansas for Medical Sciences, which is administering the grant.

In addition to the Arkansas IDeA Network's support, case studies for the learning modules will be developed with support from the data science and the imaging and spectroscopy cores of the Arkansas Integrative Metabolic Research Center.

"Big data is transforming health and biomedical science," Zhan said. "The new technology is rapidly expanding the quantity and variety of imaging modalities, for example, which can tell doctors so much more about their patients. But this transformation has created challenges, particularly with storing and managing massive data sets. Also, while the big data revolution transforms biology and medicine into data-driven sciences, traditional education is responding slowly. Addressing this shortcoming is part of what we're trying to do."

The researchers will secure the technical expertise and resources needed to provide training to students and health-care professionals on the use of artificial intelligence and machine learning, as they apply to biomedical research.

Artificial intelligence is the ability of computer systems to perform tasks that have traditionally required human intelligence. One example is machine learning, in which algorithms learn from data and can become more accurate than humans at predicting outcomes. This process demands tremendous computational power, more than standard computer clusters can handle.

The Arkansas researchers will partner with software engineers at Google and the National Institute of General Medical Sciences to address the computational requirements of artificial intelligence-driven research through the use of cloud computing. Cloud computing provides access to computing services over the internet, allowing faster and more flexible solutions in biomedical research.

The cloud computing modules developed by Zhan's team will help researchers understand how artificial intelligence can be used in biomedical sciences to analyze big data. Case studies involving the identification of unique features in large biomedical image sets and the prediction of disease states are expected to help scientists, researchers and clinicians understand how to implement these powerful tools in their work.

About the Arkansas Integrative Metabolic Research Center: Established by a $10.8 million NIH grant in 2021, the Arkansas Integrative Metabolic Research Center focuses on the role of cell and tissue metabolism in disease, development and repair through research involving advanced imaging, bioenergetics and data science. Quinn is the center director, and Zhan directs the center's Data Science Core.

About the University of Arkansas: As Arkansas' flagship institution, the UofA provides an internationally competitive education in more than 200 academic programs. Founded in 1871, the UofA contributes more than $2.2 billion to Arkansas' economy through the teaching of new knowledge and skills, entrepreneurship and job development, and discovery through research and creative activity, while also providing training for professional disciplines. The Carnegie Foundation classifies the UofA among the few U.S. colleges and universities with the highest level of research activity. U.S. News & World Report ranks the UofA among the top public universities in the nation. See how the UofA works to build a better world at Arkansas Research News.

Can artificial intelligence really help us talk to the animals? – The Guardian

A dolphin handler makes the signal for "together" with her hands, followed by "create." The two trained dolphins disappear underwater, exchange sounds and then emerge, flip on to their backs and lift their tails. They have devised a new trick of their own and performed it in tandem, just as requested. "It doesn't prove that there's language," says Aza Raskin. "But it certainly makes a lot of sense that, if they had access to a rich, symbolic way of communicating, that would make this task much easier."

Raskin is the co-founder and president of Earth Species Project (ESP), a California non-profit group with a bold ambition: to decode non-human communication using a form of artificial intelligence (AI) called machine learning, and make all the knowhow publicly available, thereby deepening our connection with other living species and helping to protect them. A 1970 album of whale song galvanised the movement that led to commercial whaling being banned. What could a Google Translate for the animal kingdom spawn?

The organisation, founded in 2017 with the help of major donors such as LinkedIn co-founder Reid Hoffman, published its first scientific paper last December. The goal is to unlock communication within our lifetimes. "The end we are working towards is, can we decode animal communication, discover non-human language," says Raskin. "Along the way, and equally important, is that we are developing technology that supports biologists and conservation now."

Understanding animal vocalisations has long been the subject of human fascination and study. Various primates give alarm calls that differ according to predator; dolphins address one another with signature whistles; and some songbirds can take elements of their calls and rearrange them to communicate different messages. But most experts stop short of calling it a language, as no animal communication meets all the criteria.

Until recently, decoding has mostly relied on painstaking observation. But interest has burgeoned in applying machine learning to deal with the huge amounts of data that can now be collected by modern animal-borne sensors. "People are starting to use it," says Elodie Briefer, an associate professor at the University of Copenhagen who studies vocal communication in mammals and birds. "But we don't really understand yet how much we can do."

Briefer co-developed an algorithm that analyses pig grunts to tell whether the animal is experiencing a positive or negative emotion. Another, called DeepSqueak, judges whether rodents are in a stressed state based on their ultrasonic calls. A further initiative, Project CETI (the Cetacean Translation Initiative), plans to use machine learning to translate the communication of sperm whales.

Yet ESP says its approach is different, because it is not focused on decoding the communication of one species, but all of them. While Raskin acknowledges there will be a higher likelihood of rich, symbolic communication among social animals, for example primates, whales and dolphins, the goal is to develop tools that could be applied to the entire animal kingdom. "We're species agnostic," says Raskin. "The tools we develop can work across all of biology, from worms to whales."

The motivating intuition for ESP, says Raskin, is work that has shown that machine learning can be used to translate between different, sometimes distant human languages without the need for any prior knowledge.

This process starts with the development of an algorithm to represent words in a physical space. In this many-dimensional geometric representation, the distance and direction between points (words) describe how they meaningfully relate to each other (their semantic relationship). For example, "king" has a relationship to "man" with the same distance and direction that "woman" has to "queen." (The mapping is not done by knowing what the words mean but by looking, for example, at how often they occur near each other.)

It was later noticed that these shapes are similar for different languages. And then, in 2017, two groups of researchers working independently found a technique that made it possible to achieve translation by aligning the shapes. To get from English to Urdu, align their shapes and find the point in Urdu closest to the word's point in English. "You can translate most words decently well," says Raskin.
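
To make the alignment step concrete, here is a minimal numpy sketch of orthogonal Procrustes alignment between two toy embedding spaces. The word list, the vectors and the rotation are all invented for illustration; the real 2017 systems the article alludes to learn embeddings from large corpora and align full vocabularies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "English" embeddings: one row per word (all values invented).
words = ["king", "man", "queen", "woman"]
X = rng.normal(size=(4, 3))

# Pretend the "Urdu" space has the same shape, just rotated; real
# embedding spaces are only approximately alignable like this.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random rotation
Y = X @ Q + rng.normal(scale=0.01, size=X.shape)

# Orthogonal Procrustes: the rotation W minimising ||XW - Y|| is
# W = U V^T, where X^T Y = U S V^T (its singular value decomposition).
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# "Translate" king: map its vector into the target space and take
# the nearest neighbour there by cosine similarity.
mapped = X[0] @ W
sims = (Y @ mapped) / (np.linalg.norm(Y, axis=1) * np.linalg.norm(mapped))
print(words[int(np.argmax(sims))])  # -> king
```

The SVD step finds the rotation that best superimposes one shape on the other, which is why the technique depends on the two spaces having similar geometry in the first place.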

ESP's aspiration is to create these kinds of representations of animal communication, working on both individual species and many species at once, and then explore questions such as whether there is overlap with the universal human shape. "We don't know how animals experience the world," says Raskin, "but there are emotions, for example grief and joy, it seems some share with us and may well communicate about with others in their species. I don't know which will be the more incredible: the parts where the shapes overlap and we can directly communicate or translate, or the parts where we can't."

He adds that animals don't only communicate vocally. Bees, for example, let others know of a flower's location via a "waggle dance." There will be a need to translate across different modes of communication too.

The goal is "like going to the moon," acknowledges Raskin, but the idea also isn't to get there all at once. Rather, ESP's roadmap involves solving a series of smaller problems necessary for the bigger picture to be realised. This should see the development of general tools that can help researchers trying to apply AI to unlock the secrets of species under study.

For example, ESP recently published a paper (and shared its code) on the so-called "cocktail party problem" in animal communication, in which it is difficult to discern which individual in a group of the same animals is vocalising in a noisy social environment.

"To our knowledge, no one has done this end-to-end detangling [of animal sound] before," says Raskin. The AI-based model developed by ESP, which was tried on dolphin signature whistles, macaque coo calls and bat vocalisations, worked best when the calls came from individuals that the model had been trained on; but with larger datasets it was able to disentangle mixtures of calls from animals not in the training cohort.
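
ESP's model is a neural network trained end to end, and its released code is the reference for the actual method. As a rough illustration of what "detangling" means, though, the classic multi-microphone version of the cocktail party problem can be solved with independent component analysis. The sketch below, with synthetic signals standing in for overlapping callers, is only an analogy for the much harder single-sensor problem ESP tackles.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)

# Two synthetic "callers" vocalising at the same time.
s1 = np.sin(2 * np.pi * 7 * t)            # caller one: smooth tone
s2 = np.sign(np.sin(2 * np.pi * 3 * t))   # caller two: square-ish call
S = np.c_[s1, s2] + 0.05 * rng.normal(size=(4000, 2))

# Two recorders each pick up a different mixture of both callers.
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # mixing matrix
X = S @ A.T

# ICA recovers the independent sources, up to order and scale.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)          # shape (4000, 2)
print(recovered.shape)                    # one column per separated caller
```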

Another project involves using AI to generate novel animal calls, with humpback whales as a test species. The novel calls, made by splitting vocalisations into micro-phonemes (distinct units of sound lasting a hundredth of a second) and using a language model to "speak" something whale-like, can then be played back to the animals to see how they respond. "If the AI can identify what makes a random change versus a semantically meaningful one, it brings us closer to meaningful communication," explains Raskin. "It is having the AI speak the language, even though we don't know what it means yet."

A further project aims to develop an algorithm that ascertains how many call types a species has at its command by applying self-supervised machine learning, which does not require any labelling of data by human experts to learn patterns. In an early test case, it will mine audio recordings made by a team led by Christian Rutz, a professor of biology at the University of St Andrews, to produce an inventory of the vocal repertoire of the Hawaiian crow, a species that, Rutz discovered, has the ability to make and use tools for foraging and is believed to have a significantly more complex set of vocalisations than other crow species.

Rutz is particularly excited about the project's conservation value. The Hawaiian crow is critically endangered and only exists in captivity, where it is being bred for reintroduction to the wild. It is hoped that, by taking recordings made at different times, it will be possible to track whether the species's call repertoire is being eroded in captivity (specific alarm calls may have been lost, for example), which could have consequences for its reintroduction; that loss might be addressed with intervention. "It could produce a step change in our ability to help these birds come back from the brink," says Rutz, adding that detecting and classifying the calls manually would be labour intensive and error prone.

Meanwhile, another project seeks to understand automatically the functional meanings of vocalisations. It is being pursued with the laboratory of Ari Friedlaender, a professor of ocean sciences at the University of California, Santa Cruz. The lab studies how wild marine mammals, which are difficult to observe directly, behave underwater, and runs one of the world's largest tagging programmes. Small electronic "biologging" devices attached to the animals capture their location, type of motion and even what they see (the devices can incorporate video cameras). The lab also has data from strategically placed sound recorders in the ocean.

ESP aims to first apply self-supervised machine learning to the tag data to automatically gauge what an animal is doing (for example, whether it is feeding, resting, travelling or socialising) and then add the audio data to see whether functional meaning can be given to calls tied to that behaviour. (Playback experiments could then be used to validate any findings, along with calls that have been decoded previously.) This technique will be applied to humpback whale data initially; the lab has tagged several animals in the same group, so it is possible to see how signals are given and received. Friedlaender says he was "hitting the ceiling" in terms of what currently available tools could tease out of the data. "Our hope is that the work ESP can do will provide new insights," he says.

But not everyone is as gung ho about the power of AI to achieve such grand aims. Robert Seyfarth is a professor emeritus of psychology at the University of Pennsylvania who has studied social behaviour and vocal communication in primates in their natural habitat for more than 40 years. While he believes machine learning can be useful for some problems, such as identifying an animal's vocal repertoire, there are other areas, including the discovery of the meaning and function of vocalisations, where he is sceptical it will add much.

The problem, he explains, is that while many animals can have sophisticated, complex societies, they have a much smaller repertoire of sounds than humans. The result is that the exact same sound can be used to mean different things in different contexts, and it is only by studying the context (who the calling individual is, how they are related to others, where they fall in the hierarchy, who they have interacted with) that meaning can hope to be established. "I just think these AI methods are insufficient," says Seyfarth. "You've got to go out there and watch the animals."

There is also doubt about the concept that the shape of animal communication will overlap in a meaningful way with human communication. Applying computer-based analyses to human language, with which we are so intimately familiar, is one thing, says Seyfarth. But it can be quite different doing it to other species. "It is an exciting idea, but it is a big stretch," says Kevin Coffey, a neuroscientist at the University of Washington who co-created the DeepSqueak algorithm.

Raskin acknowledges that AI alone may not be enough to unlock communication with other species. But he refers to research that has shown many species communicate in ways "more complex than humans have ever imagined." The stumbling blocks have been our ability to gather sufficient data and analyse it at scale, and our own limited perception. "These are the tools that let us take off the human glasses and understand entire communication systems," he says.

Elon Musk and Silicon Valley’s Overreliance on Artificial Intelligence – The Wire

When the richest man in the world is being sued by one of the most popular social media companies, it's news. But while most of the conversation about Elon Musk's attempt to cancel his $44 billion contract to buy Twitter is focusing on the legal, social, and business components, we need to keep an eye on how the discussion relates to one of the tech industry's buzziest products: artificial intelligence.

The lawsuit shines a light on one of the most essential issues for the industry to tackle: What can and can't AI do, and what should and shouldn't AI do? The Twitter v. Musk contretemps reveals a lot about the thinking about AI in tech and startup land, and raises issues about how we understand the deployment of the technology in areas ranging from credit checks to policing.

At the core of Musk's claim for why he should be allowed out of his contract with Twitter is an allegation that the platform has done a poor job of identifying and removing spam accounts. Twitter has consistently claimed in quarterly filings that less than 5% of its active accounts are spam; Musk thinks it's much higher than that. From a legal standpoint, it probably doesn't really matter if Twitter's spam estimate is off by a few percent, and Twitter's been clear that its estimate is subjective and that others could come to different estimates with the same data. That's presumably why Musk's legal team lost in a hearing on July 19 when they asked for more time to perform detailed discovery on Twitter's spam-fighting efforts, suggesting that likely isn't the question on which the trial will turn.

Regardless of the legal merits, it's important to scrutinise the statistical and technical thinking from Musk and his allies. Musk's position is best summarised in his filing from July 15, which states: "In a May 6 meeting with Twitter executives, Musk was flabbergasted to learn just how meager Twitter's process was. Namely: Human reviewers randomly sampled 100 accounts per day (less than 0.00005% of daily users) and applied unidentified standards to somehow conclude every quarter for nearly three years that fewer than 5% of Twitter users were false or spam." The filing goes on to express the flabbergastedness of Musk by adding, "That's it. No automation, no AI, no machine learning."

Perhaps the most prominent endorsement of Musk's argument here came from venture capitalist David Sacks, who quoted it while declaring, "Twitter is toast." But there's an irony in Musk's complaint here: If Twitter were using machine learning for the audit as he seems to think it should, and only labeling spam that was similar to old spam, it would actually produce a lower, less accurate estimate than it has now.

There are three components to Musk's assertion that deserve examination: his basic statistical claim about what a representative sample looks like, his claim that the spam-level auditing process should be automated or use AI or machine learning, and an implicit claim about what AI can actually do.

On the statistical question, this is something any professional anywhere near the machine learning space should be able to answer (so can many high school students). Twitter uses a daily sampling of accounts to scrutinise a total of 9,000 accounts per quarter (averaging about 100 per calendar day) to arrive at its under-5% spam estimate. Though that sample of 9,000 users per quarter is, as Musk notes, a very small portion of the 229 million active users the company reported in early 2022, a statistics professor (or student) would tell you that that's very much not the point. Statistical significance isn't determined by what percentage of the population is sampled but simply by the actual size of the sample in question. As Facebook whistleblower Sophie Zhang put it, you can make the comparison to soup: "It doesn't matter if you have a small or giant pot of soup, if it's evenly mixed you just need a spoonful to taste-test."

The whole point of statistical sampling is that you can learn most of what you need to know about the variety of a larger population by studying a much smaller but decently sized portion of it. Whether the person drawing the sample is a scientist studying bacteria, or a factory quality inspector checking canned vegetables, or a pollster asking about political preferences, the question isn't "what percentage of the overall whole am I checking," but rather "how much should I expect my sample to look like the overall population for the characteristics I'm studying?" If you had to crack open a large percentage of your cans of tomatoes to check for their quality, you'd have a hard time making a profit, so you want to check the fewest possible to get within a reasonable range of confidence in your findings.

While this thinking does go against the grain of certain impulses (there's a reason why many people make this mistake), there is also a way to make this approach to sampling more intuitive. Think of the goal in setting sample size as getting a reasonable answer to the question, "If I draw another sample of the same size, how different would I expect it to be?" A classic approach to explaining this problem is to imagine you've bought a great mass of marbles that are supposed to come in a specific ratio: 95% purple marbles and 5% yellow marbles. You want to do a quality inspection to ensure the delivery is good, so you load them into one of those bingo game hoppers, turn the crank, and start counting the marbles you draw in each color. Let's say your first sample of 20 marbles has 19 purple and one yellow; should you be confident that you got the right mix from your vendor? You can probably intuitively understand that the next 20 random marbles you draw could end up being very different, with zero yellows or seven. But what if you draw 1,000 marbles, around the same as the typical political poll? What if you draw 9,000 marbles? The more marbles you draw, the more you'd expect the next drawing to look similar, because it's harder to hide random fluctuations in larger samples.

There are online calculators that can let you run the numbers yourself. If you only draw 20 marbles and get one yellow, you can have 95% confidence that the yellows would be between 0.13% and 24.9% of the total, which is not very exact. If you draw 1,000 marbles and get 50 yellows, you can have 95% confidence that yellows would be between 3.7% and 6.5% of the total; closer, but perhaps not something you'd sign your name to in a quarterly filing. At 9,000 marbles with 450 yellow, you can have 95% confidence the yellows are between 4.56% and 5.47%; you're now accurate to within a range of less than half a percent, and at that point Twitter's lawyers presumably told them they'd done enough for their public disclosure.
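
Those calculator results can be reproduced in a few lines. The sketch below uses the exact (Clopper-Pearson) binomial interval, which appears to be what such calculators report; a Wilson or normal-approximation interval would give slightly different bounds.

```python
from statsmodels.stats.proportion import proportion_confint

# (yellow marbles seen, total marbles drawn) at three sample sizes
samples = [(1, 20), (50, 1000), (450, 9000)]

for yellows, n in samples:
    # method="beta" is the exact Clopper-Pearson 95% interval
    low, high = proportion_confint(yellows, n, alpha=0.05, method="beta")
    print(f"n={n:>5}: {low:.2%} to {high:.2%}")

# n=   20: 0.13% to 24.87%   (almost uninformative)
# n= 1000: 3.73% to 6.54%
# n= 9000: 4.56% to 5.47%    (the precision behind Twitter's audit)
```

Note how the width of the interval depends on the 9,000 draws themselves, not on what fraction of the 229 million marbles, or accounts, they represent.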

Printed Twitter logos are seen in this picture illustration taken April 28, 2022. Photo: Reuters/Dado Ruvic/Illustration/File Photo

This reality, that statistical sampling works to tell us about large populations based on much smaller samples, underpins every area where statistics is used, from checking the quality of the concrete used to make the building you're currently sitting in, to ensuring the reliable flow of internet traffic to the screen you're reading this on.

It's also what drives all current approaches to artificial intelligence today. Specialists in the field almost never use the term "artificial intelligence" to describe their work, preferring "machine learning." But another common way to describe the entire field as it currently stands is "applied statistics." Machine learning today isn't really computers thinking in anything like what we assume humans do (to the degree we even understand how humans think, which isn't a great degree); it's mostly pattern matching and identification, based on statistical optimisation. If you feed a convolutional neural network thousands of images of dogs and cats and then ask the resulting model to determine if the next image is of a dog or a cat, it'll probably do a good job, but you can't ask it to explain what makes a cat different from a dog on any broader level; it's just recognising the patterns in pictures, using a layering of statistical formulas.

Stack up statistical formulas in specific ways, and you can build a machine learning algorithm that, fed enough pictures, will gradually build up a statistical representation of edges, shapes, and larger forms until it recognises a cat, based on its similarity to thousands of other images of cats it was fed. There's also a way in which statistical sampling plays a role: You don't need pictures of all the dogs and cats, just enough to get a representative sample, and then your algorithm can infer what it needs to about all the other pictures of dogs and cats in the world. And the same goes for every other machine learning effort, whether it's an attempt to predict someone's salary using everything else you know about them, with a boosted random forests algorithm, or to sort a list of customers into distinct groups, with an algorithm like a support vector machine.
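
For readers who want to see what "a layering of statistical formulas" looks like in practice, here is a deliberately tiny PyTorch sketch of a cat-versus-dog convolutional network. The architecture, sizes and names are all illustrative, not any production classifier; untrained, it outputs meaningless scores, and training would only fit its parameters to patterns in the example images.

```python
import torch
from torch import nn

# A deliberately tiny convolutional network: each layer is just a
# parameterised statistical transform, stacked so that edge-like
# filters feed into shape-like filters and so on during training.
class TinyCatDogNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),   # edge-like filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # simple shapes
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)      # cat vs. dog score

    def forward(self, x):                                 # x: (N, 3, 64, 64)
        h = self.features(x)
        return self.classifier(h.flatten(1))

logits = TinyCatDogNet()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # (1, 2): no "understanding", just fitted patterns
```

Nothing in that stack "knows" what a cat is; every layer is a parameterised formula whose numbers get tuned to match the sample of images it was fed.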

You don't absolutely have to understand statistics as well as a student who's recently taken a class in order to understand machine learning, but it helps. Which is why the statistical illiteracy paraded by Musk and his acolytes here is at least somewhat surprising.

But more important, in order to have any basis for overseeing the creation of a machine-learning product, or to have a rationale for investing in a machine-learning company, it's hard to see how one could be successful without a decent grounding in the rudiments of machine learning, and where and how it is best applied to solve a problem. And yet, team Musk here is suggesting it lacks exactly that knowledge.

Once you understand that all machine learning today is essentially pattern-matching, it becomes clear why you wouldn't rely on it to conduct an audit such as the one Twitter performs to check for the proportion of spam accounts. "They're hand-validating so that they ensure it's high-quality data," explained security professional Leigh Honeywell, who's been a leader at firms like Slack and Heroku, in an interview. She added that any data you pull from your machine learning efforts "will by necessity be not as validated as those efforts." If you only rely on patterns of spam you've already identified in the past and already engineered into your spam-detection tools in order to find out how much spam there is on your platform, you'll only recognise old spam patterns, and fail to uncover new ones.

Where Twitter should be using automation and machine learning to identify and remove spam is outside of this audit function, which the company seems to do. It wouldn't otherwise be possible to suspend half a million accounts every day and lock millions of accounts each week, as CEO Parag Agrawal claims. In conversations I've had with cybersecurity workers in the field, it's quite clear that a large amount of automation is used at Twitter (though machine learning specifically is actually relatively rare in the field, because the results often aren't as good as those of other methods, marketing claims by allegedly AI-based security firms to the contrary).

At least in public claims related to this lawsuit, prominent Silicon Valley figures are suggesting they have a different understanding of what machine learning can do, and when it is and isn't useful. This disconnect, between how many nontechnical leaders in that world talk about AI and what it actually is, has significant implications for how we will ultimately come to understand and use the technology.

The general disconnect between the actual work of machine learning and how it's touted by many company and industry leaders is something data scientists often chalk up to marketing. It's very common to hear data scientists in conversation among themselves declare that AI is "just a marketing term." It's also quite common to have companies using no machine learning at all describe their work as AI to investors and customers, who rarely know the difference or even seem to care.

This is a basic reality in the world of tech. In my own experience talking with investors who make investments in AI technology, it's often quite clear that they know almost nothing about these basic aspects of how machine learning works. I've even spoken to CEOs of rather large companies, whose products rely at their core on novel machine learning efforts, who also clearly have no understanding of how the work actually gets done.

Not knowing or caring how machine learning works, what it can or can't do, and where its application can be problematic could lead society to significant peril. If we don't understand the way machine learning actually works (most often by identifying a pattern in some dataset and applying that pattern to new data), we can be led deep down a path in which machine learning wrongly claims, for example, to measure someone's face for trustworthiness (when this is entirely based on surveys in which people reveal their own prejudices), or that crime can be predicted (when many hyperlocal crime numbers are highly correlated with more police officers being present in a given area, who then make more arrests there), based almost entirely on a set of biased data or wrong-headed claims.

If we're going to properly manage the influence of machine learning on our society, on our systems and organisations and our government, we need to make sure these distinctions are clear. It starts with establishing a basic level of statistical literacy, and moves on to recognising that machine learning isn't magic and that it isn't, in any traditional sense of the word, intelligent; that it works by pattern-matching to data, that the data has various biases, and that the overall project can produce many misleading and/or damaging outcomes.

It's an understanding one might have expected, or at least hoped, to find among some of those investing most of their life, effort, and money into machine-learning-related projects. If even people that deep aren't making those efforts to sort fact from fiction, it's a poor omen for the rest of us, and for the regulators and other officials who might be charged with keeping them in check.

This article was originally published on Future Tense, a partnership between Slate magazine, Arizona State University, and New America.

PhD Candidate in Machine Learning and Signal Processing job with NORWEGIAN UNIVERSITY OF SCIENCE & TECHNOLOGY – NTNU | 303403 – Times Higher…

About the position

At the Department of Electronic Systems (IES) we have a vacancy for a PhD candidate in machine learning and signal processing.

The position is associated with the Centre for Geophysical Forecasting (CGF) at NTNU, which is one of the Norwegian centres for research-driven innovation, funded by the Research Council of Norway and industry partners. The goal of the CGF is to become a world-leading research and innovation hub for the geophysical sciences, creating innovative new products and services in earth sensing and forecasting domains. As the global ecosystem enters a period of dramatic change, there is a strong need for accurate monitoring and forecasting of the Earth. Machine learning and signal processing play important roles here.

The PhD project will focus on applying state-of-the-art machine learning and signal processing techniques for the effective analysis of massive geophysical data sets. The models should be able to produce predictions and enable early warning systems in various geoscience applications. Special focus will be devoted to the interpretability of the model predictions.

This PhD project is further part of the PERSEUS doctoral programme: a collaboration between NTNU, Norway's largest university, 11 top-level academic partners in 8 European countries, and 8 industrial partners within sectors of high societal relevance. PERSEUS will recruit 40 PhD candidates who want to contribute to a smart, safe and sustainable future. We are looking for highly skilled PhD candidates motivated to approach societal challenges within one of the following thematic areas:

The current PhD, with its focus on machine learning and signal processing, goes particularly well with the first area in this list.

All participants in the PERSEUS network bring unique and important qualities with them into the doctoral programme. The PERSEUS PhD candidates will have the opportunity to collaborate with researchers in the partner institutions and in other project consortia, and benefit from these collaborative research and education activities. You will work alongside other highly motivated and talented PhD candidates and researchers. You will also have access to the knowledge base, state-of-the-art research infrastructure, and impact orientation of the partners in the team.

In addition to your education and development within the thematic research area, you will gain transferable skills within project development and management, science communication, research ethics, innovation and entrepreneurial thinking, as well as basic university didactics.

You will be employed by NTNU. During the PhD period, you will do a 2-3 month international stay and a 1-2 month national stay with one of the PERSEUS partners. This will most fruitfully be achieved by maintaining strong contact with partners in the CGF. These stays will allow you to extend your network within academia and industry, and to learn about your research area from an academic, innovation, and societal perspective.

The duration of the PhD employment is 36 months.

Starting gross salary is NOK 501,200/year (approximately EUR 49,312/year at the July 2022 exchange rate).

We are looking for PhD candidates of all nationalities who want to contribute to our quest to create knowledge for a better world. PERSEUS recruits candidates according to the EU's mobility rule: applicants cannot have spent more than 12 months in Norway during the last 3 years, must be within the first four years of their research careers, and must not yet have been awarded a doctoral degree.

We believe in fair and open processes. All applications will be considered through a transparent evaluation procedure, with independent observers involved.

The position's place of work is the NTNU campus in Trondheim. You will report to the Head of Department.

We look forward to welcoming you to the CGF and the PERSEUS teams.

Duties of the position

Required selection criteria

In addition, the candidate must have:

The appointment is to be made in accordance with the Regulations concerning the degrees of Philosophiae Doctor (PhD) and Philosophiae Doctor (PhD) in artistic research, and the national guidelines for appointment as PhD candidate, postdoctor and research assistant.

Preferred selection criteria

Personal characteristics

We offer

Salary and conditions

PhD candidates are remunerated in code 1017, normally at NOK 501 200 per annum before tax; the salary may be negotiated upward for candidates with a high level of qualifications and research experience. From the salary, 2% is deducted as a contribution to the Norwegian Public Service Pension Fund.
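As a worked example using only the figures stated in this posting (the exchange rate is the one implied by the posting's own July 2022 conversion, not a live rate):

```python
# Worked arithmetic from the figures in this posting; not an official calculator.
gross_nok = 501_200                          # annual gross salary in NOK
pension = 0.02 * gross_nok                   # 2% pension fund contribution
print(f"Pension contribution: {pension:,.0f} NOK/year")        # 10,024
print(f"Gross minus pension:  {gross_nok - pension:,.0f} NOK") # 491,176
# Exchange rate implied by the posting's approx. EUR 49,312 figure:
print(f"Implied rate: {gross_nok / 49_312:.2f} NOK per EUR")   # about 10.16
```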

The period of employment is 3 years.

Appointment to a PhD position requires admission to the PhD programme in Electronic Systems within three months of employment, and participation in an organized PhD programme during the employment period.

The engagement is to be made in accordance with the regulations in force concerning State Employees and Civil Servants and the acts relating to Control of the Export of Strategic Goods, Services and Technology. Candidates who, by assessment of the application and attachments, are found to be in conflict with the criteria in the latter act will be excluded from recruitment to NTNU. After the appointment, you must expect that there may be changes in your area of work.

It is a prerequisite that you can be present at and accessible to the institution on a daily basis.

About the application

The application and supporting documentation to be used as the basis for the assessment must be in English.

Publications and other scientific work must accompany the application. Please note that applications are evaluated only on the information available by the application deadline. You should ensure that your application shows clearly how your skills and experience meet the criteria set out above.

Please submit your application electronically via the Jobbnorge website. Applications submitted elsewhere, and incomplete applications, will not be considered. Applicants must upload the following documents by the closing date:

In the evaluation of which candidate is best qualified, emphasis will be placed on education, experience and personal suitability.

NTNU is committed to following evaluation criteria for research quality in accordance with The San Francisco Declaration on Research Assessment (DORA).

Working at NTNU

NTNU believes that inclusion and diversity are our strength. We want to recruit people with different competencies, educational backgrounds, life experiences and perspectives to contribute to fulfilling our social responsibilities within education and research. We will accommodate our employees' needs.

NTNU is working actively to increase the number of women employed in scientific positions and has a number of resources to promote equality.

The city of Trondheim is a modern European city with a rich cultural scene. Trondheim is the innovation capital of Norway, with a population of 200,000. The Norwegian welfare state, including healthcare, schools, kindergartens and overall equality, is probably the best of its kind in the world. Professionally run, subsidized day-care for children is easily available. Furthermore, Trondheim offers great opportunities for education (including international schools), possibilities to enjoy nature, culture and family life, low crime rates and clean air.

As an employee at NTNU, you must at all times adhere to the changes that developments in the field entail and to the organizational changes that are adopted.

A public list of applicants with name, age, job title and municipality of residence is prepared after the application deadline. If you wish to be exempted from entry on the public applicant list, this must be justified. Exemption is assessed in accordance with current legislation. You will be notified if the exemption is not granted.

If you have any questions about the position, please contact Giampiero Salvi (giampiero.salvi@ntnu.no).

Application deadline: 30.09.2022.

NTNU - knowledge for a better world

The Norwegian University of Science and Technology (NTNU) creates knowledge for a better world and solutions that can change everyday life.

Department of Electronic Systems

The digitalization of Norway is impossible without electronic systems. We are Norway's leading academic environment in this field, and we contribute expertise in areas ranging from nanoelectronics, photonics, signal processing, radio technology and acoustics to satellite technology and autonomous systems. Knowledge of electronic systems is also vital for addressing important challenges in transport, energy, the environment, and health. The Department of Electronic Systems is one of seven departments in the Faculty of Information Technology and Electrical Engineering.

Deadline: 30th September 2022
Employer: NTNU - Norwegian University of Science and Technology
Municipality: Trondheim
Scope: Fulltime
Duration: Temporary
Place of service: NTNU Campus Trondheim

The rest is here:
PhD Candidate in Machine Learning and Signal Processing job with NORWEGIAN UNIVERSITY OF SCIENCE & TECHNOLOGY - NTNU | 303403 - Times Higher...

Covision Quality joins NVIDIA Metropolis to scale its industrial visual inspection software leveraging unsupervised machine learning – GlobeNewswire

BRESSANONE, Italy, July 25, 2022 (GLOBE NEWSWIRE) -- Covision Quality, a leading provider of visual inspection software based on unsupervised machine learning technology, today announced it has joined NVIDIA Metropolis, a partner program, application framework, and set of developer tools that bring to market a new generation of vision AI applications making the world's most important spaces and operations safer and more efficient.

Covision Quality's interface from the perspective of the end-of-line quality control operator. In this case, the red border on the image of the manufactured part indicates that the part is not OK and therefore cannot be sent to the end customer; it must be discarded.

Thanks to its unsupervised machine learning technology, the Covision Quality software can be trained in an hour on average and reduces pseudo-scrap rates by up to 90% for its customers. Workstations deployed at customer sites harness NVIDIA RTX A5000 GPU-accelerated computing, which allows the software to run in real time: processing images, inspecting components, and communicating decisions to the PLC. In addition, Covision Quality leverages NVIDIA Metropolis, the TensorRT SDK, and CUDA software.
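Covision's actual models are proprietary and not described in the release, but a minimal sketch can show the general unsupervised-inspection idea it refers to: model the appearance of good parts only, then flag parts that deviate strongly from that model, with no defect labels required. PCA reconstruction error stands in here for whatever model the product really uses; the data, image size, and threshold are all illustrative assumptions.

```python
# Illustrative sketch only: Covision's actual models are proprietary. This shows
# the general unsupervised-inspection idea: model "good" parts, flag outliers.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Stand-in data: flattened 32x32 grayscale images of known-good parts.
good_parts = rng.normal(0.5, 0.05, size=(500, 32 * 32))

# Learn a low-dimensional model of normal appearance from good parts only.
model = PCA(n_components=20).fit(good_parts)

def anomaly_score(images):
    """Reconstruction error: high values mean the part deviates from normal."""
    recon = model.inverse_transform(model.transform(images))
    return np.mean((images - recon) ** 2, axis=1)

# Set a reject threshold from the score distribution on known-good parts.
threshold = np.quantile(anomaly_score(good_parts), 0.99)

# Simulated defective parts deviate strongly and should exceed the threshold.
defective = good_parts[:5] + rng.normal(0, 0.3, size=(5, 32 * 32))
print(anomaly_score(defective) > threshold)  # expected: [True ... True]
```

Training only on good parts is what makes the one-hour training claim plausible in general for this class of methods: no defect examples need to be collected or labeled before deployment.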

NVIDIA Metropolis makes it easier and more cost-effective for enterprises, governments, and integration partners to use world-class AI-enabled solutions to improve critical operational efficiency and solve safety problems. The NVIDIA Metropolis ecosystem contains a large and growing roster of members who are investing in the most advanced AI techniques and the most efficient deployment platforms, and who take an enterprise-class approach to their solutions. Members can gain early access to NVIDIA platform updates to further enhance and accelerate their AI application development efforts, and the program also offers opportunities to collaborate with industry-leading experts and other AI-driven organizations.

Covision Quality is a spin-off of Covision Lab, a leading European computer vision and machine learning application center and company builder. Covision Quality licenses its visual inspection software product to manufacturing companies in several industries, ranging from metal manufacturing to packaging. Customers of Covision Quality include GKN Sinter Metals, a global market leader for sinter metal components, and Aluflexpack Group, a leading international manufacturer of flexible packaging.

Franz Tschimben, CEO of Covision Quality, sees an important value-add in joining the NVIDIA Metropolis program: "Joining NVIDIA Metropolis marks yet another milestone in our company's young history and in our relationship with NVIDIA, which started when our company joined the NVIDIA Inception program last year. It is a testament to the great work the team is doing in providing a scalable visual inspection software product to our customers, drastically reducing time to deployment of visual inspection systems and pseudo-scrap rates. We expect that NVIDIA Metropolis, which sits at the heart of many developments happening in the industry today, will give us a boost in our go-to-market efforts and support us in connecting with customers and system integrators."

About Covision Quality
Covision Quality licenses its visual inspection software product to manufacturing companies in several industries, ranging from metal manufacturing to packaging. Thanks to its unsupervised machine learning technology, the Covision Quality software can be trained in an hour on average and reduces pseudo-scrap rates for its customers by up to 90%. Covision Quality received the Cowen Startup Award at the Automate Show 2022 in Detroit, United States.

Covision Quality is a spin-off of Covision Lab, a leading European computer vision and machine learning application center and company builder. For more information, visit http://www.covisionquality.com

Contact information:
Covision Quality
https://www.covisionquality.com/en
39042 Bressanone, Italy
+39 333 4421494
info@covisionlab.com

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/19998b6c-83b8-41df-8e60-c5d558e3e408

Continued here:
Covision Quality joins NVIDIA Metropolis to scale its industrial visual inspection software leveraging unsupervised machine learning - GlobeNewswire

Global Machine Learning Market is Expected to Grow at a CAGR of 39.2 % by 2028 – Digital Journal

According to the latest research by SkyQuest Technology, the global machine learning market was valued at US$ 16.2 billion in 2021 and is expected to reach US$ 164.05 billion by 2028, a CAGR of 39.2% over the forecast period 2022-2028. The research provides an up-to-date analysis of the current machine learning market landscape, latest trends, drivers, and overall market environment.
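The projection is ordinary compound growth, and a quick check using only the report's own numbers confirms the three figures are mutually consistent over the seven year-steps from 2021 to 2028:

```python
# Sanity-checking the report's own numbers: 16.2 -> 164.05 (USD billions)
# over the 7 year-steps from 2021 to 2028 at a compound annual growth rate.
start, end, years = 16.2, 164.05, 2028 - 2021
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")                          # about 39.2%
print(f"2028 value at 39.2%: {start * 1.392 ** years:.1f}") # about 164.1
```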

Machine learning (ML), a branch of artificial intelligence (AI), lets software systems forecast outcomes more accurately without being explicitly programmed to do so: machine learning algorithms use historical data as input to anticipate new output values. As organizations adopt more advanced security frameworks, the global machine learning market is anticipated to grow, with machine learning becoming a prominent trend in security analytics. Due to the massive amount of data being generated and communicated over numerous networks, cyber professionals struggle to identify and assess potential cyber threats and attacks.
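A minimal sketch of that definition, on synthetic data: fit a model on historical input/output pairs, then anticipate output values for inputs it has not seen. The model choice and data here are illustrative assumptions, not anything from the report.

```python
# Minimal sketch of the definition above: learn from historical data, then
# anticipate output values for new inputs. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X_hist = rng.uniform(0, 10, size=(100, 1))             # historical inputs
y_hist = 3.0 * X_hist.ravel() + rng.normal(0, 1, 100)  # historical outputs

model = LinearRegression().fit(X_hist, y_hist)         # train on history
print(model.predict(np.array([[4.0], [8.5]])))         # anticipate new values
```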

Machine-learning algorithms can help businesses and security teams anticipate, detect, and recognise cyber-attacks more quickly as these risks become more widespread and sophisticated. For example, supply chain attacks in the US increased by 42% in the first quarter of 2021, affecting up to 7,000,000 people. In another direction, AT&T and IBM aim to demonstrate the promise of edge computing and 5G wireless networking for the digital revolution: they have created virtual environments that, paired with IBM hybrid cloud and AI technologies, let business clients experience the possibilities of an AT&T connection.

Computer vision is a cutting-edge technique that combines machine learning and deep learning for medical imaging diagnosis. It has been adopted by the Microsoft InnerEye programme, which focuses on image diagnostic tools for image analysis. In another example, using minute samples of linguistic data obtained via clinical verbal cognition tests, an AI model created by a team of researchers from IBM and Pfizer can forecast the eventual onset of Alzheimer's disease in healthy people with 71 percent accuracy.

Read the market research report: Global Machine Learning Market by Component (Solutions and Services), Enterprise Size (SMEs and Large Enterprises), Deployment (Cloud, On-Premise), End-User [Healthcare, Retail, IT and Telecommunications, Banking, Financial Services and Insurance (BFSI), Automotive & Transportation, Advertising & Media, Manufacturing, Others (Energy & Utilities, etc.)], and Region - Forecast and Analysis 2022-2028, by SkyQuest

Get Sample PDF: https://skyquestt.com/sample-request/machine-learning-market

The large-enterprise segment dominated the machine learning market in 2021, as data science and artificial intelligence technologies are increasingly used to incorporate quantitative insights into business operations. For instance, under a contract between Pitney Bowes and IBM, IBM will provide managed infrastructure, IT automation, and machine learning services to help Pitney Bowes adopt hybrid cloud computing in support of its global business strategy and goals.

Small and midsized firms are expected to grow considerably over the forecast period. AI and ML are projected to be the main technologies enabling SMEs to reduce ICT investment and access digital resources. For instance, SMEs and other organizations are reportedly already using IPwe's technology, including the IPwe Platform, IPwe Registry, and Global Patent Marketplace.

The healthcare sector held the biggest share of the global machine learning market in 2021, owing to rapid research and development by the industry's leading players and to partnerships formed in an effort to increase market share. For instance, under the two companies' signed definitive agreement, Francisco Partners will buy IBM's healthcare data and analytics assets that are presently part of the Watson Health business. Francisco Partners is an established worldwide investment company with a focus on working with technology businesses. The acquisition covers a wide range of assets, including Health Insights, MarketScan, Clinical Development, Social Program Management, Micromedex, and imaging software services.

The prominent market players are constantly adopting various innovation and growth strategies to capture more market share. The key market players are IBM Corporation, SAP SE, Oracle Corporation, Hewlett Packard Enterprise Company, Microsoft Corporation, Amazon Inc., Intel Corporation, Fair Isaac Corporation, SAS Institute Inc., BigML, Inc., among others.

The report published by SkyQuest Technology Consulting provides in-depth qualitative insights, historical data, and verifiable projections about Machine Learning Market Revenue. The projections featured in the report have been derived using proven research methodologies and assumptions.

Speak With Our Analyst: https://skyquestt.com/speak-with-analyst/machine-learning-market

Report Findings

What does this Report Deliver?

SkyQuest has Segmented the Global Machine Learning Market based on Component, Enterprise Size, Deployment, End-User, and Region:

Read Full Report: https://skyquestt.com/report/machine-learning-market

Key Players in the Global Machine Learning Market

About Us: SkyQuest Technology Group is a global market intelligence, innovation management & commercialization organization that connects innovation to new markets, networks & collaborators in pursuit of the Sustainable Development Goals.

Find insightful blogs and case studies on our website: Market Research Case Studies

Original post:
Global Machine Learning Market is Expected to Grow at a CAGR of 39.2 % by 2028 - Digital Journal