From the archives: A forecast on artificial intelligence, from the 1980s and beyond – Popular Science

To mark our 150th year, we're revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation, with an added hint of modern context. Explore the Notable pages and check out all our anniversary coverage here.

Social psychologist Frank Rosenblatt had such a passion for brain mechanics that he built a computer model fashioned after a human brain's neural network, and trained it to recognize simple patterns. He called his IBM 704-based model the Perceptron. A New York Times headline called it an "Embryo of Computer Designed to Read and Grow Wiser." Popular Science called Perceptrons "Machines that learn." At the time, Rosenblatt claimed it would be possible to build brains that could "reproduce themselves on an assembly line and which would be conscious of their existence." The year was 1958.

Many assailed Rosenblatt's approach to artificial intelligence as being computationally impractical and hopelessly simplistic. A critical 1969 book by Turing Award winner Marvin Minsky marked the onset of a period dubbed the "AI winter," when little funding was devoted to such research, a short revival in the early '80s notwithstanding.

In a 1989 Popular Science piece, "Brain-Style Computers," science and medical writer Naomi Freundlich was among the first journalists to anticipate the thaw of that long winter, which lingered into the '90s. Even before Geoffrey Hinton, considered one of the founders of modern deep-learning techniques, published his seminal 1992 explainer in Scientific American, Freundlich's reporting offered one of the most comprehensive insights into what was about to unfold in AI over the next two decades.

The resurgence of more-sophisticated neural networks, wrote Freundlich, was largely due to "the availability of low-cost memory, greater computer power, and more-sophisticated learning laws." Of course, the missing ingredient in 1989 was data: the vast troves of information, labeled and unlabeled, that today's deep-learning neural networks inhale to train themselves. It was the rapid expansion of the internet, starting in the late 1990s, that made big data possible and, coupled with the other ingredients noted by Freundlich, unleashed AI, nearly half a century after Rosenblatt's Perceptron debut.

I walked into the semi-circular lecture hall at Columbia University and searched for a seat within the crowded tiered gallery. An excited buzz petered off to a few coughs and rustling paper as a young man wearing circular wire-rimmed glasses walked toward the lectern carrying a portable stereo tape player under his arm. Dressed in a tweed jacket and corduroys, he looked like an Ivy League student about to play us some of his favorite rock tunes. But instead, when he pushed the "on" button, a string of garbled baby talk, more specifically baby-computer talk, came flooding out. At first unintelligible, really just bursts of sounds, the child-robot voice repeated the string over and over until it became ten distinct words.

"This is a recording of a computer that taught itself to pronounce English text overnight," said Terrence Sejnowski, a biophysicist at Johns Hopkins University. A jubilant crowd broke into animated applause. Sejnowski had just demonstrated a learning computer, one of the first of a radically new kind of artificial-intelligence machine.

Called neural networks, these computers are loosely modeled after the interconnected web of neurons, or nerve cells, in the brain. They represent a dramatic change in the way scientists are thinking about artificial intelligence: a leaning toward a more literal interpretation of how the brain functions. The reason: Although some of today's computers are extremely powerful processors that can crunch numbers at phenomenal speeds, they fail at tasks a child does with ease, such as recognizing faces, learning to speak and walk, or reading printed text. According to one expert, the visual system of one human being can do more image processing than all the supercomputers in the world put together. These kinds of tasks require an enormous number of rules and instructions embodying every possible variable. Neural networks do not require this kind of programming; rather, like humans, they seem to learn by experience.

For the military, this means target-recognition systems, self-navigating tanks, and even smart missiles that chase targets. For the business world, neural networks promise handwriting- and face-recognition systems, as well as computer loan officers and bond traders. And for the manufacturing sector, quality-control vision systems and robot control are just two goals.

Interest in neural networks has grown exponentially. A recent meeting in San Diego drew 2,000 participants. More than 100 companies are working on neural networks, including several small start-ups that have begun marketing neural-network software and peripherals. Some computer giants, such as IBM, AT&T, Texas Instruments, Nippon Electric Co., and Fujitsu, are also pressing ahead with research. And the Defense Advanced Research Projects Agency (DARPA) released a study last year recommending neural-network funding of $400 million over eight years. It would be one of the largest programs ever undertaken by the agency.

Ever since the early days of computer science, the brain has been a model for emerging machines. But compared with the brain, today's computers are little more than glorified calculators. The reason: A computer has a single processor operating on programmed instructions. Each task is divided into many tiny steps that are performed quickly, one at a time. This pipeline approach leaves computers vulnerable to a condition commonly found on California freeways: One stalled car, one unsolvable step, can back up traffic indefinitely. The brain, in contrast, is made up of billions of neurons, or nerve cells, each connected to thousands of others. A specific task enlists the activity of whole fields of neurons; the communication pathways among them lead to solutions.

The excitement over neural networks is not new and neither are the brain makers. Warren S. McCulloch, a psychiatrist at the Universities of Illinois and Chicago, and his student Walter H. Pitts began studying neurons as logic devices in the early 1940s. They wrote an article outlining how neurons communicate with each other electrochemically: A neuron receives inputs from surrounding cells. If the sum of the inputs is positive and above a certain preset threshold, the neuron will fire. Suppose, for example, that a neuron has a threshold of two and has two connections, A and B. The neuron will be on only if both A and B are on. This is called a logical "and" operation. Another logic operation, called the "inclusive or," is achieved by setting the threshold at one: If either A or B is on, the neuron is on. If both A and B are on, then the neuron is also on.
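The threshold logic McCulloch and Pitts described maps almost directly onto a few lines of modern code. The short Python sketch below is purely illustrative (the function name and the truth-table loop are ours, not anything from the original article): it shows how a threshold of two yields the logical "and" and a threshold of one yields the inclusive "or."

# Illustrative sketch of a McCulloch-Pitts-style threshold unit.
def threshold_neuron(inputs, threshold):
    # The unit "fires" (returns 1) when the sum of its binary inputs reaches the threshold.
    return 1 if sum(inputs) >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        logical_and = threshold_neuron([a, b], threshold=2)   # on only if both A and B are on
        inclusive_or = threshold_neuron([a, b], threshold=1)  # on if either (or both) is on
        print(f"A={a} B={b}  and={logical_and}  or={inclusive_or}")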

In 1958 Cornell University psychologist Frank Rosenblatt used hundreds of these artificial neurons to develop a two-layer pattern-learning network called the perceptron. The key to Rosenblatt's system was that it learned. In the brain, learning occurs predominantly by modification of the connections between neurons. Simply put, if two neurons are active at once and they're connected, then the synapses (connections) between them will get stronger. This learning rule is called Hebb's rule and was the basis for learning in the perceptron. Using Hebb's rule, the network appears to learn by experience because connections that are used often are reinforced. The electronic analog of a synapse is a resistor, and in the perceptron resistors controlled the amount of current that flowed between transistor circuits.
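As a rough illustration of the Hebbian idea described above (not the perceptron's actual resistor hardware), the toy Python sketch below strengthens a single connection whenever its two units happen to be active at the same time; the learning rate and the activity pattern are arbitrary choices of ours.

# Toy sketch of Hebb's rule: co-activation strengthens the connection.
learning_rate = 0.1
weight = 0.0  # connection strength between unit A and unit B

# Each pair records whether A and B were active (1) or silent (0) at the same moment.
activity = [(1, 1), (1, 0), (1, 1), (0, 1), (1, 1)]

for a, b in activity:
    weight += learning_rate * a * b  # reinforced only when both units fire together

print(f"final connection strength: {weight:.1f}")  # 0.3 after three co-activations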

Other simple networks were also built at this time. Bernard Widrow, an electrical engineer at Stanford University, developed a machine called Adaline (for adaptive linear neurons) that could translate speech, play blackjack, and predict weather for the San Francisco area better than any weatherman. The neural network field was an active one until 1969.

In that year the Massachusetts Institute of Technology's Marvin Minsky and Seymour Papert, major forces in the rule-based AI field, wrote a book called Perceptrons that attacked the perceptron design as being too simple to be serious. The main problem: The perceptron was a two-layer system, input led directly into output, and learning was limited. "What Rosenblatt and others wanted to do basically was to solve difficult problems with a knee-jerk reflex," says Sejnowski.

The other problem was that perceptrons were limited in the logic operations they could execute, and therefore they could only solve clearly definable problems, deciding between an L and a T, for example. The reason: Perceptrons could not handle the third logic operation, called the "exclusive or." This operation requires that the logic unit turn on if either A or B is on, but not if both are.
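The "exclusive or" limitation is easy to verify numerically. The brute-force Python check below is our own illustration, with an arbitrary grid of weights and thresholds: it searches for a single threshold unit, one layer of weights feeding directly into an output, that reproduces XOR, and finds none, which is exactly the gap a hidden layer later fills.

import itertools

def unit(a, b, w1, w2, t):
    # A single threshold unit: fire if the weighted sum of the two inputs reaches t.
    return 1 if (w1 * a + w2 * b) >= t else 0

xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
grid = [x / 2 for x in range(-8, 9)]  # weights and thresholds from -4.0 to 4.0

solutions = [
    (w1, w2, t)
    for w1, w2, t in itertools.product(grid, repeat=3)
    if all(unit(a, b, w1, w2, t) == out for (a, b), out in xor_table.items())
]
print("single threshold units that compute XOR:", len(solutions))  # prints 0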

According to Tom Schwartz, a neural-network consultant in Mountain View, Calif., technology constraints limited the success of perceptrons. "The idea of a multilayer perceptron was proposed by Rosenblatt, but without a good multilayer learning law you were limited in what you could do with neural nets," he says. Minsky's book, combined with the perceptron's failure to achieve developers' expectations, squelched the neural-network boom. Computer scientists charged ahead with traditional artificial intelligence, such as expert systems.

During the "dark ages," as some call the 15 years between the publication of Minsky's Perceptrons and the recent revival of neural networks, some die-hard "connectionists" (neural-network adherents) prevailed. One of them was physicist John J. Hopfield, who splits his time between the California Institute of Technology and AT&T Bell Laboratories. A paper he wrote in 1982 described mathematically how neurons could act collectively to process and store information, comparing a problem's solution in a neural network with achieving the lowest energy state in physics. As an example, Hopfield demonstrated how a network could solve the traveling salesman problem (finding the shortest route through a group of cities), a problem that had long eluded conventional computers. This paper is credited with reinvigorating the neural-network field. "It took a lot of guts to publish that paper in 1982," says Schwartz. "Hopfield should be known as the fellow who brought neural nets back from the dead."

The resurgence of more-sophisticated neural networks was largely due to the availability of low-cost memory, greater computer power, and more-sophisticated learning laws. The most important of these learning laws is something called back-propagation, illustrated dramatically by Sejnowski's NetTalk, which I heard at Columbia.

With NetTalk and subsequent neural networks, a third layer, called the hidden layer, is added to the two-layer network. This hidden layer is analogous to the brain's interneurons, which map out pathways between the sensory and motor neurons. NetTalk is a neural-network simulation with 300 processing units (representing neurons) and over 10,000 connections arranged in three layers. For the demonstration I heard, the initial training input was a 500-word text of a first-grader's conversation. The output layer consisted of units that encoded the 55 possible phonemes, or discrete speech sounds, in the English language. The output units can drive a digital speech synthesizer that produces sounds from a string of phonemes. When NetTalk saw the letter N (in the word "can," for example) it randomly (and erroneously) activated a set of hidden-layer units that signaled the output "ah." This output was then compared with a model, a correct letter-to-phoneme translation, to calculate the error mathematically. The learning rule, which is actually a mathematical formula, corrects this error by apportioning the blame: reducing the strengths of the connections between the hidden-layer units that correspond to N and the output that corresponds to "ah." "At the beginning of NetTalk all the connection strengths are random, so the output that the network produces is random," says Sejnowski. "Very quickly, as we change the weights to minimize error, the network starts picking up on the regular pattern. It distinguishes consonants and vowels, and can make finer distinctions according to particular ways of pronouncing individual letters."
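The following toy Python/NumPy sketch illustrates the back-propagation idea in miniature. It is emphatically not NetTalk: the task (the exclusive-or problem from earlier in the article), the small 2-4-1 network, the learning rate, and the bias trick are all stand-in choices made only to keep the example short. The point is the blame-apportioning loop Sejnowski describes, in which errors at the output are pushed back to adjust the hidden-layer connections.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # correct "model" outputs

def add_bias(a):
    return np.hstack([a, np.ones((a.shape[0], 1))])  # constant unit acting as a bias

W1 = rng.normal(size=(3, 4))   # input -> hidden connection strengths, random at the start
W2 = rng.normal(size=(5, 1))   # hidden -> output connection strengths
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    hidden = sigmoid(add_bias(X) @ W1)                       # forward pass
    output = sigmoid(add_bias(hidden) @ W2)
    err_out = (output - y) * output * (1 - output)           # compare with the model answer
    err_hid = (err_out @ W2[:-1].T) * hidden * (1 - hidden)  # apportion the blame backward
    W2 -= 0.5 * add_bias(hidden).T @ err_out                 # adjust connection strengths
    W1 -= 0.5 * add_bias(X).T @ err_hid

print(np.round(output, 2))  # with enough iterations this approaches [0, 1, 1, 0]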

Trained on 1,000 words, within a week NetTalk developed a 20,000-word dictionary. "The important point is that the network was not only able to memorize the training words, but it generalized. It was able to predict new words it had never seen before," says Sejnowski. "It's similar to how humans would generalize while reading 'Jabberwocky.'"

Generalizing is an important goal for neural networks. To illustrate this, Hopfield described a munition-identification problem he worked on two summers ago in Fort Monmouth, N.J. "Let's say a battalion needs to identify an unexploded munition before it can be disarmed," he says. "Unfortunately there are 50,000 different kinds of hardware it might be." A traditional computer would make the identification using a treelike decision process, says Hopfield. The first decision could be based on the length of the munition. But there's one problem: It turns out the munition's nose is buried in the sand, and obviously a soldier can't go out and measure how long it is. "Although you've got lots of information, there are always going to be pieces that you are not allowed to get. As a result you can't go through a treelike structure and make an identification."

Hopfield sees this kind of problem as approachable from a neural-network point of view. "With a neural net you could know ten out of thirty pieces of information about the munition and get an answer."

Besides generalizing, another important feature of neural networks is that they degrade gracefully. The human brain is in a constant state of degradation; one night spent drinking alcohol consumes thousands of brain cells. But because whole fields of neurons contribute to every task, the loss of a few is not noticeable. The same is true with neural networks. David Rumelhart, a psychologist and neural-network researcher at Stanford University, explains: "The behavior of the network is not determined by one little localized part, but in fact by the interactions of all the units in the network. If you delete one of the units, it's not terribly important." Deleting one of the components in a conventional computer will typically bring computation to a halt.

Although neural networks can be built from wires and transistors, according to Schwartz, "Ninety-nine percent of what people talk about in neural nets are really software simulations of neural nets run on conventional processors." Simulating a neural network means mathematically defining the nodes (processors) and weights (adaptive coefficients) assigned to it. "The processing that each element does is determined by a mathematical formula that defines the element's output signal as a function of whatever input signals have just arrived and the adaptive coefficients present in the local memory," explains Robert Hecht-Nielsen, president of Hecht-Nielsen Neurocomputer Corp.
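Hecht-Nielsen's description of a simulated processing element boils down to a one-line formula. The minimal Python sketch below is an illustration of that idea, not his company's actual software; the logistic squashing function and the sample numbers are arbitrary choices.

import math

def processing_element(input_signals, weights):
    # Output = a fixed formula applied to the arriving signals and the locally
    # stored adaptive coefficients (the weights).
    net = sum(x * w for x, w in zip(input_signals, weights))
    return 1.0 / (1.0 + math.exp(-net))

print(processing_element([0.2, 0.9, 0.4], [1.5, -0.7, 0.3]))  # one node's output signal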

Some companies, such as Hecht-Nielsen Neurocomputer in San Diego, Synaptics Inc. in San Jose, Calif., and most recently Nippon Electric Co., are selling specially wired boards that link to conventional computers. The neural network is simulated on the board and then integrated via software with an IBM PC-type machine.

Other companies are providing commercial software simulations of neural networks. One of the most successful is Nestor, Inc., a Providence, R.I.-based company that developed a software package that allows users to simulate circuits on desktop computers. So far several job-specific neural networks have been developed. They include: a signature-verification system; a network that reads handwritten numbers on checks; one that helps screen mortgage loans; a network that identifies abnormal heart rates; and another that can recognize 11 different aircraft, regardless of the observation angle.

Several military contractors, including Bendix Aerospace, TRW, and the University of Pennsylvania, are also going ahead with neural networks for signal processing: training networks to identify enemy vehicles by their radar or sonar patterns, for example.

Still, there are some groups concentrating on neural network chips. At Bell Laboratories a group headed by solid-state physicist Larry Jackel constructed an experimental neural-net chip that has 75,000 transistors and an array of 54 simple processors connected by a network of resistors. The chip is about the size of a dime. Also developed at Bell Labs is a chip containing 14,400 artificial neurons made of light-sensitive amorphous silicon and deposited as a thin film on glass. When a slide is projected on the film several times, the image gets stored in the network. If the network is then shown just a small part of the image, it will reconstruct the original picture.

Finally, at Synaptics, Caltech's Carver Mead is designing analog chips modeled after the human retina and cochlea.

According to Scott E. Fahlman, a senior research scientist at Carnegie Mellon University in Pittsburgh, Pa., building a chip for just one network can take two or three years. The problem is that the process of laying out all the interconnected wires requires advanced techniques. Simulating networks on digital machines allows researchers to search for the best architecture before committing to hardware.

"There are at least fifty different types of networks being explored in research or being developed for applications," says Hecht-Nielsen. "The differences are mainly in the learning laws implemented and the topology [detailed mapping] of the connections." Most of these networks are called feed-forward networks: Information is passed forward in the layered network from inputs to hidden units and finally to outputs.

John Hopfield is not sure this is the best architecture for neural nets. "In neurobiology there is an immense amount of feedback. You have connections coming back through the layers or interconnections within the layers. That makes the system much more powerful from a computational point of view."

That kind of criticism brings up the question of how closely neural networks need to model the brain. Fahlman says that neural-network researchers and neurobiologists are loosely coupled. "Neurobiologists can tell me that the right number of elements to think about is tens of billions. They can tell me that the right kind of interconnection is one thousand or ten thousand to each neuron. And they can tell me that there doesn't seem to be a lot of flow backward through a neuron," he says. But unfortunately, he adds, they can't provide information about exactly what's going on in the synapse of the neuron.

Neural networks, according to the DARPA study, are a long way off from achieving the connectivity of the human brain; at this point a cockroach looks like a genius. DARPA projects that in five years the electronic neurons of a neural network could approach the complexity of a bee's nervous system. That kind of complexity would allow applications like stealth-aircraft detection, battlefield surveillance, and target recognition using several sensor types. "Bees are pretty smart compared with smart weapons," commented Craig I. Fields, deputy director of research for the agency. "Bees can evade. Bees can choose routes and choose targets."

Some text has been edited to match contemporary standards and style.


Leveraging Artificial Intelligence in the Financial Service Industry – HPCwire

In financial services, any competitive advantage matters. Your competition has access to most of the same data you do, since historical data is available to everyone in your industry. Your advantage comes from the ability to exploit that data better, faster, and more accurately than your competitors. With a rapidly fluctuating market, the ability to process data faster gives you the opportunity to respond more quickly than ever before. This is where AI-first intelligence can give you a leg up.

When implementing AI infrastructure, there are some key considerations for maximizing your return on investment (ROI).

When designing for highly utilized workloads like AI for financial analytics, it is best practice to keep systems on premises. On-premises computing is more cost-effective than cloud-based computing when highly utilized: cloud service costs can add up quickly, and any cloud outage inevitably leads to downtime.

You can leverage a range of networking options, but we typically recommend high-speed fabrics like 100Gb Ethernet or 200Gb HDR InfiniBand.

You should also consider that the size of your data set is just as important as the quality of your model, so you will want a modern, AI-focused storage design that lets you scale as needed to maximize your ROI.

It is also important to keep primary storage close to on-premises computing resources to maximize network bandwidth while limiting latency. Keeping storage on premises also keeps your sensitive data safe. Let us look at how storage should be set up to maximize efficiency.

Traditional storage, like NAS (network-attached storage), cannot keep up: bandwidth is limited to around 10 gigabits per second, and it is not scalable enough for AI workloads. Fast local storage does not work for modern parallel problems either, because it requires constantly copying data in and out of nodes, which clogs the network.

AI-optimized storage should be parallel and support a single-namespace data lake. This enables the storage to deliver large data sets to compute nodes for model training.

Your AI-optimized storage must also support high-bandwidth fabrics. A good storage solution should enable object-storage tiering to remain cost-effective and to serve as an affordable, long-term, scalable storage option for regulatory retention requirements.

With AI and machine learning, you can significantly reduce the number of false positives, leading to higher customer satisfaction. Minor insurance claims can now often be automated with AI, allowing employees to focus on larger and more complex issues.

AI can also be used to review claims or flag cases for more thorough, in-depth analysis by detecting potential fraud or human error. Regular tasks prone to human error can either be reviewed, or in many cases performed entirely by applications with AI, often increasing both efficiency and accuracy.

Chatbots today are different from years past. They are more advanced and can now often handle menial tasks or requests and assist customers looking for self-service, thereby reducing both call volume and call length.

AI offers a new future for financial analytics, increasing your ROI and allowing your employees to use their time more efficiently.

Learn more in this webinar.


Keeping Up with the EEOC: Artificial Intelligence Guidance and Enforcement Action – Gibson Dunn

May 23, 2022


On May 12, 2022, more than six months after the Equal Employment Opportunity Commission (EEOC) announced its Initiative on Artificial Intelligence and Algorithmic Fairness,[1] the agency issued its first guidance regarding employers' use of Artificial Intelligence (AI).[2]

The EEOC's guidance outlines best practices and key considerations that, in the EEOC's view, help ensure that employment tools do not disadvantage applicants or employees with disabilities in violation of the Americans with Disabilities Act (ADA). Notably, the guidance came just one week after the EEOC filed a complaint against a software company alleging intentional discrimination through applicant software under the Age Discrimination in Employment Act (ADEA), potentially signaling more AI and algorithmic-based enforcement actions to come.

The EEOC's AI Guidance

The EEOC's non-binding, technical guidance provides suggested guardrails for employers on the use of AI technologies in their hiring and workforce management systems.

Broad Scope. The EEOC's guidance encompasses a broad range of technology that incorporates algorithmic decision-making, including automatic resume-screening software, hiring software, chatbot software for hiring and workflow, video interviewing software, analytics software, employee monitoring software, and worker management software.[3] As an example of such software that has been frequently used by employers, the EEOC identifies testing software that provides algorithmically generated personality-based "job fit" or "cultural fit" scores for applicants or employees.

Responsibility for Vendor Technology. Even if an outside vendor designs or administers the AI technology, the EEOC's guidance suggests that employers will be held responsible under the ADA if the use of the tool results in discrimination against individuals with disabilities. Specifically, the guidance states that employers may be held responsible for the actions of their agents, which may include entities such as software vendors, if the employer has given them authority to act on the employer's behalf.[4] The guidance further states that an employer may also be liable if a vendor administering the tool on the employer's behalf fails to provide a required accommodation.

Common Ways AI Might Violate the ADA. The EEOC's guidance outlines the following three ways in which an employer's tools may, in the EEOC's view, be found to violate the ADA, although the list is non-exhaustive and intended to be illustrative:

Tips for Avoiding Pitfalls. In addition to illustrating the agency's view of how employers may run afoul of the ADA through their use of AI and algorithmic decision-making technology, the EEOC's guidance provides several practical tips for how employers may reduce the risk of liability. For example:

Enforcement Action

As previewed above, on May 5, 2022, just one week before releasing its guidance, the EEOC filed a complaint in the Eastern District of New York alleging that iTutorGroup, Inc., a software company providing online English-language tutoring to adults and children in China, violated the ADEA.[11]

The complaint alleges that a class of plaintiffs were denied employment as tutors because of their age. Specifically, the EEOC asserts that the company's application software automatically denied hundreds of older, qualified applicants by soliciting applicant birthdates and automatically rejecting female applicants age 55 or older and male applicants age 60 or older. The complaint alleges that the charging party was rejected when she used her real birthdate because she was over the age of 55, but was offered an interview when she used a more recent date of birth with an otherwise identical application. The EEOC seeks a range of damages including back wages, liquidated damages, a permanent injunction enjoining the challenged hiring practice, and the implementation of policies, practices, and programs providing equal employment opportunities for individuals 40 years of age and older. iTutorGroup has not yet filed a response to the complaint.

Takeaways

Given the EEOC's enforcement action and recent guidance, employers should evaluate their current and contemplated AI tools for potential risk. In addition to consulting with vendors who design or administer these tools to understand the traits being measured and types of information gathered, employers might also consider reviewing their accommodations processes for both applicants and employees.

___________________________

[1] EEOC, EEOC Launches Initiative on Artificial Intelligence and Algorithmic Fairness (Oct. 28, 2021), available at https://www.eeoc.gov/newsroom/eeoc-launches-initiative-artificial-intelligence-and-algorithmic-fairness.

[2] EEOC, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (May 12, 2022), available at https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence?utm_content=&utm_medium=email&utm_name=&utm_source=govdelivery&utm_term [hereinafter EEOC AI Guidance].

[3] Id.

[4] Id. at 3, 7.

[5] Id. at 11.

[6] Id. at 13.

[7] Id. at 14.

[8] For more information, please see Gibson Dunn's Client Alert, New York City Enacts Law Restricting Use of Artificial Intelligence in Employment Decisions.

[9] EEOC AI Guidance at 14.

[10] Id.

[11] EEOC v. iTutorGroup, Inc., No. 1:22-cv-02565 (E.D.N.Y. May 5, 2022).

The following Gibson Dunn attorneys assisted in preparing this client update: Harris Mufson, Danielle Moss, Megan Cooney, and Emily Maxim Lamm.

Gibson Dunn's lawyers are available to assist in addressing any questions you may have regarding these developments. To learn more about these issues, please contact the Gibson Dunn lawyer with whom you usually work, any member of the firm's Labor and Employment practice group, or the following:

Harris M. Mufson New York (+1 212-351-3805, hmufson@gibsondunn.com)

Danielle J. Moss New York (+1 212-351-6338, dmoss@gibsondunn.com)

Megan Cooney Orange County (+1 949-451-4087, mcooney@gibsondunn.com)

Jason C. Schwartz Co-Chair, Labor & Employment Group, Washington, D.C. (+1 202-955-8242, jschwartz@gibsondunn.com)

Katherine V.A. Smith Co-Chair, Labor & Employment Group, Los Angeles (+1 213-229-7107, ksmith@gibsondunn.com)

© 2022 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice.


Artificial Intelligence in Supply Chain Market Research With Amazon Web Services, Inc., project44.| Discusse Reach Good Valuation The Daily Vale -…

With its unique ability to process millions of data points per second, AI can help supply chain managers solve tactical and strategic decision-making problems. This is particularly useful when dealing with large amounts of unstructured data. The ability to automate day-to-day tasks can help companies react more quickly to changes or problems in the supply chain. It also ensures that inventory levels are optimized for availability at the lowest possible cost.

The Artificial Intelligence in Supply Chain Market research report offers information that helps market players prepare for coming changes and secure a strong position in this competitive market over an extended period, 2022-2027. The report is written in easy-to-understand language and includes useful statistics pointing out bottom-line-oriented insights for the competitive field in this market. Additionally, it highlights key opportunities, market trends, and market dynamics, including driving forces and challenges. With the help of this research guide, interested players in the Artificial Intelligence in Supply Chain market can compete with their most challenging competitors on the basis of development, deals, and other essential elements.

Get Sample of Market Report with Global Industry Analysis: http://www.researchinformatic.com/sample-request-324

The research defines and explains the market by gathering relevant and unbiased data. The market is growing at a CAGR of 42.3% during the forecast period.

Research analysts and market experts have utilized innovative and sophisticated market investigation tools and methodologies, including primary and secondary research. To gather data, they conducted telephone interviews related to the overall IT and telecommunications industry. They also referred to company websites, government records, press releases, annual and financial reports, and association databases, cross-checked against dependable sources.

The report offers market segmentation analysis for the growing Artificial Intelligence in Supply Chain Market so that market players can identify the segments they genuinely need, which can eventually improve their performance in this competitive market.

Amazon Web Services, Inc., project44, Deutsche Post AG, FedEx, GENERAL ELECTRIC, Google LLC, IBM, Intel Corporation, Coupa Software Inc., Micron Technology, Inc.

Get Enquired For Customized Report: http://www.researchinformatic.com/inquiry-324

Segmentation:

The segmentation study conducted in the Artificial Intelligence in Supply Chain report aids market players in boosting productivity by focusing their organizations' goals and assets on the market segments that are most favorable to their objectives. The segments are done based on:

Artificial Intelligence in Supply Chain By type

Machine Learning, Supervised Learning, Unsupervised Learning, and others

Artificial Intelligence in Supply Chain By applications

Fleet Management, Supply Chain Planning, Warehouse Management, Others

The Artificial Intelligence in Supply Chain market report uses quantitative and qualitative analysis that will likely help different market players (new and established) recognize critical growth pockets in the market. The report also offers Porter's Five Forces analysis, SWOT analysis, and PESTLE analysis for a more detailed comparison of these and other significant factors. It likewise uses top-down and bottom-up research approaches to analyze development, marketing channels, and patterns. Finally, the feasibility of new investment projects is also evaluated.

Synopsis of the Artificial Intelligence in Supply Chain research report

Buy Exclusive Report With Good Discount: http://www.researchinformatic.com/discount-324

Contact Us:

George Miller

1887 Whitney Mesa Dr.

Henderson, NV 89014

Research Informatic

+1 775 237 4147

https://researchinformatic.com

Related Report

Cybersecurity Mesh Market 2022

Fumigation Services Market 2022

Tote Bags Market Size 2022 Growth, Opportunities and Worldwide Forecast to 2026

Dark Analytics Market Growth Analysis Report 2022-2028


Artificial Intelligence in Workspace Market Research With Intel, Nvidia, IBM Growth, Opportunities, Worldwide Forecast and Size 2022 to 2026 The…

Artificial intelligence (AI) in the workplace increases the productivity of your workforce by delivering personalized experiences based on data and purpose-built tools. It will also help improve employee loyalty and satisfaction and turn employees into loyal brand ambassadors. In addition, a significant benefit of AI in the workplace is the introduction of intelligent automation and the reduction of human error.

The Artificial Intelligence in Workspace Market research report offers information that helps market players prepare for coming changes and secure a strong position in this competitive market over an extended period, 2022-2027. The report is written in easy-to-understand language and includes useful statistics pointing out bottom-line-oriented insights for the competitive field in this market. Additionally, it highlights key opportunities, market trends, and market dynamics, including driving forces and challenges. With the help of this research guide, interested players in the Artificial Intelligence in Workspace market can compete with their most challenging competitors on the basis of development, deals, and other essential elements.

Get Sample of Market Report with Global Industry Analysis: http://www.researchinformatic.com/sample-request-325

The research defines and explains the market by gathering relevant and unbiased data. The market is growing at a CAGR of 39.3% during the forecast period.

Research analysts and market experts have utilized innovative and sophisticated market investigation tools and methodologies, including primary and secondary research. To gather data, they conducted telephone interviews related to the overall IT and telecommunications industry. They also referred to company websites, government records, press releases, annual and financial reports, and association databases, cross-checked against dependable sources.

The report offers market segmentation analysis for the growing Artificial Intelligence in Workspace Market so that market players can identify the segments they genuinely need, which can eventually improve their performance in this competitive market.

Intel, Nvidia, IBM, Samsung Electronics, Siemens AG, Cisco, General Electric, Google, Oracle.

Get Enquired For Customized Report: http://www.researchinformatic.com/inquiry-325

Segmentation:

The segmentation study conducted in the Artificial Intelligence in Workspace report aids market players in boosting productivity by focusing their organizations' goals and assets on the market segments that are most favorable to their objectives. The segments are done based on:

Artificial Intelligence in Workspace By type

Hardware, Software, AI Platforms, AI Solutions, On-Premises, Cloud, Services

Artificial Intelligence in Workspace By applications

Automotive and Transportation, Manufacturing, Healthcare and Pharmaceutical, IT & Telecommunication, Others

The Artificial Intelligence in Workspace market report uses quantitative and qualitative analysis that will likely help different market players (new and established) recognize critical growth pockets in the market. The report also offers Porter's Five Forces analysis, SWOT analysis, and PESTLE analysis for a more detailed comparison of these and other significant factors. It likewise uses top-down and bottom-up research approaches to analyze development, marketing channels, and patterns. Finally, the feasibility of new investment projects is also evaluated.

Synopsis of the Artificial Intelligence in Workspace research report

Buy Exclusive Report With Good Discount: http://www.researchinformatic.com/discount-325

Contact Us:

George Miller

1887 Whitney Mesa Dr.

Henderson, NV 89014

Research Informatic

+1 775 237 4147

https://researchinformatic.com

Related Report

Electric Vehicle Ecosystem Market By Type, By Application, By End User, By Regional Outlook, Industry 2022 2026

Craft Beer Market By Type, By Application, By End User, By Regional Outlook, Industry 2022 2026

Chemometric Software Market 2022: Business Development, Size, Share and Opportunities 2026

Big Data Software Market Future Growth Opportunities 2022-2028


Artificial Intelligence (AI) Patent Filings Continue Explosive Growth Trend At The USPTO – Patent – United States – Mondaq

PatentNext Summary: Artificial Intelligence (AI) patent application filings continue their explosive growth trend at the U.S. Patent Office (USPTO). At the end of 2020, the USPTO published a report finding an exponential increase in the number of patent application filings from 2002 to 2018. This trend has continued. In addition, current data shows that AI-related application filings pertaining to graphics and imaging are taking the lead over AI modeling and simulation applications.

In the last quarter of 2020, the United States Patent and Trademark Office (USPTO) reported that patent filings for Artificial Intelligence (AI)-related inventions more than doubled from 2002 to 2018. See Office of the Chief Economist, Inventing AI: Tracking the Diffusion of Artificial Intelligence with Patents, IP DATA HIGHLIGHTS No. 5 (Oct. 2020).

Since the publication of the USPTO's report almost two years ago, AI patent application filings have continued their explosive growth trend.

The below chart shows filings by Technology ("Tech") Center over time from 2000 to 2022.

Note that the right-most side of the graph slopes down because of the 18-month "Publication Delay," during which information for newer patent application filings is not yet publicly available. See 37 CFR 1.211.

The above chart organizes patent application filings by Tech Center. As shown, most AI-related patent applications fall into one of two Tech Centers. First, Tech Center 2100 (purple color in the above graph) includes examiners that handle "Computer Architecture and Software" inventions. It is not surprising that many AI-related patent applications end up here, because Tech Center 2100 includes the specific AI-related Art Unit 2120, which handles technology involving "AI & Simulation/Modeling."

Second, Tech Center 2600 (red color in the above graph) handles "Communications" technology. Tech Center 2600 includes several art units devoted to graphic and visual processing, such as Art Unit 2615 ("Computer Graphic Processing") and Art Unit 2660 ("Digital Cameras; Image Analysis; Applications; Pattern Recognition; Color and Compression; Enhancement and Transformation"). Such Art Units handle AI-related patent applications that involve image processing. This can include the use of a Convolutional Neural Network ("CNN") to detect, classify, and/or predict objects in two-dimensional (2D) and three-dimensional (3D) space.

Together these two Tech Centers receive a majority of the AI-related patent application filings.

For example, during the year 2018, Tech Center 2100 saw 1,733 filings, and Tech Center 2600 saw 1,416 filings.

Later, in the year 2020, these two centers still saw the most filings but were reversed in respective rankings as to the number of filings: Tech Center 2100 saw 2,152 filings, and Tech Center 2600 saw 2,542 filings.

This suggests that graphical or image-related AI patent application filings had overtaken non-graphical or non-image AI-related filings by the year 2020.

As good news for AI inventors, these two Tech Centers experience high percentages of allowance. The below chart shows the patent application allowance rate by Tech Center.

As shown above, AI-related patent applications handled by Tech Center 2100 (purple bar in the above graph) have a relatively high allowance rate (84%). AI-related patent applications handled by Tech Center 2600 (red bar in the above graph) have an even higher allowance rate (91%).

This finding can be contrasted to Tech Center 3600 (yellow bar in the above graph), which handles patent applications for a mix of business-related technologies, i.e.: "Transportation, Construction, Electronic Commerce, Agriculture, National Security and License and Review."

Certain art units of Tech Center 3600, such as Art Unit 3620 ("Business Methods"), are infamous for issuing patent-eligibility rejections pursuant to 35 USC 101. Such rejections can be difficult to overcome and explain the much lower allowance rate of 67% for Tech Center 3600.

Accordingly, staying out of Tech Center 3600 remains a viable strategy for patentees.

To the extent the reader is interested in accomplishing this, please see PatentNext's articles on best practices for patenting AI inventions. See How to Patent an Artificial Intelligence (AI) Invention: Guidance from the U.S. Patent Office (USPTO) and How to Patent Software Inventions: Show an "Improvement."

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.


Artificial Intelligence (AI) in Contact Center Market Analysis by Emerging Growth Factors and Revenue Forecast to 2028 IBM, Google, AWS, Microsoft,…

Adroit Market Research published a new research study, Global Artificial Intelligence (AI) in Contact Center Market 2022 by Manufacturers, Regions, Type and Application, Forecast to 2028, that promises a complete review of the marketplace, clarifying past experience and trends. On the basis of that experience, it offers a forecast that considers other factors influencing the growth rate. The report covers the crucial elements of the global Artificial Intelligence (AI) in Contact Center market, such as drivers, current and past trends, the regulatory scenario, and technological growth. The research document presents an in-depth evaluation of the market, with detailed observation of several aspects, including the rate of growth, technological advances, and the various strategies implemented by the main current market players.

Free Sample Report + All Related Graphs & Charts @ https://www.adroitmarketresearch.com/contacts/request-sample/1650?utm_source=Sujata25

Leading players of the Artificial Intelligence (AI) in Contact Center Market include:

IBM, Google, AWS, Microsoft, SAP, Oracle, Artificial Solutions, Nuance, Avaya, Haptik, NICE inContact, EdgeVerve, Avaamo, Inbenta, Rulai, Kore.ai, Creative Virtual

The report is an amalgamation of a detailed market overview based on segmentations, applications, trends and opportunities, mergers and acquisitions, drivers, and restraints. It showcases the current and forthcoming technical and financial details of the Artificial Intelligence (AI) in Contact Center market. The research study draws attention to a detailed synopsis of the market valuation, revenue estimation, and market statistics, and examines emerging trends in the global and regional spaces across all significant components, such as market capacity, cost, price, demand and supply, production, profit, and the competitive landscape. The report also explores all the key factors affecting the growth of the global market, including the demand-supply scenario, pricing structure, profit margins, production, and value chain analysis.

Global retail sales, macroeconomic indicators, parent industry patterns, governing factors, and the business's market segment attractiveness are all examined in the market research review. The Artificial Intelligence (AI) in Contact Center analysis looks at a wide range of industries, as well as trends and factors that have a big effect on the industry. The global Artificial Intelligence (AI) in Contact Center market analysis provides a quantitative analysis of demand over the forecast time frame. Key drivers, constraints, rewards, and risks, as well as the market effect of these factors, are among the industry's core dynamics. The research study also contains a comprehensive supply chain diagram and an examination of industry dealers, and it looks at a number of important factors that influence the global Artificial Intelligence (AI) in Contact Center industry's growth.

This study provides a comprehensive overview of the major factors affecting the global market, in addition to prospects, development patterns, industry-specific developments, risks, and other topics. The Artificial Intelligence (AI) in Contact Center study also discusses the profitability index, the major market share breakdown, the SWOT survey, and the regional distribution of the global Artificial Intelligence (AI) in Contact Center market. Similarly, the review shows the major players' current roles in the competitive market. The research provides a thorough examination and comprehensive overview of the various aspects of business growth that affect the local and global markets.

Artificial Intelligence (AI) in Contact Center market Segmentation by Type:

By Component (Computer Platforms, Solutions, Services)

Artificial Intelligence (AI) in Contact Center market Segmentation by Application:

By Application (BFSI, Telecom, Retail & E-Commerce, Media & Entertainment, Healthcare,Travel & Hospitality, Others)

Highlights of the global Artificial Intelligence (AI) in Contact Center market report:

1. The Artificial Intelligence (AI) in Contact Center market research report provides statistical analysis via graphs, figures, and pie charts indicating the market dynamics and growth trends, past and future.
2. The report also shares current market status, drivers and restraints, and granular assessment of industry segments such as sales, marketing, and production, along with data provided by producers, retailers, and vendors.
3. The Artificial Intelligence (AI) in Contact Center report also includes analysis of the top players in the market and their market status, revenues, and changing strategies.
4. Leading players turning toward trending products for new product development and changing sales and marketing strategies due to the impact of COVID-19 are covered in the global Artificial Intelligence (AI) in Contact Center market report.
5. The Artificial Intelligence (AI) in Contact Center market report offers product segmentation and applications, including the wide range of product services and the major influential factors for expansion of the industry.
6. Along with this, regional segmentation is also provided in the Artificial Intelligence (AI) in Contact Center market report, identifying the dominating regions.

Reasons for buying this report:

* Analysing the outlook of the Artificial Intelligence (AI) in Contact Center market with recent trends and Porter's five forces analysis
* To study current and future market outlook in the developed and emerging markets
* Market dynamics scenario, along with growth opportunities of the market in the years to come
* Market segmentation analysis including qualitative and quantitative research incorporating the impact of economic and non-economic aspects
* Regional and country-level analysis integrating the demand and supply forces that are influencing the growth of the market
* Market value (USD Million) and volume (Units Million) data for each segment and sub-segment
* Distribution channel sales analysis by value
* Competitive landscape involving the market share of major players, along with new product launches and strategies adopted by players in the past five years
* Comprehensive company profiles covering the product offerings, key financial information, recent developments, SWOT analysis, and strategy employed by the major market players

Table of Contents:

1 Scope of the Report
1.1 Market Introduction
1.2 Research Objectives
1.3 Years Considered
1.4 Market Research Methodology
1.5 Economic Indicators
1.6 Currency Considered
2 Executive Summary
3 Global Artificial Intelligence (AI) in Contact Center by Players
4 Artificial Intelligence (AI) in Contact Center by Regions
4.1 Artificial Intelligence (AI) in Contact Center Market Size by Regions
4.2 Americas Artificial Intelligence (AI) in Contact Center Market Size Growth
4.3 APAC Artificial Intelligence (AI) in Contact Center Market Size Growth
4.4 Europe Artificial Intelligence (AI) in Contact Center Market Size Growth
4.5 Middle East & Africa Artificial Intelligence (AI) in Contact Center Market Size Growth
5 Americas
6 APAC
7 Europe
8 Middle East & Africa
9 Market Drivers, Challenges and Trends
9.1 Market Drivers and Impact
9.1.1 Growing Demand from Key Regions
9.1.2 Growing Demand from Key Applications and Potential Industries
9.2 Market Challenges and Impact
9.3 Market Trends
10 Global Artificial Intelligence (AI) in Contact Center Market Forecast
11 Key Players Analysis
12 Research Findings and Conclusion

Do You Have Any Query Or Specific Requirement? Ask to Our Industry Expert @ https://www.adroitmarketresearch.com/contacts/enquiry-before-buying/1650?utm_source=Sujata25

ABOUT US:

Adroit Market Research is an India-based business analytics and consulting company. Our target audience is a wide range of corporations, manufacturing companies, product/technology development institutions, and industry associations that require understanding of a market's size, key trends, participants, and the future outlook of an industry. We intend to become our clients' knowledge partner and provide them with valuable market insights to help create opportunities that increase their revenues. We follow a code: Explore, Learn and Transform. At our core, we are curious people who love to identify and understand industry patterns, create an insightful study around our findings, and churn out money-making roadmaps.

CONTACT US:

Ryan Johnson
Account Manager, Global
3131 McKinney Ave Ste 600, Dallas, TX 75204, U.S.A.
Phone No.: USA: +1.210.667.2421 / +91 9665341414


Healthcare Artificial Intelligence Market Is Expected to Boom: Butterfly Network, Inc., Lifegraph – The Daily Vale

New Jersey, United States - The Healthcare Artificial Intelligence Market Research Report is a professional asset that provides dynamic and statistical insights into regional and global markets. It includes a comprehensive study of the current scenario to capture the trends and prospects of the market. Healthcare Artificial Intelligence research reports also track future technologies and developments. Thorough information on new products and regional and market investments is provided in the report. This research report also scrutinizes all the elements businesses need, providing unbiased data to help them understand the threats and challenges ahead of their business. The report further includes market shortcomings, stability, growth drivers, restraining factors, and opportunities over the forecast period.

Get Sample PDF Report with Table and Graphs:

https://www.a2zmarketresearch.com/sample-request/350253

The Major Manufacturers Covered in this Report @:

Butterfly Network, Inc., Lifegraph, Enlitic, Inc., IBM (Watson Health), Sophia Genetics, Zebra Medical Vision Ltd., Pathway Genomics Corporation, AiCure, Sense.ly, Welltok, Insilico Medicine, Inc., Atomwise, Inc., APIXIO, Inc., Modernizing Medicine, Cyrcadia Health Inc., iCarbonX.

Healthcare Artificial Intelligence Market Overview:

This systematic research study provides an inside-out assessment of the Healthcare Artificial Intelligence market, offering significant insights, historical context, and industry-approved, statistically supported market forecasts. Furthermore, a controlled and formal collection of assumptions and strategies was used to construct this in-depth examination.

During the development of this Healthcare Artificial Intelligence research report, the driving factors of the market are investigated. It also provides information on market constraints to help clients build successful businesses. The report also addresses key opportunities.

The report delivers the financial details for overall and individual Healthcare Artificial Intelligence market segments for the years 2022-2029, with projections and expected growth rates in percent. The report examines the value chain activities across different segments of the Healthcare Artificial Intelligence industry. It analyzes the current performance of the Healthcare Artificial Intelligence industry and what the global industry is expected to achieve by 2029. The report analyzes how the COVID-19 pandemic is further impeding the progress of the global Healthcare Artificial Intelligence industry and highlights some short-term and long-term responses by the global market players that are helping the market regain momentum. The Healthcare Artificial Intelligence report presents new growth rate estimates and growth forecasts for the period.

Key Questions Answered in Global Healthcare Artificial Intelligence Market Report:

Get Special Discount:

https://www.a2zmarketresearch.com/discount/350253

This report provides an in-depth and broad understanding of Healthcare Artificial Intelligence. With accurate data covering all the key features of the current market, the report offers extensive data from key players. An audit of the state of the market is included, as accurate historical data for each segment is available over the forecast period. Driving forces, restraints, and opportunities are provided to help give an improved picture of this market investment during the forecast period 2022-2029.

Some essential purposes of the Healthcare Artificial Intelligence market research report:

Vital Developments: The report covers critical developments in the Healthcare Artificial Intelligence market, including R&D, new product launches, collaborations, growth rates, partnerships, joint ventures, and the regional and global expansion of competitors operating in the market.

Market Characteristics: The report covers Healthcare Artificial Intelligence market highlights, revenue, capacity, capacity utilization rate, price, production, consumption, imports, exports, supply, demand, cost, overall market share, CAGR, and gross margin. It also offers a thorough analysis of market dynamics and their most recent trends, along with market segments and sub-segments.

Investigative Tools: The report incorporates carefully researched and evaluated information on the major established players and their expansion into the Healthcare Artificial Intelligence market. Analytical tools and methodologies such as Porter's Five Forces analysis, feasibility studies, and other statistical methods have been used to analyze the growth of the key players operating in the market.

In short, the Healthcare Artificial Intelligence report gives a clear perspective on the market without the need to consult any other research report or data source, covering the past, present, and future of the market.

Buy Exclusive Report: https://www.a2zmarketresearch.com/checkout

Contact Us:

Roger Smith

1887 WHITNEY MESA DR HENDERSON, NV 89014


+1 775 237 4147

The rest is here:
Healthcare Artificial Intelligence Market Is Expected to Boom: Butterfly Network, Inc., Lifegraph - The Daily Vale

Artificial intelligence innovation among automotive industry companies has dropped off in the last year – just-auto.com

Research and innovation in artificial intelligence in the automotive manufacturing and supply sector has declined in the last year.

The most recent figures show that the number of AI-related patent applications in the industry stood at 253 in the three months ending March 2022, down from 300 over the same period in 2021.

Figures for patent grants related to AI followed a different pattern from filings, growing from 44 in the three months ending March 2021 to 69 in the same period in 2022.

The figures are compiled by GlobalData, who track patent filings and grants from official offices around the world. Using textual analysis, as well as official patent classifications, these patents are grouped into key thematic areas, and linked to key companies across various industries.
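
GlobalData's methodology is not spelled out beyond this, but the basic idea of grouping patents into thematic areas using textual analysis and official classification codes can be sketched roughly as follows. The field names, keyword list, and CPC prefix below are illustrative assumptions made for the example, not GlobalData's actual pipeline.

```python
from collections import Counter

# Hypothetical sketch: tag patent records as "AI" using simple keyword matching
# on the abstract plus official classification codes, then count filings per
# assignee. Field names, keywords, and the CPC prefix are assumptions.
AI_KEYWORDS = {"neural network", "machine learning", "deep learning",
               "artificial intelligence"}
AI_CPC_PREFIXES = ("G06N",)  # CPC class commonly used for AI/ML methods

def is_ai_patent(record: dict) -> bool:
    """Return True if a patent record looks AI-related."""
    text = record.get("abstract", "").lower()
    keyword_hit = any(kw in text for kw in AI_KEYWORDS)
    cpc_hit = any(code.startswith(AI_CPC_PREFIXES)
                  for code in record.get("cpc_codes", []))
    return keyword_hit or cpc_hit

def ai_filings_by_company(records: list[dict]) -> Counter:
    """Count AI-tagged filings per assignee for one quarter's records."""
    return Counter(r["assignee"] for r in records if is_ai_patent(r))
```

Counting tagged filings per assignee and per quarter in this fashion is, in essence, how headline figures like those above are produced, although real pipelines use far richer text analysis than keyword matching.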

AI is one of the key areas tracked by GlobalData. It has been identified as a key disruptive force facing companies in the coming years, and companies that invest resources in it now are expected to reap rewards later.

The figures also provide an insight into the largest innovators in the sector.

Toyota Motor Corp was the top AI innovator in the automotive manufacturing and supply sector in the latest quarter. The company, which has its headquarters in Japan, filed 70 AI-related patents in the three months ending March, up from 52 over the same period in 2021.

It was followed by Germany-based Porsche Automobil Holding SE with 24 AI patent applications, US-based Ford Motor Co (24 applications), and Ireland-based Aptiv Plc (21 applications).

Aptiv Plc has recently ramped up R&D in AI. It saw growth of 47.6% in related patent applications in the three months ending March compared with the same period in 2021, the highest percentage growth among all companies tracked with more than 10 quarterly patents in the automotive manufacturing and supply sector.
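
The growth figures quoted in this piece are simple percentage changes between matching quarters. A quick sketch, using the counts cited above, shows how they are derived:

```python
def pct_change(previous: int, current: int) -> float:
    """Quarter-over-quarter percentage change."""
    return (current - previous) / previous * 100

# Counts quoted in the article (three months ending March, 2021 vs 2022)
print(f"Sector filings: {pct_change(300, 253):+.1f}%")  # about -15.7%
print(f"Sector grants:  {pct_change(44, 69):+.1f}%")    # about +56.8%
print(f"Toyota filings: {pct_change(52, 70):+.1f}%")    # about +34.6%
```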

Original post:
Artificial intelligence innovation among automotive industry companies has dropped off in the last year - just-auto.com

Davos 2022: Artificial intelligence is vital in the race to meet the SDGs – Imperial College London

As world leaders meet in Davos this week for the World Economic Forum, President Alice Gast writes on using AI in the race to meet the SDGs.

President Gast writes: "It is imperative that we put good processes and practices in place to ensure AI is developed in a positive and ethical way to see it adopted and used to its fullest by citizens and governments.

"We must now work together to ensure that artificial intelligence can accelerate progress of the Sustainable Development Goals and help us get back on track to reaching them by 2030."

President Gast highlighted some of the projects at Imperial that are harnessing the potential of AI to reach the SDGs.

Researchers from Google Health, DeepMind, the NHS, Northwestern University and colleagues at Imperial have designed and trained an AI model to spot breast cancer from X-ray images.

The computer algorithm, which was trained using mammography images from almost 29,000 women, was shown to be as effective as human radiologists in spotting cancer. At a time when health services around the world are stretched as they deal with long backlogs of patients following the pandemic, this sort of technology can help ease bottlenecks and improve treatment.
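
The article does not say how "as effective as human radiologists" was measured, but comparisons of this kind are typically reported as sensitivity and specificity for the model and the human readers on the same test set. A minimal, purely illustrative sketch follows; the labels and predictions are invented, not data from the study.

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: compare model reads and radiologist reads against the same
# (hypothetical) ground-truth labels for eight cases.
truth       = [1, 0, 1, 1, 0, 0, 1, 0]
model       = [1, 0, 1, 0, 0, 0, 1, 0]
radiologist = [1, 0, 0, 1, 0, 1, 1, 0]

print("model:      ", sensitivity_specificity(truth, model))
print("radiologist:", sensitivity_specificity(truth, radiologist))
```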

For malaria, a handheld lab-on-a-chip molecular diagnostics system developed with AI could revolutionize how the disease is detected in remote parts of Africa. The project, which is led by the Digital Diagnostics for Africa Network, brings together collaborators such as MinoHealth AI Labs in Ghana and Imperial's Global Development Hub. This technology could help pave the way for universal health coverage and push us towards achieving SDG 3.

With an expanding global population, we face challenges around food demand and production: not only how to reduce malnourishment, but also the impact on the planet, such as deforestation, emissions, and biodiversity loss. To meet these needs, the use of artificial intelligence in agriculture is growing rapidly, enabling farmers to enhance crop production, direct machinery to carry out tasks autonomously, and identify pest infestations before they take hold.

Smart sensing technology is also helping farmers use fertilizer more effectively and reduce environmental damage. An exciting research project, funded by the EPSRC, Innovate UK and Cytiva, will help growers optimize the timing and amount of fertilizer to use on their crops, taking into account factors like the weather and soil condition. This will reduce the expense and damaging effects of over-fertilizing soil.

Imperial College Business School will be in Davos alongside the World Economic Forum from 22-26 May 2022. They will be holding a series of events on the theme of Transforming Organisations, Transforming Markets, an opportunity to explore some of the most pressing challenges and exciting opportunities facing business today.

One of the salons will focus on 'Delivering AI ethics' and another will focus on 'Decoding digital assets'.

Imperial academics will also be taking part in fringe panels on 'Integrating SDGs in the transition to Web3 technologies' and 'Rethinking Capitalism in the 4th industrial revolution'.

Read this article:
Davos 2022: Artificial intelligence is vital in the race to meet the SDGs - Imperial College London