What is Artificial Intelligence? Guide to AI

By any measure, artificial intelligence (AI) has become big business.

According to Gartner, customers worldwide will spend $62.5 billion on AI software in 2022. And it notes that 48 percent of CIOs have either already deployed some sort of AI software or plan to do so within the next twelve months.

All that spending has attracted a huge crop of startups focused on AI-based products. CB Insights reported that AI funding hit $15.1 billion in the first quarter of 2022 alone. And that came right after a quarter that saw investors pour $17.1 billion into AI startups. Given that data drives AI, it's no surprise that related fields like data analytics, machine learning, and business intelligence are all seeing rapid growth.

But what exactly is artificial intelligence? And why has it become such an important and lucrative part of the technology industry?

Also see: Top AI Software

In some ways, artificial intelligence is the opposite of natural intelligence. If living creatures can be said to be born with natural intelligence, man-made machines can be said to possess artificial intelligence. So from a certain point of view, any thinking machine has artificial intelligence.

And in fact, one of the early pioneers of AI, John McCarthy, defined artificial intelligence as "the science and engineering of making intelligent machines."

In practice, however, computer scientists use the term artificial intelligence to refer to machines doing the kind of thinking that humans have taken to a very high level.

Computers are very good at making calculations: taking inputs, manipulating them, and generating outputs as a result. But in the past they have not been capable of other types of work that humans excel at, such as understanding and generating language, identifying objects by sight, creating art, or learning from past experience.

But that's all changing.

Today, many computer systems have the ability to communicate with humans using ordinary speech. They can recognize faces and other objects. They use machine learning techniques, especially deep learning, in ways that allow them to learn from the past and make predictions about the future.
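
As a rough illustration (not drawn from the original article), the snippet below sketches what "learning from the past" means in practice: a model is fit on historical examples and then asked to predict an outcome it has never seen. The data and feature names are hypothetical, and scikit-learn's LogisticRegression stands in for any of the many algorithms that could play this role.

```python
# A minimal sketch of supervised machine learning: fit a model on past
# (hypothetical) observations, then predict the label of an unseen case.
from sklearn.linear_model import LogisticRegression

# Past observations: [weekly_hours_of_use, support_tickets] -> churned (1) or stayed (0)
past_features = [[2, 4], [3, 3], [5, 2], [30, 1], [35, 0], [40, 0]]
past_labels   = [1,      1,      1,      0,       0,       0]

model = LogisticRegression()
model.fit(past_features, past_labels)   # "learn from the past"

# Make a prediction about a customer the model has never seen before
print(model.predict([[25, 1]]))         # likely [0]: predicted to stay
```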

So how did we get here?

Also see: How AI is Altering Software Development with AI-Augmentation

Many people trace the history of artificial intelligence back to 1950, when Alan Turing published "Computing Machinery and Intelligence." Turing's essay began, "I propose to consider the question, 'Can machines think?'" It then laid out a scenario that came to be known as the Turing Test. Turing proposed that a computer could be considered intelligent if a person could not distinguish the machine from a human being.

In 1956, John McCarthy and Marvin Minsky hosted the first artificial intelligence conference, the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI). It convinced computer scientists that artificial intelligence was an achievable goal, setting the foundation for several decades of further research. And early forays into AI technology developed bots that could play checkers and chess.

The 1960s saw the development of robots and several problem-solving programs. One notable highlight was the creation of ELIZA, a program that simulated psychotherapy and provided an early example of human-machine communication.

In the 1970s and '80s, AI development continued but at a slower pace. The field of robotics in particular saw significant advances, such as robots that could see and walk. And Mercedes-Benz introduced the first (extremely limited) autonomous vehicle. However, government funding for AI research decreased dramatically, leading to a period some refer to as the "AI winter."

Interest in AI surged again in the 1990s. The Artificial Linguistic Internet Computer Entity (ALICE) chatbot demonstrated that natural language processing could lead to human-computer communication that felt far more natural than what had been possible with ELIZA. The decade also saw a surge in analytic techniques that would form the basis of later AI development, as well as the development of early recurrent neural network architectures. This was also the decade when IBM rolled out Deep Blue, the first chess-playing computer to defeat a reigning world champion.

The first decade of the 2000s saw rapid innovation in robotics. The first Roombas began vacuuming rugs, and robots launched by NASA explored Mars. Closer to home, Google was working on a driverless car.

The years since 2010 have been marked by unprecedented advances in AI technology. Both hardware and software developed to a point where object recognition, natural language processing, and voice assistants became possible. IBM's Watson won Jeopardy! Siri, Alexa, and Cortana came into being, and chatbots became a fixture of modern retail. Google DeepMind's AlphaGo beat human Go champions. And enterprises in all industries have begun deploying AI tools to help them analyze their data and become more successful.

Now AI is truly beginning to evolve beyond narrow, limited implementations into more advanced forms.

Also see: The History of Artificial Intelligence

Different groups of computer scientists have proposed different ways of classifying the types of AI. One popular classification uses three categories:

- Narrow AI (weak AI): systems built to handle a single, specific task, such as recognizing faces or recommending products. Virtually all AI in use today falls into this category.
- General AI (strong AI): a still-hypothetical system capable of learning and reasoning across any intellectual task a human can perform.
- Superintelligent AI: a theoretical system that would surpass human intelligence in virtually every field.

Another popular classification uses four different categories:

- Reactive machines: systems that respond to current inputs but retain no memory of the past, such as IBM's Deep Blue.
- Limited memory: systems that draw on recent data to inform decisions, as self-driving cars do.
- Theory of mind: a future class of AI that would understand human emotions, beliefs, and intentions.
- Self-aware AI: a still-theoretical class of systems with consciousness and a sense of self.

While these classifications are interesting from a theoretical standpoint, most organizations are far more interested in what they can do with AI. And that brings us to the aspect of AI that is generating a lot of revenue: the AI use cases.

Also see: Three Ways to Get Started with AI

The possible applications for artificial intelligence are virtually limitless. Some of today's most common AI use cases include the following:

- Voice assistants and chatbots
- Recommendation engines for retail and media
- Fraud detection and cybersecurity
- Image and facial recognition
- Predictive analytics and predictive maintenance
- Autonomous and semi-autonomous vehicles

Of course, these are just some of the more widely known use cases for AI. The technology is seeping into daily life in so many ways that we often aren't fully aware of them.

Also see: Best Machine Learning Platforms

So what does the future hold for AI? Clearly it is reshaping both consumer and business markets.

The technology that powers AI continues to progress at a steady rate. Future advances like quantum computing may eventually enable major new innovations, but for the near term, it seems likely that the technology itself will continue along a predictable path of constant improvement.

What's less clear is how humans will adapt to AI. That uncertainty raises questions that loom large over human life in the decades ahead.

Many early AI implementations have run into major challenges. In some cases, the data used to train models has allowed bias to infect AI systems, rendering them unusable.
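
As a rough, hypothetical sketch (not drawn from the article), the example below shows one way such bias can creep in: when a protected attribute correlates with the outcome in the historical training data, a model can learn to rely on it. The data, feature names, and use of scikit-learn are all illustrative assumptions.

```python
# A hypothetical illustration of training-data bias: in this skewed history,
# candidates from group 1 were rarely hired regardless of experience, so the
# model learns to penalize group membership rather than judge on merit.
from sklearn.linear_model import LogisticRegression

# Features: [years_of_experience, group (0 or 1)] -> hired (1) or rejected (0)
X_train = [[6, 0], [5, 0], [2, 0], [6, 1], [5, 1], [2, 1]]
y_train = [1,      1,      0,      0,      0,      0]

model = LogisticRegression().fit(X_train, y_train)

# Two equally experienced candidates, differing only in group membership,
# get noticeably different scores: the bias in the data has been learned.
print(model.predict_proba([[6, 0]])[0][1])  # probability of "hired" for group 0
print(model.predict_proba([[6, 1]])[0][1])  # lower probability for group 1
```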

In many other cases, businesses have not seen the financial results they hoped for after deploying AI. The technology may be mature, but the business processes surrounding it are not.

"The AI software market is picking up speed, but its long-term trajectory will depend on enterprises advancing their AI maturity," said Alys Woodward, senior research director at Gartner.

"Successful AI business outcomes will depend on the careful selection of use cases," Woodward added. "Use cases that deliver significant business value, yet can be scaled to reduce risk, are critical to demonstrate the impact of AI investment to business stakeholders."

Organizations are turning to approaches like AIOps to help them better manage their AI deployments. And they are increasingly looking for human-centered AI that harnesses artificial intelligence to augment rather than to replace human workers.

In a very real sense, the future of AI may be more about people than about machines.

Also see: The Future of Artificial Intelligence
