The Impact of Artificial Intelligence – Widespread Job Losses

There's no question that Artificial Intelligence (AI) and automation will change the way we live; the question isn't if, it's how and when. In this post, I'll be exploring both optimistic and pessimistic views of how artificial intelligence and automation will impact our future workforce.

Technology-driven societal changes, like what we're experiencing with AI and automation, always engender concern and fear, and for good reason. A two-year study from the McKinsey Global Institute suggests that by 2030, intelligent agents and robots could replace as much as 30 percent of the world's current human labor. McKinsey suggests that, in terms of scale, the automation revolution could rival the move away from agricultural labor during the 1900s in the United States and Europe, and more recently, the explosion of the Chinese labor economy.

McKinsey reckons that, depending upon various adoption scenarios, automation will displace between 400 and 800 million jobs by 2030, requiring as many as 375 million people to switch job categories entirely. How could such a shift not cause fear and concern, especially for the world's vulnerable countries and populations?

The Brookings Institution suggests that even if automation only reaches the 38 percent mean of most forecasts, some Western democracies are likely to resort to authoritarian policies to stave off civil chaos, much like they did during the Great Depression. Brookings writes that "the United States would look like Syria or Iraq, with armed bands of young men with few employment prospects other than war, violence, or theft." With frightening yet authoritative predictions like those, it's no wonder AI and automation keep many of us up at night.

The Luddites were textile workers who protested against automation, eventually attacking and burning factories because they feared that unskilled machine operators were robbing them of their livelihood. The Luddite movement occurred all the way back in 1811, so concerns about job losses or job displacement due to automation are far from new.

When fear or concern is raised about the potential impact of artificial intelligence and automation on our workforce, a typical response is thus to point to the past: the same concerns have been raised time and again and have proved unfounded.

In 1961, President Kennedy said that "the major challenge of the sixties is to maintain full employment at a time when automation is replacing men." In the 1980s, the advent of personal computers spurred "computerphobia," with many fearing that computers would replace them.

So what happened?

Despite these fears and concerns, every technological shift has ended up creating more jobs than it destroyed. When particular tasks are automated, becoming cheaper and faster, more human workers are needed for the other functions in the process that haven't been automated.

"During the Industrial Revolution more and more tasks in the weaving process were automated, prompting workers to focus on the things machines could not do, such as operating a machine, and then tending multiple machines to keep them running smoothly. This caused output to grow explosively. In America during the 19th century the amount of coarse cloth a single weaver could produce in an hour increased by a factor of 50, and the amount of labour required per yard of cloth fell by 98%. This made cloth cheaper and increased demand for it, which in turn created more jobs for weavers: their numbers quadrupled between 1830 and 1900. In other words, technology gradually changed the nature of the weaver's job, and the skills required to do it, rather than replacing it altogether." – The Economist, "Automation and Anxiety"

Looking back on history, it seems reasonable to conclude that fears and concerns regarding AI and automation are understandable but ultimately unwarranted. Technological change may eliminate specific jobs, but it has always created more in the process.

Beyond net job creation, there are other reasons to be optimistic about the impact of artificial intelligence and automation.

"Simply put, jobs that robots can replace are not good jobs in the first place. As humans, we climb up the rungs of drudgery, from physically taxing or mind-numbing jobs to jobs that use what got us to the top of the food chain: our brains." – The Wall Street Journal, "The Robots Are Coming. Welcome Them."

By eliminating the tedium, AI and automation can free us to pursue careers that give us a greater sense of meaning and well-being: careers that challenge us, instill a sense of progress, provide us with autonomy, and make us feel like we belong, all research-backed attributes of a satisfying job.

And at a higher level, AI and automation will also help to eliminate disease and world poverty. Already, AI is driving great advances in medicine and healthcare through better disease prevention, more accurate diagnosis, and more effective treatments and cures. When it comes to eliminating world poverty, one of the biggest barriers is identifying where help is needed most. By applying AI analysis to data from satellite images, this barrier can be surmounted and aid can be focused most effectively.

I am all for optimism. But as much as I'd like to believe all of the above, this bright outlook on the future relies on some seemingly shaky premises.

As explored earlier, a common response to fears and concerns over the impact of artificial intelligence and automation is to point to the past. However, this approach only works if the future behaves similarly. There are many things that are different now than in the past, and these factors give us good reason to believe that the future will play out differently.

In the past, technological disruption of one industry didn't necessarily mean the disruption of another. Let's take car manufacturing as an example: a robot in automobile manufacturing can drive big gains in productivity and efficiency, but that same robot would be useless trying to manufacture anything other than a car. The underlying technology of the robot might be adapted, but at best that still only addresses manufacturing.

AI is different because it can be applied to virtually any industry. When you develop AI that can understand language, recognize patterns, and solve problems, disruption isn't contained. Imagine creating an AI that can diagnose disease and handle medications, address lawsuits, and write articles like this one. No need to imagine: AI is already doing those exact things.

Another important distinction between now and the past is the speed of technological progress. Technological progress doesn't advance linearly; it advances exponentially. Consider Moore's Law: the number of transistors on an integrated circuit doubles roughly every two years.

In the words of University of Colorado physics professor Albert Allen Bartlett, "The greatest shortcoming of the human race is our inability to understand the exponential function." We drastically underestimate what happens when a value keeps doubling.
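To see just how quickly doubling compounds, here is a small Python sketch (my own illustration, not tied to any particular chip data): a quantity that doubles every two years is more than a million times its starting value after forty years.

```python
# A minimal sketch of how quickly a doubling process compounds.
# Assumes a Moore's-Law-style doubling every two years from an arbitrary baseline.

def doublings(start: float, years: int, period: int = 2) -> float:
    """Return the value after `years` of doubling every `period` years."""
    return start * 2 ** (years // period)

if __name__ == "__main__":
    base = 1.0  # normalized transistor count (or any capability metric)
    for years in (2, 10, 20, 40):
        print(f"after {years:2d} years: {doublings(base, years):,.0f}x the baseline")
    # After 40 years of doubling every two years, the value is over a million
    # times the baseline, the kind of growth most of us fail to internalize.
```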

What do you get when technological progress is accelerating and AI can do jobs across a range of industries? An accelerating pace of job destruction.

"There's no economic law that says you will always create enough jobs or the balance will always be even. It's possible for a technology to dramatically favour one group and to hurt another group, and the net of that might be that you have fewer jobs." – Erik Brynjolfsson, Director of the MIT Initiative on the Digital Economy

In the past, yes, more jobs were created than were destroyed by technology, and workers were able to reskill and move laterally into other industries. But the past isn't always an accurate predictor of the future. We can't complacently sit back and think that everything is going to be OK.

Which brings us to another critical issue.

Let's pretend for a second that the past actually will be a good predictor of the future: jobs will be eliminated, but more jobs will be created to replace them. This brings up an absolutely critical question: what kinds of jobs are being created, and what kinds of jobs are being destroyed?

"Low- and high-skilled jobs have so far been less vulnerable to automation. The low-skilled job categories that are considered to have the best prospects over the next decade (including food service, janitorial work, gardening, home health, childcare, and security) are generally physical jobs that require face-to-face interaction. At some point robots will be able to fulfill these roles, but there's little incentive to roboticize these tasks at the moment, as there's a large supply of humans who are willing to do them for low wages." – Slate, "Will robots steal your job?"

Blue-collar and white-collar jobs will be eliminated; basically, anything that requires middle skills (meaning that it requires some training, but not much). This leaves the low-skill jobs described above, and high-skill jobs that require high levels of training and education.

There will assuredly be an increasing number of jobs related to programming, robotics, engineering, and so on. After all, these skills will be needed to improve and maintain the AI and automation being used around us.

But will the people who lost their middle-skill jobs be able to move into these high-skill roles instead? Certainly not without significant training and education. What about moving into low-skill jobs? The number of these jobs is unlikely to increase, particularly because a middle class that is losing jobs stops spending money on food service, gardening, home health, and the like.

The transition could be very painful. It's no secret that rising unemployment has a negative impact on society: less volunteerism, higher crime, and more drug abuse are all correlated with it. A period of high unemployment, in which tens of millions of people are incapable of getting a job because they simply don't have the necessary skills, will be our reality if we don't adequately prepare.

So how do we prepare? At a minimum, by overhauling our entire education system and providing means for people to re-skill.

To transition from 90% of the American population farming to just 2% during the first industrial revolution, it took the mass introduction of primary education to equip people with the necessary skills to work. The problem is that we're still using an education system geared for the industrial age. The three Rs (reading, writing, arithmetic) were once the important skills to learn to succeed in the workforce. Now, those are the very skills being overtaken by AI.

For a fascinating look at our current education system and its faults, see Sir Ken Robinson's talk on the subject.

In addition to transforming our whole education system, we should also accept that learning doesn't end with formal schooling. The exponential acceleration of digital transformation means that learning must be a lifelong pursuit, with constant re-skilling to meet an ever-changing world.

Making huge changes to our education system, providing means for people to re-skill, and encouraging lifelong learning can help mitigate the pain of the transition, but is that enough?

When I originally wrote this article a couple of years ago, I believed firmly that 99% of all jobs would be eliminated. Now, I'm not so sure. Here was my argument at the time:

[The claim that 99% of all jobs will be eliminated] may seem bold, and yet it's all but certain. All you need are two premises:

1. Technological progress will continue.
2. Human intelligence arises from physical processes.

The first premise shouldn't be at all controversial. The only reason to think that we would permanently stop progress, of any kind, is some extinction-level event that wipes out humanity, in which case this debate is irrelevant. Excluding such a disaster, technological progress will continue on an exponential curve. And it doesn't matter how fast that progress is; all that matters is that it will continue. The incentives for people, companies, and governments are too great to think otherwise.

The second premise will be controversial, but notice that I said human intelligence. I didn't say consciousness or what it means to be human. That human intelligence arises from physical processes seems easy to demonstrate: if we affect the physical processes of the brain, we can observe clear changes in intelligence. Though a gloomy example, it's clear that poking holes in a person's brain results in changes to their intelligence. A well-placed poke in someone's Broca's area and, voilà, that person can't process speech.

With these two premises in hand, we can conclude the following: we will build machines that have human-level intelligence and higher. It's inevitable.

We already know that machines are better than humans at physical tasks; they can move faster, more precisely, and lift greater loads. When these machines are also as intelligent as us, there will be almost nothing they can't do, or can't learn to do quickly. Therefore, 99% of jobs will eventually be eliminated.

But that doesn't mean we'll be redundant. We'll still need leaders (unless we give ourselves over to robot overlords), and our arts, music, and the like may remain solely human pursuits too. As for just about everything else? Machines will do it, and do it better.

But who's going to maintain the machines? The machines. But who's going to improve the machines? The machines.

Assuming they could eventually learn 99% of what we do, surely they'll be capable of maintaining and improving themselves more precisely and efficiently than we ever could.

The above argument is sound, but the conclusion that 99% of all jobs will be eliminated, I now believe, over-focuses on our current conception of a job. As I pointed out above, there's no guarantee that the future will play out like the past. After continuing to reflect and learn over the past few years, I now think there's good reason to believe that while 99% of all current jobs might be eliminated, there will still be plenty for humans to do (which is really what we care about, isn't it?).

"The one thing that humans can do that robots can't (at least for a long while) is to decide what it is that humans want to do. This is not a trivial semantic trick; our desires are inspired by our previous inventions, making this a circular question." – Kevin Kelly, The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future

Perhaps another way of looking at the above quote is this: a few years ago I read the book Emotional Intelligence and was shocked to discover just how essential emotions are to decision making. Not just important, essential. People who had experienced brain damage to the emotional centers of their brains were incapable of making even the smallest decisions. When faced with a number of choices, they could think of logical reasons for doing or not doing any of them, but they had no emotional push or pull to help them choose.

So while AI and automation may eliminate the need for humans to do any of the doing, we will still need humans to determine what to do. And because everything that we do and everything that we build sparks new desires and shows us new possibilities, this job will never be eliminated.

If you had predicted in the early 19th century that almost all jobs would be eliminated, and you defined jobs as agricultural work, you would have been right. In the same way, I believe that what we think of as jobs today will almost certainly be eliminated too. But this does not mean that there will be no jobs at all; the job will instead shift to determining what we want to do, and then working with our AI and machines to make those desires a reality.

Is this overly optimistic? I don't think so. Either way, there's no question that the impact of artificial intelligence will be great, and it's critical that we invest in the education and infrastructure needed to support people as many current jobs are eliminated and we transition to this new future.

Originally published on April 1, 2017. Updated on January 29, 2020.


Artificial Intelligence (AI) Partnering Deals Collection 2014-2020: Access to Over 350 AI Deal Records – ResearchAndMarkets.com – Yahoo Finance

The "Global Artificial Intelligence (AI) Partnering Terms and Agreements (2014-2020)" report has been added to ResearchAndMarkets.com's offering.

This report provides an understanding and access to the artificial intelligence partnering deals and agreements entered into by the world's leading healthcare companies.

Global Artificial Intelligence Partnering Terms and Agreements includes:

The report provides a detailed understanding and analysis of how and why companies enter artificial intelligence partnering deals. The majority of deals are at an early development stage, whereby the licensee obtains a right or an option to license the licensor's artificial intelligence technology or product candidates. These deals tend to be multicomponent, starting with collaborative R&D and progressing to commercialization of outcomes.

Understanding the flexibility of a prospective partner's negotiated deals terms provides critical insight into the negotiation process in terms of what you can expect to achieve during the negotiation of terms. Whilst many smaller companies will be seeking details of the payments clauses, the devil is in the detail in terms of how payments are triggered - contract documents provide this insight where press releases and databases do not.

This report contains a comprehensive listing of all artificial intelligence partnering deals announced since 2014, including financial terms where available, plus over 350 links to online deal records of actual artificial intelligence partnering deals as disclosed by the deal parties. In addition, where available, records include contract documents as submitted to the Securities and Exchange Commission by companies and their partners.

Contract documents provide the answers to numerous questions about a prospective partner's flexibility on a wide range of important issues, many of which will have a significant impact on each party's ability to derive value from the deal.

For example, analyzing actual company deals and agreements allows assessment of the following:

The initial chapters of this report provide an orientation of artificial intelligence dealmaking and business activities.


For more information about this report visit https://www.researchandmarkets.com/r/oseg3y



This Man Created A Perfect AC/DC Song By Using Artificial Intelligence – Kerrang!

While we've long been enjoying some weird and wonderful mash-ups courtesy of the internet's most hilarious and creative YouTubers, clearly it's too much effort to be letting humans do all the work these days. As such, satirist Funk Turkey has handed the task of creating new material over to artificial intelligence, using robots to make a pretty ace AC/DC song.

The track in question, Great Balls, came about using lyrics.rip to generate the words, before Funk channeled his best Brian Johnson to sing this hilarious mish-mash of lyrics ("Wasn't the dog a touch too young to thrill?" sorry, what?), and then backed it all with suitably AC/DC-esque instrumentation.


Of course, there's hopefully real AC/DC material on the way at some point soon, with Twisted Sister vocalist Dee Snider revealing in December 2019 that all four surviving members have reunited for a new record, and that "it's as close as you can get to the original band."

Until then, though, here's Great Balls to tide us over:

In fairness, lyrics.rip is actually a pretty great little tool. We tried the same thing for Green Day to see what fine words would come out now, to get Billie Joe Armstrong to perform them:

An ambulance thats turning on the way across towncause you feeling sorry for that your whining eyesWhen September endsHere comes the waitingJust roamin for yourselfAre we are the silence with the brick of my way to search the story of my memory rests,but never forgets what I bleeding from the brick of my heads above the starsAre the waitingMy heads above the brick of self-controlTo live?My heads above the innocent can never lastTo searchthe

Okay then.




Artificial Intelligence to Detect Coronavirus Infection Among Individuals Without Actual Test – The Weather Channel

A doctor collects a throat swab specimen for the test of the novel coronavirus that causes COVID-19, at Kurla in Mumbai.

As the novel coronavirus pandemic COVID-19 continues to spread across the globe, researchers are racing against time to find possible preventive measures, tests and cures to arrest the spread. While the pandemic enters the stage of community spread in many parts of the world, countries are running short of essential medical kits to test sufficient numbers of people.

Testing is the need of the hour, and to catalyse the pace of testing, scientists have now developed an artificial intelligence-based diagnostic tool. The incredible new tool can help predict if an individual is likely to have COVID-19 disease, based on the symptoms they display. The discovery was recently published in the journal Nature Medicine.

Researchers developed the artificial intelligence-based model using data from an app called COVID Symptom Study. So far, the app is said to have been downloaded by about 33 lakh people globally. Users report their health status daily on the app, and according to the paper, the app collects data from both asymptomatic and symptomatic individuals. It also tracks disease progression in real time by recording self-reported health information daily.

To develop the AI-based prediction system, researchers examined the data collected from about 25 lakh people in the United Kingdom and the United States between March 24 and April 21, all of whom used the app regularly to log their health status.

Based on users' reported symptoms and health status, the AI-based model predicts who might have COVID-19. The model also uses the actual test results of people who have tested positive, and it takes into account information such as test outcomes, demographics, and pre-existing medical conditions.

The research team analysed which symptoms of COVID-19 are most likely to accompany positive results. These key symptoms include cold, flu-like illness, fever, cough, and fatigue. They also found loss of taste and smell to be a common characteristic of COVID-19 disease.

When the AI-based model was applied to data from over 800,000 app users who reported symptoms, it indicated that about 17.42% of these people were likely to have coronavirus. The tool has also proven beneficial in recognising patients who have developed only mild symptoms, which could help stop the spread of the virus by making people aware that they might be potential carriers.
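To make the idea concrete, here is a minimal, hypothetical sketch in Python of the kind of symptom-based classifier the article describes. The feature names, the toy data, and the use of scikit-learn's logistic regression are my own assumptions for illustration; they are not the study's actual model or dataset.

```python
# Minimal sketch of a symptom-based classifier like the one described above.
# The feature names and toy data are illustrative only, not the study's dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: fever, persistent cough, fatigue, loss of taste/smell (1 = reported)
X = np.array([
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [1, 0, 0, 0],
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = tested positive, 0 = tested negative

model = LogisticRegression().fit(X, y)

# Predict the probability of infection for a new self-report:
# cough plus loss of taste/smell, but no fever or fatigue.
new_report = np.array([[0, 1, 0, 1]])
print(f"estimated probability of infection: {model.predict_proba(new_report)[0, 1]:.2f}")
```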

The most valuable feature of this AI model is that it can predict COVID-19 without patients getting an actual test. Particularly at the time of a pandemic, the app could prove to be of significant value for highly populated countries like India.



Couple uses artificial intelligence to have baby in major breakthrough trial – Mirror Online

Groundbreaking artificial intelligence is helping couples to become parents.

A trial is currently taking place in Australia using technology to increase the chances of having a baby through IVF, 9news.com.au reports.

During the study, led by fertility provider Virtus Health, embryos are being grown in incubators with tiny cameras.

By taking 115,000 pictures over five days, these cameras then help to predict fetal heart outcomes and identify the healthiest embryos before they are implanted.

The incredible trial - which is taking place at seven different fertility clinics - has so far led to 90% of couples having a child through IVF.

Among those taking part are Sarah and Tim Keys, from Queensland, who have been trying to have a baby for a number of years.

The couple decided to turn to IVF after suffering a number of miscarriages.

Ms Keys, who is now 26 weeks pregnant, explained: "It's really hard to go through those miscarriages so anything that could decrease the chances, let's go with that."

She adds that the couple are now "very excited" to be expecting a little girl.

"I think we'll still be a bit stressed until we're holding her, but where we're at, at the moment is really awesome," she continued.

It is hoped that a total of 1,000 patients will take part in the study, which is also being carried out in clinics in Ireland and Denmark.

If successful, doctors claim that AI could be one of the biggest advances to IVF in decades, and hope it can be used globally.

Associate Professor Anusch Yazdani from the Queensland Fertility Group said: "It's completely new, completely different and it's all to do with the evolution of computer technology."


Artificial Intelligence | Computer Science

The name artificial intelligence covers a lot of disparate problem areas, united mainly by the fact that they involve complex inputs and outputs that are difficult to compute (or even check for correctness when supplied). One of the most interesting such areas is sensor-controlled behavior, in which a machine acts in the real world using information gathered from sensors such as sonars and cameras. This is a major focus of A.I. research at Yale.

The difference between sensor-controlled behavior and what computers usually do is that the input from a sensor is ambiguous. When a computer reads a record from a database, it can be certain what the record says. There may be philosophical doubt about whether an employee's social-security number really succeeds in referring to a flesh-and-blood employee, but such doubts don't affect how programs are written. As far as the computer system is concerned, the identifying number is the employee, and it will happily, and successfully, use it to access all relevant data as long as no internal inconsistency develops.

Contrast that with a computer controlling a soccer-playing robot, whose only sensor is a camera mounted above the field. The camera tells the computer, several times per second, the pattern of illumination it is receiving, encoded as an array of numbers. (Actually, it's three arrays: one for red, one for green, and one for blue.) The vision system must extract from this large set of numbers the locations of all the robots (on its team and the opponent's) plus the ball. What it extracts is not an exact description; it is always noisy, and occasionally grossly wrong. In addition, by the time the description is available it is always slightly out of date. The computer must decide quickly how to alter the behavior of the robots, send them messages to accomplish that, and then process the next image.
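As a rough illustration of what extracting a location from three arrays of numbers can look like, here is a toy Python/NumPy sketch that thresholds the color channels and takes the centroid of the matching pixels. It is not Yale's vision system; it simply shows why the result is inherently approximate and noisy.

```python
# Toy sketch of extracting an object location from a camera image supplied as
# three colour arrays (red, green, blue), as described above. This is not the
# Yale system; it just illustrates thresholding the channels and taking the
# centroid of the matching pixels, which is inherently noisy.
import numpy as np

def find_orange_ball(red: np.ndarray, green: np.ndarray, blue: np.ndarray):
    """Return the (row, col) centroid of pixels that look orange, or None."""
    mask = (red > 200) & (green > 80) & (green < 180) & (blue < 80)
    if not mask.any():
        return None  # the ball may be occluded or out of frame
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# A fake 120x160 frame with an "orange" patch near the centre.
red = np.zeros((120, 160)); green = np.zeros((120, 160)); blue = np.zeros((120, 160))
red[55:65, 75:85] = 230
green[55:65, 75:85] = 120
print(find_orange_ball(red, green, blue))  # roughly (59.5, 79.5)
```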

One might wonder why we choose to work in such a perversely difficult area. There are two obvious reasons. First, one ultimate goal of A.I. research is to understand how people are possible, i.e., how it is that an intelligent system can thrive in the real world. Our vision and other senses are so good that we can sometimes overlook the noise and errors they are prone to, when in fact we are faced with problems that are similar to the robot soccer player's, but much worse. We will never understand human intelligence until we understand how the human brain extracts information from its environment and uses it to guide behavior.

Second, vision and robotics have many practical applications. Space exploration is more cost-effective when robots are the vanguard, as demonstrated dramatically by the Mars Rover mission of 1997. Closer to home, we are already seeing commercially viable applications of the technology. For instance, TV networks can now produce three-dimensional views of an athletic event by combining several two-dimensional views, in essentially the same way animals manage stereo vision. There is now a burgeoning robotic-toy industry, and we can expect robots to appear in more complex roles in our lives. So far, the behaviors these robots can exhibit are quite primitive. Kids are satisfied with a robot that can utter a few phrases or wag its tail when hugged. But it quickly becomes clear, even to a child, that today's toys are not really aware of what is going on around them. The main problem in making them aware is to provide them with better sensors, which means better algorithms for processing the outputs from the sensors.

Research in this area at Yale is carried out by the Center for Computational Vision and Control, a joint effort of the Departments of Computer Science, Electrical Engineering, and Radiology. We will describe three of the ongoing projects in this area.


Artificial intelligence: How to invest – USA TODAY

The first big investment wave in tech was the personal computer. Then came software, the internet, smartphones, social media and cloud computing.

The next big thing is artificial intelligence, or AI, professional stock pickers say.

AI is the science-fiction-like technology in which computers are programmed to think and perform the tasks ordinarily done by humans.

The size of the global AI market is expected to grow to $202.6 billion by 2026, up from $20.7 billion in 2018, according to Fortune Business Insights. Funding of upstart AI companies by venture capitalists remains brisk. Last year, 956 deals valued at $13.5 billion took place through the third quarter, putting AI deal activity on pace for another record year, according to the PitchBook-NVCA Venture Monitor.
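For context, those two figures imply a compound annual growth rate of roughly 33 percent, as a quick back-of-the-envelope calculation shows (a sketch using only the numbers quoted above):

```python
# Quick arithmetic check of the growth figures quoted above: $20.7B in 2018 to
# $202.6B in 2026 implies roughly a 33% compound annual growth rate.
start, end, years = 20.7, 202.6, 2026 - 2018
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # about 33.0%
```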




Mike Lippert, manager of the Baron Opportunity fund, says AI touches more than half of the 60-plus stock holdings in his mutual fund. Those stocks are all about innovation, transformation and disruption, three traits AI has in abundance.

"I won't claim AI is in every stock in the portfolio, but it's all over my portfolio," Lippert tells USA TODAY.

AI is creeping into every business, boosting productivity, customer service, sales, product innovation and operating efficiency. The technology is all about crunching reams of data from around the world, making sense of it and using the information to help businesses add services and operate more efficiently.

"AI applications can be found in virtually every industry today, from marketing to health care to finance," Xiaomin Mou, IFC's senior investment officer, wrote in a report.

It's paving the road to driverless cars, making decisions such as what lane to drive in and when to stop. It's behind the software that tells salespeople which client prospect to call first. It's the brains behind virtual assistants that can interpret voice commands and play songs or provide weather updates.

"There are not a lot of companies, especially if they are growing, that are not benefiting from AI in some ways," Lippert says.

The potential danger of AI, Lippert notes, is that advances such as autonomous driving and more sophisticated machine learning will take jobs from workers.

How can investors who want to get in early on the next Microsoft, Amazon, Apple or Facebook gain exposure to AI in a way that gives them the potential to profit over the long term without too much risk?

Investors must take a long-term approach and not just bet on one or two companies they think will emerge as big winners in AI, says Nidhi Gupta, technology sector leader at Fidelity Investments.

"Diversification is really important," Gupta says, adding that investing in AI exposes investors to a wide range of outcomes.

In searching for AI winners, look for three things to unlock value, Gupta says.

1. Rich data sets that help create the algorithms and apps that make people's lives better.

2. Scaled computing power as big data centers with big servers are needed.

3. AI engineering talent to avoid brainpower bottlenecks.

Among the AI stocks to watch:

Big AI platforms: Leading AI players include well-known, large-cap tech stocks Google parent Alphabet (GOOGL), Amazon (AMZN) and Microsoft (MSFT). These three companies have the rich data sets, computing power and AI engineering talent that Gupta says are key to success.

Chipmakers: Nvidia's (NVDA) powerful and fast computer chips have been found effective for use in machine learning, AI training purposes, data centers and cloud-based computing. Another chipmaker with AI expertise is Xilinx (XLNX), says John Freeman, an analyst at Wall Street research firm CFRA.

Companies benefiting from AI: Many businesses, such as Salesforce (CRM), stand apart from their peers and competitors by integrating AI into their business, says Baron's Lippert. Salesforce Einstein AI, for example, analyzes all types of customer data, ranging from emails to tweets, to better predict which sales leads will convert to new business, he says. Netflix (NFLX) uses AI to recommend shows and programming viewers might like. China's online retailer Alibaba (BABA) uses AI to crunch every customer interaction to make the online sales process smoother. Electric-car maker Tesla (TSLA) uses AI to enable the software that is the driving force behind autonomous cars.

Software makers: Other companies use AI to make software smarter and help solve business problems, Lippert says. Guidewire Software (GWRE), for example, uses AI to help insurers properly price policies, analyze risk, process submitted claims faster and identify insurance fraud. Adobe (ADBE) uses AI to analyze data to quickly identify cyberthreats. Datadog (DDOG) offers AI-inspired cloud monitoring services that let clients know if their web-based apps are behaving properly.

FICO (FICO) is best known for calculating consumer credit scores. It uses AI to make sense of financial data to help clients, such as banks, determine the creditworthiness of borrowers or help detect fraud, CFRA's Freeman says.

Investors who don't want to pick their own stocks can invest in a tech-focused mutual fund or an ETF that focuses specifically on AI. Some examples include the iShares Robotics & Artificial Intelligence ETF (IRBO) and the Global X Robotics & Artificial Intelligence ETF (BOTZ).

"I do think AI is as significant an investing opportunity as the first era of computers," Lippert says.

Investors should expect bumps in the road investing in AI, Freeman warns.

"This is a multi-decade trend," he says. "AI is going to go through some mini-bubbles as well as some very healthy cycles."



4 Main Types of Artificial Intelligence – G2

Although AI is undoubtedly multifaceted, there are specific types of artificial intelligence under which extended categories fall.

What are the four types of artificial intelligence?

There are a plethora of terms and definitions in AI that can make it difficult to navigate the differences between categories, subsets, or types of artificial intelligence, and no, they're not all the same. Some subsets of AI include machine learning, big data, and natural language processing (NLP); however, this article covers the four main types of artificial intelligence: reactive machines, limited memory, theory of mind, and self-awareness.

These four types of artificial intelligence comprise smaller aspects of the general realm of AI.

Reactive machines are the most basic type of AI system. They cannot form memories or use past experiences to influence present decisions; they can only react to currently existing situations, hence "reactive." An existing form of a reactive machine is Deep Blue, a chess-playing supercomputer created by IBM in the mid-1980s.

Deep Blue was created to play chess against a human competitor with the intent of defeating that competitor. It was programmed with the ability to identify a chess board and its pieces while understanding the pieces' functions. Deep Blue could make predictions about what moves it should make and the moves its opponent might make, giving it an enhanced ability to predict, select, and win. In a series of matches played between 1996 and 1997, Deep Blue ultimately defeated Russian chess grandmaster Garry Kasparov, becoming the first computer program to defeat a reigning world chess champion in a match.

Deep Blue's unique skill of accurately and successfully playing chess matches highlights its reactive abilities. In the same vein, its reactive mind also means that it has no concept of past or future; it only comprehends and acts on the presently existing world and the components within it. To simplify, reactive machines are programmed for the here and now, but not the before and after.

Reactive machines have no concept of the world and therefore cannot function beyond the simple tasks for which they are programmed. A characteristic of reactive machines is that, no matter the time or place, these machines will always behave the way they were programmed. There is no growth with reactive machines, only stagnation in recurring actions and behaviors.
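A reactive player of this kind is often built around a fixed-depth game-tree search: evaluate the current position, look a bounded number of moves ahead, pick the best move, and remember nothing afterward. The sketch below is a generic minimax routine on a toy game, purely for illustration; it is not Deep Blue's actual (far deeper and heavily optimized) search.

```python
# A generic minimax sketch of the "reactive" style of play described above:
# the program evaluates only the current position with a bounded look-ahead,
# carrying no memory between moves. This is an illustration, not Deep Blue's
# actual search algorithm.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Return the best achievable score from `state` with `depth` plies of look-ahead."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in options)
    return max(scores) if maximizing else min(scores)

# Toy game: the state is a number, a move either adds 1 or doubles it,
# and the maximizer wants the final number to be as large as possible.
best = minimax(3, depth=2, maximizing=True,
               moves=lambda s: ["add1", "double"],
               apply_move=lambda s, m: s + 1 if m == "add1" else s * 2,
               evaluate=lambda s: s)
print(best)  # 7: the maximizer doubles to 6, then the minimizer adds 1 instead of doubling
```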

Limited memory comprises machine learning models that derive knowledge from previously learned information, stored data, or events. Unlike reactive machines, limited memory learns from the past by observing actions or data fed to it in order to build experiential knowledge.

Although limited memory builds on observational data in conjunction with pre-programmed data the machines already contain, these sample pieces of information are fleeting. An existing form of limited memory is autonomous vehicles.

Autonomous vehicles, or self-driving cars, use the principle of limited memory in that they depend on a combination of observational and pre-programmed knowledge. To observe and understand how to properly drive and function among human-dependent vehicles, self-driving cars read their environment, detect patterns or changes in external factors, and adjust as necessary.

Not only do autonomous vehicles observe their environment, but they also observe the movement of other vehicles and people in their line of vision. Previously, driverless cars without limited memory AI took as long as 100 seconds to react and make judgments on external factors. Since the introduction of limited memory, reaction time on machine-based observations has dropped sharply, demonstrating the value of limited memory AI.
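One way to picture "limited memory" is an agent that keeps only a short sliding window of recent observations and bases its decisions on that window alone. The Python sketch below is purely illustrative (the agent, thresholds, and speeds are made up), not how any real driving stack works.

```python
# Sketch of the "limited memory" idea above: keep only a short sliding window
# of recent observations and decide from it. Purely illustrative; real
# autonomous-vehicle systems are far more complex.
from collections import deque

class LimitedMemoryAgent:
    def __init__(self, window: int = 5):
        self.recent_speeds = deque(maxlen=window)  # older readings fall away

    def observe(self, lead_car_speed: float) -> None:
        self.recent_speeds.append(lead_car_speed)

    def decide(self) -> str:
        if not self.recent_speeds:
            return "hold"
        avg = sum(self.recent_speeds) / len(self.recent_speeds)
        return "brake" if avg < 40 else "hold"

agent = LimitedMemoryAgent()
for speed in (45, 38, 30, 25, 20):   # the car ahead is slowing down
    agent.observe(speed)
print(agent.decide())  # "brake": the recent window, not ancient history, drives the choice
```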


What constitutes theory of mind is decision-making ability equal to the extent of a human mind, but in machines. While there are some machines that currently exhibit humanlike capabilities (voice assistants, for instance), none are fully capable of holding conversations at the level of human standards. One component of human conversation is emotional capacity: sounding and behaving like a person would in the standard conventions of conversation.

This future class of machine ability would include understanding that people have thoughts and emotions that affect behavioral output and thus influence a theory-of-mind machine's thought process. Social interaction is a key facet of human interaction, so to make theory-of-mind machines tangible, the AI systems that control these now-hypothetical machines would have to identify, understand, retain, and remember emotional output and behaviors while knowing how to respond to them.

From this, said theory-of-mind machines would have to be able to use the information derived from people and adapt it into their learning centers to know how to communicate with and treat different situations. Theory of mind is a highly advanced form of proposed artificial intelligence that would require machines to thoroughly acknowledge rapid shifts in emotional and behavioral patterns in humans, and also understand that human behavior is fluid; thus, theory-of-mind machines would have to be able to learn rapidly at a moment's notice.

Some elements of theory of mind AI currently exist or have existed in the recent past. Two notable examples are the robots Kismet and Sophia, created in 2000 and 2016, respectively.

Kismet, developed by Professor Cynthia Breazeal, was capable of recognizing human facial signals (emotions) and could replicate said emotions with its own face, which was structured with human facial features: eyes, lips, ears, eyebrows, and eyelids.

Sophia, on the other hand, is a humanoid bot created by Hanson Robotics. What distinguishes her from previous robots is her physical likeness to a human being as well as her ability to see (image recognition) and respond to interactions with appropriate facial expressions.


These two humanlike robots are examples of movement toward full theory-of-mind AI systems materializing in the near future. While neither fully holds the ability to have a full-blown human conversation with an actual person, both robots have aspects of emotive ability akin to that of their human counterparts, one step toward seamlessly assimilating into human society.

Self-aware AI involves machines that have human-level consciousness. This form of AI is not currently in existence, but would be considered the most advanced form of artificial intelligence known to man.

Facets of self-aware AI include the ability to not only recognize and replicate humanlike actions, but also to think for itself, have desires, and understand its feelings. Self-aware AI, in essence, is an advancement and extension of theory of mind AI. Where theory of mind only focuses on the aspects of comprehension and replication of human practices, self-aware AI takes it a step further by implying that it can and will have self-guided thoughts and reactions.

We are presently in tier three of the four types of artificial intelligence, so believing that we could potentially reach the fourth (and final?) tier of AI doesn't seem like a far-fetched idea.

But for now, it's important to focus on perfecting all aspects of types two and three in AI. Sloppily speeding through each AI tier could be detrimental to the future of artificial intelligence for generations to come.




What is AI? Artificial Intelligence Tutorial for Beginners

What is AI?

A machine with the ability to perform cognitive functions such as perceiving, learning, reasoning, and solving problems is deemed to have artificial intelligence.

Artificial intelligence exists when a machine has cognitive ability. The benchmark for AI is human-level performance in reasoning, speech, and vision.

In this basic tutorial, you will learn what AI is, where the field came from, how it relates to machine learning and deep learning, and why it has taken off now.

Nowadays, AI is used in almost all industries, giving a technological edge to all companies integrating AI at scale. According to McKinsey, AI has the potential to create $600 billion of value in retail and to bring 50 percent more incremental value in banking compared with other analytics techniques. In transport and logistics, the potential revenue jump is 89 percent more.

Concretely, if an organization uses AI for its marketing team, it can automate mundane and repetitive tasks, allowing the sales representatives to focus on tasks like relationship building and lead nurturing. A company named Gong provides a conversation intelligence service: each time a sales representative makes a phone call, the machine records, transcribes, and analyzes the conversation. The VP can then use the AI's analytics and recommendations to formulate a winning strategy.

In a nutshell, AI provides cutting-edge technology to deal with complex data that is impossible for a human being to handle. AI automates redundant jobs, allowing workers to focus on high-level, value-added tasks. When AI is implemented at scale, it leads to cost reduction and revenue increase.

Artificial intelligence is a buzzword today, although the term is not new. In 1956, a group of avant-garde experts from different backgrounds decided to organize a summer research project on AI. Four bright minds led the project: John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories).

The primary purpose of the research project was to tackle "every aspect of learning or any other feature of intelligence that can in principle be so precisely described, that a machine can be made to simulate it."

The proposal for the summer project covered a range of topics related to machine intelligence.

It led to the idea that intelligent computers could be created. A new era began, full of hope: artificial intelligence.

Artificial intelligence can be divided into three subfields:

Machine learning is the study of algorithms that learn from examples and experience.

Machine learning is based on the idea that there exist patterns in the data that can be identified and used for future predictions.

The difference from hardcoding rules is that the machine learns on its own to find such rules.
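A tiny example makes the contrast with hard-coded rules clear. In the hypothetical sketch below, rather than writing the rule by hand, we hand a scikit-learn decision tree a few labelled examples and let it recover the pattern on its own (the data and the "spam" rule are invented for illustration):

```python
# Minimal sketch of the point above: instead of hand-coding the rule
# "spam if the message mentions both 'free' and 'winner'", we let a model
# find it from labelled examples. Toy data, purely illustrative.
from sklearn.tree import DecisionTreeClassifier

# features: [mentions "free", mentions "winner"]
X = [[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]]
y = [1, 0, 0, 0, 1, 0]  # 1 = spam

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[1, 1], [1, 0]]))  # [1 0]: the learned rule matches the pattern
```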

Deep learning is a sub-field of machine learning. Deep learning does not mean the machine learns more in-depth knowledge; it means the machine uses different layers to learn from the data. The depth of the model is represented by the number of layers in the model. For instance, Google's GoogLeNet model for image recognition counts 22 layers.

In deep learning, the learning phase is done through a neural network. A neural network is an architecture where the layers are stacked on top of each other.
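As an illustration of "layers stacked on top of each other," here is a minimal Keras sketch of a three-layer network; the framework choice and layer sizes are assumptions of mine, since the tutorial does not prescribe any particular library.

```python
# Minimal sketch of "layers stacked on top of each other", using Keras purely
# as an illustration (the article does not prescribe a framework). Three dense
# layers; the "depth" of the model is simply how many such layers it has.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(784,)),             # e.g. a flattened 28x28 image
    layers.Dense(128, activation="relu"),   # first hidden layer
    layers.Dense(64, activation="relu"),    # second hidden layer
    layers.Dense(10, activation="softmax")  # output: 10 class probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()  # prints the stack of layers and their parameter counts
```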

Most of our smartphones, daily devices, and even the internet use artificial intelligence. Very often, AI and machine learning are used interchangeably by big companies that want to announce their latest innovation. However, machine learning and AI are different in some ways.

AI (artificial intelligence) is the science of training machines to perform human tasks. The term was invented in the 1950s when scientists began exploring how computers could solve problems on their own.

Artificial intelligence is a computer given human-like properties. Take our brain: it works effortlessly and seamlessly to make sense of the world around us. Artificial intelligence is the concept that a computer can do the same. It can be said that AI is the broad science that mimics human aptitudes.

Machine learning is a distinct subset of AI that trains a machine how to learn. Machine learning models look for patterns in data and try to draw conclusions. In a nutshell, the machine does not need to be explicitly programmed by people. The programmers give some examples, and the computer learns what to do from those samples.

AI has broad applications.

AI is used in all industries, from marketing and supply chain to finance and the food-processing sector. According to a McKinsey survey, financial services and high-tech communication are leading the AI fields.

Neural networks have been around since the nineties, with the seminal work of Yann LeCun. However, they started to become famous around the year 2012. Three critical factors explain their popularity: hardware, data, and algorithms.

Machine learning is an experimental field, meaning it needs data to test new ideas or approaches. With the boom of the internet, data became more easily accessible. In addition, giant companies like NVIDIA and AMD have developed high-performance graphics chips for the gaming market.

Hardware

In the last twenty years, the power of the CPU has exploded, allowing the user to train a small deep-learning model on any laptop. However, to train a deep-learning model for computer vision or other demanding tasks, you need a more powerful machine. Thanks to the investment of NVIDIA and AMD, a new generation of GPUs (graphical processing units) is available. These chips allow parallel computations: the machine can split the computations over several GPUs to speed up the calculations.

For instance, with an NVIDIA TITAN X it takes two days to train a model on ImageNet, against weeks for a traditional CPU. Besides, big companies use clusters of GPUs such as the NVIDIA Tesla K80 to train deep-learning models, because this helps to reduce data center costs and provide better performance.
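The practical upshot for a programmer is usually just a device change. The PyTorch sketch below (an assumption; the tutorial names no framework) runs the same model on a CPU or, when one is present, on a CUDA GPU, where the matrix arithmetic is parallelized:

```python
# Sketch of the GPU point above using PyTorch (an assumption; the article names
# no framework): the same code runs on CPU, but moving the model and data to a
# CUDA device lets the hardware parallelize the matrix arithmetic.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(1000, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
batch = torch.randn(512, 1000, device=device)   # a batch of 512 examples
logits = model(batch)                            # computed on the GPU if one is present
print(logits.shape, "on", device)
```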

Data

Deep learning provides the structure of the model, and the data is the fluid that brings it to life. Data powers artificial intelligence; without data, nothing can be done. The latest technologies have pushed the boundaries of data storage, and it is easier than ever to store a high volume of data in a data center.

The internet revolution has made data collection and distribution available to feed machine learning algorithms. If you are familiar with Flickr, Instagram, or any other app with images, you can guess their AI potential. There are millions of pictures with tags available on these websites. Those pictures can be used to train a neural network model to recognize an object in a picture without the need to manually collect and label the data.

Artificial intelligence combined with data is the new gold. Data is a unique competitive advantage that no firm should neglect, and AI provides the best answers from your data. When all firms have access to the same technologies, the one with the data will have a competitive advantage over the others. To give an idea of the scale, the world creates about 2.2 exabytes, or 2.2 billion gigabytes, of data every day.

A company needs exceptionally diverse data sources, in substantial volume, to be able to find patterns and learn from them.

Algorithm

Hardware is more powerful than ever and data is easily accessible, but one thing that makes neural networks more reliable is the development of more accurate algorithms. Early neural networks were simple matrix multiplications without in-depth statistical properties. Since 2010, remarkable discoveries have been made that improve neural networks.

Artificial intelligence uses progressive learning algorithms to let the data do the programming. This means the computer can teach itself how to perform different tasks, such as finding anomalies or acting as a chatbot.

Summary

Artificial intelligence and machine learning are two easily confused terms. Artificial intelligence is the science of training machines to imitate or reproduce human tasks. A scientist can use different methods to train a machine. In AI's early days, programmers wrote hard-coded programs, typing out every logical possibility the machine could face and how to respond. When a system grows complex, it becomes difficult to manage such rules. To overcome this issue, the machine can instead use data to learn how to handle all the situations in a given environment.

The most important requirement for a powerful AI is to have enough data with considerable heterogeneity. For example, a machine can learn different languages as long as it has enough words to learn from.

AI is the new cutting-edge technology. Venture capitalists are investing billions of dollars in AI startups and projects, and McKinsey estimates AI can boost every industry by at least a double-digit growth rate.


A Brief History of Artificial Intelligence | Live Science

The idea of inanimate objects coming to life as intelligent beings has been around for a long time. The ancient Greeks had myths about robots, and Chinese and Egyptian engineers built automatons.

The beginnings of modern AI can be traced to classical philosophers' attempts to describe human thinking as a symbolic system. But the field of AI wasn't formally founded until 1956, at a conference at Dartmouth College, in Hanover, New Hampshire, where the term "artificial intelligence" was coined.

MIT cognitive scientist Marvin Minsky and others who attended the conference were extremely optimistic about AI's future. "Within a generation [...] the problem of creating 'artificial intelligence' will substantially be solved," Minsky is quoted as saying in the book "AI: The Tumultuous Search for Artificial Intelligence" (Basic Books, 1994).

But achieving an artificially intelligent being wasn't so simple. After several reports criticizing progress in AI, government funding and interest in the field dropped off during a period from 1974 to 1980 that became known as the "AI winter." The field later revived in the 1980s when the British government started funding it again, in part to compete with efforts by the Japanese.

The field experienced another major winter from 1987 to 1993, coinciding with the collapse of the market for some of the early general-purpose computers, and reduced government funding.

But research began to pick up again after that, and in 1997, IBM's Deep Blue became the first computer to beat a chess champion when it defeated Russian grandmaster Garry Kasparov. And in 2011, the computer giant's question-answering system Watson won the quiz show "Jeopardy!" by beating reigning champions Brad Rutter and Ken Jennings.

This year, the talking computer "chatbot" Eugene Goostman captured headlines for tricking judges into thinking he was a real flesh-and-blood human during a Turing test, a competition developed by British mathematician and computer scientist Alan Turing in 1950 as a way to assess whether a machine is intelligent.

But the accomplishment has been controversial, with artificial intelligence experts saying that only a third of the judges were fooled, and pointing out that the bot was able to dodge some questions by claiming it was an adolescent who spoke English as a second language.

Many experts now believe the Turing test isn't a good measure of artificial intelligence.

"The vast majority of people in AI who've thought about the matter, for the most part, think it's a very poor test, because it only looks at external behavior," Perlis told Live Science.

In fact, some scientists now plan to develop an updated version of the test. But the field of AI has become much broader than just the pursuit of true, humanlike intelligence.

