7 Ways An Artificial Intelligence Future Will Change The …

"[AI] is going to change the world more than anything in the history of mankind. More than electricity," AI oracle and venture capitalist Dr. Kai-Fu Lee said in 2018.

In a nondescript building close to downtown Chicago, Marc Gyongyosi and the small but growing crew of IFM/Onetrack.AI have one rule that rules them all: think simple. The words are written in simple font on a simple sheet of paper that's stuck to a rear upstairs wall of their industrial two-story workspace. What they're doing here with artificial intelligence, however, isn't simple at all.

Sitting at his cluttered desk, located near an oft-used ping-pong table and prototypes of drones from his college days suspended overhead, Gyongyosi punches some keys on a laptop to pull up grainy video footage of a forklift driver operating his vehicle in a warehouse. It was captured from overhead courtesy of a Onetrack.AI forklift vision system.

Artificial intelligence is impacting the future of virtually every industry and every human being. Artificial intelligence has acted as the main driver of emerging technologies like big data, robotics and IoT, and it will continue to act as a technological innovator for the foreseeable future.

Employing machine learning and computer vision for detection and classification of various safety events, the shoebox-sized device doesn't see all, but it sees plenty. Like which way the driver is looking as he operates the vehicle, how fast he's driving, where he's driving, locations of the people around him and how other forklift operators are maneuvering their vehicles. IFM's software automatically detects safety violations (for example, cell phone use) and notifies warehouse managers so they can take immediate action. The main goals are to prevent accidents and increase efficiency. The mere knowledge that one of IFM's devices is watching, Gyongyosi claims, has had a huge effect.

"If you think about a camera, it really is the richest sensor available to us today at a very interesting price point," he says. "Because of smartphones, camera and image sensors have become incredibly inexpensive, yet we capture a lot of information. From an image, we might be able to infer 25 signals today, but six months from now we'll be able to infer 100 or 150 signals from that same image. The only difference is the software that's looking at the image. And that's why this is so compelling, because we can offer a very important core feature set today, but then over time all our systems are learning from each other. Every customer is able to benefit from every other customer that we bring on board because our systems start to see and learn more processes and detect more things that are important and relevant."

IFM is just one of countless AI innovators in a field that's hotter than ever and getting more so all the time. Here's a good indicator: Of the 9,100 patents received by IBM inventors in 2018, 1,600 (or nearly 18 percent) were AI-related. Here's another: Tesla founder and tech titan Elon Musk recently donated $10 million to fund ongoing research at the non-profit research company OpenAI, a mere drop in the proverbial bucket if his $1 billion co-pledge in 2015 is any indication. And in 2017, Russian president Vladimir Putin told schoolchildren that "Whoever becomes the leader in this sphere [AI] will become the ruler of the world." He then tossed his head back and laughed maniacally.

OK, that last thing is false. This, however, is not: After more than seven decades marked by hoopla and sporadic dormancy during a multi-wave evolutionary period that began with so-called knowledge engineering, progressed to model- and algorithm-based machine learning, and is increasingly focused on perception, reasoning and generalization, AI has re-taken center stage as never before. And it won't cede the spotlight anytime soon.

There's virtually no major industry that modern AI (more specifically, "narrow AI," which performs objective functions using data-trained models and often falls into the categories of deep learning or machine learning) hasn't already affected. That's especially true in the past few years, as data collection and analysis have ramped up considerably thanks to robust IoT connectivity, the proliferation of connected devices and ever-speedier computer processing.

Some sectors are at the start of their AI journey; others are veteran travelers. Both have a long way to go. Regardless, the impact artificial intelligence is having on our present-day lives is hard to ignore.

But those advances (and numerous others, including this crop of new ones) are only the beginning; there's much more to come, more than anyone, even the most prescient prognosticators, can fathom.

"I think anybody making assumptions about the capabilities of intelligent software capping out at some point is mistaken," says David Vandegrift, CTO and co-founder of the customer relationship management firm 4Degrees.

With companies spending nearly $20 billion collectively on AI products and services annually, tech giants like Google, Apple, Microsoft and Amazon spending billions to create those products and services, universities making AI a more prominent part of their respective curricula (MIT alone is dropping $1 billion on a new college devoted solely to computing, with an AI focus), and the U.S. Department of Defense upping its AI game, big things are bound to happen. Some of those developments are well on their way to being fully realized; some are merely theoretical and might remain so. All are disruptive, for better and potentially worse, and there's no downturn in sight.

"Lots of industries go through this pattern of winter, winter, and then an eternal spring," former Google Brain leader and Baidu chief scientist Andrew Ng told ZDNet late last year. "We may be in the eternal spring of AI."

During a lecture last fall at Northwestern University, AI guru Kai-Fu Lee championed AI technology and its forthcoming impact while also noting its side effects and limitations. Of the former, he warned:

"The bottom 90 percent, especially the bottom 50 percent of the world in terms of income or education, will be badly hurt with job displacement ... The simple question to ask is, 'How routine is a job?' And that is how likely [it is] a job will be replaced by AI, because AI can, within the routine task, learn to optimize itself. And the more quantitative, the more objective the job is ... separating things into bins, washing dishes, picking fruits and answering customer service calls ... those are very much scripted tasks that are repetitive and routine in nature. In the matter of five, 10 or 15 years, they will be displaced by AI."

In the warehouses of online giant and AI powerhouse Amazon, which buzz with more than 100,000 robots, picking and packing functions are still performed by humans, but that will change.

Lee's opinion was recently echoed by Infosys president Mohit Joshi, who at this year's Davos gathering told the New York Times, "People are looking to achieve very big numbers. Earlier they had incremental, 5 to 10 percent goals in reducing their workforce. Now they're saying, 'Why can't we do it with 1 percent of the people we have?'"

On a more upbeat note, Lee stressed that today's AI is useless in two significant ways: it has no creativity and no capacity for compassion or love. Rather, it's a tool to amplify human creativity. His solution? Those with jobs that involve repetitive or routine tasks must learn new skills so as not to be left by the wayside. Amazon even offers its employees money to train for jobs at other companies.

"One of the absolute prerequisites for AI to be successful in many [areas] is that we invest tremendously in education to retrain people for new jobs," says Klara Nahrstedt, a computer science professor at the University of Illinois at Urbana-Champaign and director of the school's Coordinated Science Laboratory.

She's concerned that's not happening widely or often enough. IFM's Gyongyosi is even more specific.

"People need to learn about programming like they learn a new language," he says, "and they need to do that as early as possible because it really is the future. In the future, if you don't know coding, you don't know programming, it's only going to get more difficult."


And while many of those who are forced out of jobs by technology will find new ones, Vandegrift says, that won't happen overnight. As with America's transition from an agricultural to an industrial economy, people eventually got back on their feet. The short-term impact, however, was massive.

"The transition between jobs going away and new ones [emerging]," Vandegrift says, "is not necessarily as painless as people like to think."


Mike Mendelson, a learner experience designer for NVIDIA, is a different kind of educator than Nahrstedt. He works with developers who want to learn more about AI and apply that knowledge to their businesses.

"If they understand what the technology is capable of and they understand the domain very well, they start to make connections and say, 'Maybe this is an AI problem, maybe that's an AI problem,'" he says. "That's more often the case than 'I have a specific problem I want to solve.'"

In Mendelson's view, some of the most intriguing AI research and experimentation that will have near-future ramifications is happening in two areas: reinforcement learning, which deals in rewards and punishment rather than labeled data; and generative adversarial networks (GANs for short), which allow computer algorithms to create rather than merely assess by pitting two networks against each other. The former is exemplified by the Go-playing prowess of Google DeepMind's AlphaGo Zero, the latter by original image or audio generation that's based on learning about a certain subject like celebrities or a particular type of music.
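To make the reinforcement-learning idea concrete, here is a minimal, hypothetical sketch, invented for illustration (AlphaGo Zero itself combines deep networks with tree search and is vastly more sophisticated): tabular Q-learning on a toy five-state corridor, where the agent learns from a reward signal alone rather than labeled data.

```python
import random

# Toy environment: states 0..4 in a corridor; state 4 is the goal.
# The agent earns +1 only for reaching the goal -- no labeled examples.
N_STATES = 5
ACTIONS = [-1, +1]                   # step left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1    # learning rate, discount, exploration

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose(s):
    """Epsilon-greedy action selection with random tie-breaking."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(500):
    s = 0
    for step in range(200):          # cap episode length
        a = choose(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: move the estimate toward reward plus
        # discounted value of the best next action.
        target = r + GAMMA * max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s_next
        if s == N_STATES - 1:
            break

# After training, the learned policy moves right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Nothing here is "told" the right answer; the rightward preference emerges purely because rewarded paths raise the value estimates behind them, which is the essence of learning by reward and punishment that Mendelson describes.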

On a far grander scale, AI is poised to have a major effect on sustainability, climate change and environmental issues. Ideally, and partly through the use of sophisticated sensors, cities will become less congested, less polluted and generally more livable. Inroads are already being made.

"Once you predict something, you can prescribe certain policies and rules," Nahrstedt says. Sensors on cars that send data about traffic conditions, for instance, could predict potential problems and optimize the flow of cars. "This is not yet perfected by any means," she says. "It's just in its infancy. But years down the road, it will play a really big role."

Of course, much has been made of the fact that AI's reliance on big data is already impacting privacy in a major way. Look no further than Cambridge Analytica's Facebook shenanigans or Amazon's Alexa eavesdropping, two among many examples of tech gone wild. Without proper regulations and self-imposed limitations, critics argue, the situation will get even worse. In 2015, Apple CEO Tim Cook derided competitors Google and Facebook (surprise!) for greed-driven data mining.

"They're gobbling up everything they can learn about you and trying to monetize it," he said in a 2015 speech. "We think that's wrong."

Last fall, during a talk in Brussels, Belgium, Cook expounded on his concern.

"Advancing AI by collecting huge personal profiles is laziness, not efficiency," he said. "For artificial intelligence to be truly smart, it must respect human values, including privacy. If we get this wrong, the dangers are profound."


Plenty of others agree. In a paper published recently by UK-based human rights and privacy groups Article 19 and Privacy International, anxiety about AI is reserved for its everyday functions rather than a cataclysmic shift like the advent of robot overlords.

"If implemented responsibly, AI can benefit society," the authors write. "However, as is the case with most emerging technology, there is a real risk that commercial and state use has a detrimental impact on human rights. In particular, applications of these technologies frequently rely on the generation, collection, processing, and sharing of large amounts of data, both about individual and collective behavior. This data can be used to profile individuals and predict future behavior. While some of these uses, like spam filters or suggested items for online shopping, may seem benign, others can have more serious repercussions and may even pose unprecedented threats to the right to privacy and the right to freedom of expression and information ('freedom of expression'). The use of AI can also impact the exercise of a number of other rights, including the right to an effective remedy, the right to a fair trial, and the right to freedom from discrimination."

Speaking at London's Westminster Abbey in late November of 2018, internationally renowned AI expert Stuart Russell joked (or not) about his formal agreement with journalists that "I won't talk to them unless they agree not to put a Terminator robot in the article." His quip revealed an obvious contempt for Hollywood representations of far-future AI, which tend toward the overwrought and apocalyptic. What Russell referred to as "human-level AI," also known as artificial general intelligence, has long been fodder for fantasy. But the chances of its being realized anytime soon, or at all, are pretty slim. The machines almost certainly won't rise (sorry, Dr. Russell) during the lifetime of anyone reading this story.

"There are still major breakthroughs that have to happen before we reach anything that resembles human-level AI," Russell explained. "One example is the ability to really understand the content of language so we can translate between languages using machines ... When humans do machine translation, they understand the content and then express it. And right now machines are not very good at understanding the content of language. If that goal is reached, we would have systems that could then read and understand everything the human race has ever written, and this is something that a human being can't do ... Once we have that capability, you could then query all of human knowledge and it would be able to synthesize and integrate and answer questions that no human being has ever been able to answer because they haven't read and been able to put together and join the dots between things that have remained separate throughout history."

That's a mouthful. And a mind full. On the subject of which, emulating the human brain is exceedingly difficult and yet another reason for AGI's still-hypothetical future. Longtime University of Michigan engineering and computer science professor John Laird has conducted research in the field for several decades.

"The goal has always been to try to build what we call the cognitive architecture, what we think is innate to an intelligence system," he says of work that's largely inspired by human psychology. "One of the things we know, for example, is the human brain is not really just a homogenous set of neurons. There's a real structure in terms of different components, some of which are associated with knowledge about how to do things in the world."

That's called procedural memory. Then there's knowledge based on general facts, a.k.a. semantic memory, as well as knowledge about previous experiences (or personal facts) that's called episodic memory. One of the projects at Laird's lab involves using natural language instructions to teach a robot simple games like Tic-Tac-Toe and puzzles. Those instructions typically involve a description of the goal, a rundown of legal moves and failure situations. The robot internalizes those directives and uses them to plan its actions. As ever, though, breakthroughs are slow to come; slower, anyway, than Laird and his fellow researchers would like.
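The three pieces those instructions supply, a goal, the legal moves, and the failure situations, can be sketched as a data structure a planner searches over. This is a loose, invented illustration (not the Soar cognitive architecture Laird's group actually builds), using a trivial made-up game of drawing tokens from a shared pile:

```python
# Hypothetical game description: first player to hold 3 tokens wins.
# Each field mirrors one part of an instructed game: goal test,
# legal-move generator, and failure test.
game = {
    "goal":    lambda state: state["mine"] >= 3,
    "moves":   lambda state: [1, 2] if state["pile"] >= 2
                             else ([1] if state["pile"] == 1 else []),
    "failure": lambda state: state["pile"] == 0 and state["mine"] < 3,
}

def plan(state, game, depth=6):
    """Brute-force lookahead: return a move sequence reaching the goal,
    or None if no sequence within `depth` moves avoids failure."""
    if game["goal"](state):
        return []
    if depth == 0 or game["failure"](state):
        return None
    for m in game["moves"](state):
        nxt = {"pile": state["pile"] - m, "mine": state["mine"] + m}
        rest = plan(nxt, game, depth - 1)
        if rest is not None:
            return [m] + rest
    return None

# Starting with 4 tokens in the pile and none in hand, the planner
# finds a sequence of draws that reaches the goal.
print(plan({"pile": 4, "mine": 0}, game))
```

The hard part of Laird's research is not this search step but getting from free-form natural language to a structured description like `game` in the first place; the sketch only shows what the internalized directives might look like once that translation is done.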

"Every time we make progress," he says, "we also get a new appreciation for how hard it is."

More than a few leading AI figures subscribe (some more hyperbolically than others) to a nightmare scenario that involves what's known as the singularity, whereby superintelligent machines take over and permanently alter human existence through enslavement or eradication.

The late theoretical physicist Stephen Hawking famously postulated that if AI itself begins designing better AI than human programmers, the result could be machines whose intelligence exceeds ours by more than ours exceeds that of snails. Elon Musk believes, and has for years warned, that AGI is humanity's biggest existential threat. Efforts to bring it about, he has said, are like "summoning the demon." He has even expressed concern that his pal, Google co-founder and Alphabet CEO Larry Page, could accidentally shepherd something evil into existence despite his best intentions. Say, for example, a fleet of artificial intelligence-enhanced robots capable of destroying mankind. (Musk, you might know, has a flair for the dramatic.) Even IFM's Gyongyosi, no alarmist when it comes to AI predictions, rules nothing out. At some point, he says, humans will no longer need to train systems; they'll learn and evolve on their own.

"I don't think the methods we use currently in these areas will lead to machines that decide to kill us," he says. "I think that maybe five or ten years from now, I'll have to reevaluate that statement because we'll have different methods available and different ways to go about these things."

While murderous machines may well remain fodder for fiction, many believe they'll supplant humans in various ways.

Last spring, Oxford University's Future of Humanity Institute published the results of an AI survey. Titled "When Will AI Exceed Human Performance? Evidence from AI Experts," it contains estimates from 352 machine learning researchers about AI's evolution in years to come. There were lots of optimists in this group. By 2026, a median number of respondents said, machines will be capable of writing school essays; by 2027 self-driving trucks will render drivers unnecessary; by 2031 AI will outperform humans in the retail sector; by 2049 AI could be the next Stephen King; and by 2053, the next Charlie Teo. The slightly jarring capper: by 2137, all human jobs will be automated. But what of humans themselves? Sipping umbrella drinks served by droids, no doubt.

Diego Klabjan, a professor at Northwestern University and founding director of the school's Master of Science in Analytics program, counts himself an AGI skeptic.

"Currently, computers can handle a little more than 10,000 words," he explains. "So, a few million neurons. But human brains have billions of neurons that are connected in a very intriguing and complex way, and the current state-of-the-art [technology] is just straightforward connections following very easy patterns. So going from a few million neurons to billions of neurons with current hardware and software technologies, I don't see that happening."

Klabjan also puts little stock in extreme scenarios of the type involving, say, murderous cyborgs that turn the earth into a smoldering hellscape. He's much more concerned with machines (war robots, for instance) being fed faulty incentives by nefarious humans. As MIT physics professor and leading AI researcher Max Tegmark put it in a 2018 TED Talk, "The real threat from AI isn't malice, like in silly Hollywood movies, but competence: AI accomplishing goals that just aren't aligned with ours." That's Laird's take, too.

"I definitely don't see the scenario where something wakes up and decides it wants to take over the world," he says. "I think that's science fiction and not the way it's going to play out."

What Laird worries most about isn't evil AI, per se, but evil humans using AI as a sort of false force multiplier for things like bank robbery and credit card fraud, among many other crimes. And so, while he's often frustrated with the pace of progress, AI's slow burn may actually be a blessing.

"Time to understand what we're creating and how we're going to incorporate it into society," Laird says, "might be exactly what we need."

But no one knows for sure.

"There are several major breakthroughs that have to occur, and those could come very quickly," Russell said during his Westminster talk. Referencing the rapid transformational effect of nuclear physics after British physicist Ernest Rutherford first split the atom in 1917, he added, "It's very, very hard to predict when these conceptual breakthroughs are going to happen."

But whenever they do, if they do, he emphasized the importance of preparation. That means starting or continuing discussions about the ethical use of AGI and whether it should be regulated. That means working to eliminate data bias, which has a corrupting effect on algorithms and is currently a fat fly in the AI ointment. That means working to invent and augment security measures capable of keeping the technology in check. And it means having the humility to realize that just because we can doesn't mean we should.

"Our situation with technology is complicated, but the big picture is rather simple," Tegmark said during his TED Talk. "Most AGI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history. It could enable brutal global dictatorship with unprecedented inequality, surveillance, suffering and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody's better off: the poor are richer, the rich are richer, everybody's healthy and free to live out their dreams."


How Artificial Intelligence Is Totally Changing Everything – HowStuffWorks


Back in October 1950, British techno-visionary Alan Turing published an article called "Computing Machinery and Intelligence" in the journal Mind that raised what at the time must have seemed to many like a science-fiction fantasy.

"May not machines carry out something which ought to be described as thinking but which is very different from what a man does?" Turing asked.

Turing thought that they could. Moreover, he believed, it was possible to create software for a digital computer that enabled it to observe its environment and to learn new things, from playing chess to understanding and speaking a human language. And he thought machines eventually could develop the ability to do that on their own, without human guidance. "We may hope that machines will eventually compete with men in all purely intellectual fields," he predicted.

Nearly 70 years later, Turing's seemingly outlandish vision has become a reality. Artificial intelligence, commonly referred to as AI, gives machines the ability to learn from experience and perform cognitive tasks, the sort of stuff that once only the human brain seemed capable of doing.

AI is rapidly spreading throughout civilization, where it has the promise of doing everything from enabling autonomous vehicles to navigate the streets to making more accurate hurricane forecasts. On an everyday level, AI figures out what ads to show you on the web, and powers those friendly chatbots that pop up when you visit an e-commerce website to answer your questions and provide customer service. And AI-powered personal assistants in voice-activated smart home devices perform myriad tasks, from controlling our TVs and doorbells to answering trivia questions and helping us find our favorite songs.

But we're just getting started with it. As AI technology grows more sophisticated and capable, it's expected to massively boost the world's economy, creating about $13 trillion worth of additional activity by 2030, according to a McKinsey Global Institute forecast.

"AI is still early in adoption, but adoption is accelerating and it is being used across all industries," says Sarah Gates, an analytics platform strategist at SAS, a global software and services firm that focuses upon turning data into intelligence for clients.

It's even more amazing, perhaps, that our existence is quietly being transformed by a technology that many of us barely understand, if at all: something so complex that even scientists have a tricky time explaining it.

"AI is a family of technologies that perform tasks that are thought to require intelligence if performed by humans," explains Vasant Honavar, a professor and director of the Artificial Intelligence Research Laboratory at Penn State University, in an email interview. "I say 'thought,' because nobody is really quite sure what intelligence is."

Honavar describes two main categories of intelligence. There's narrow intelligence, which is achieving competence in a narrowly defined domain, such as analyzing images from X-rays and MRI scans in radiology. General intelligence, in contrast, is a more human-like ability to learn about anything and to talk about it. "A machine might be good at some diagnoses in radiology, but if you ask it about baseball, it would be clueless," Honavar explains. Humans' intellectual versatility "is still beyond the reach of AI at this point."

According to Honavar, there are two key pieces to AI. One of them is the engineering part: building tools that utilize intelligence in some way. The other is the science of intelligence, or rather, how to enable a machine to come up with a result comparable to what a human brain would come up with, even if the machine achieves it through a very different process. To use an analogy, "birds fly and airplanes fly, but they fly in completely different ways," Honavar says. "Even so, they both make use of aerodynamics and physics. In the same way, artificial intelligence is based upon the notion that there are general principles about how intelligent systems behave."

AI is "basically the results of our attempting to understand and emulate the way that the brain works and the application of this to giving brain-like functions to otherwise autonomous systems (e.g., drones, robots and agents)," Kurt Cagle, a writer, data scientist and futurist who's the founder of consulting firm Semantical, writes in an email. He's also editor of The Cagle Report, a daily information technology newsletter.

And while humans don't really think like computers, which utilize circuits, semiconductors and magnetic media instead of biological cells to store information, there are some intriguing parallels. "One thing we're beginning to discover is that graph networks are really interesting when you start talking about billions of nodes, and the brain is essentially a graph network, albeit one where you can control the strengths of processes by varying the resistance of neurons before a capacitive spark fires," Cagle explains. "A single neuron by itself gives you a very limited amount of information, but fire enough neurons of varying strengths together, and you end up with a pattern that gets fired only in response to certain kinds of stimuli, typically modulated electrical signals through the DSPs [that is, digital signal processing] that we call our retina and cochlea."

"Most applications of AI have been in domains with large amounts of data," Honavar says. To use the radiology example again, the existence of large databases of X-rays and MRI scans that have been evaluated by human radiologists makes it possible to train a machine to emulate that activity.

AI works by combining large amounts of data with intelligent algorithms: series of instructions that allow the software to learn from patterns and features of the data, as this SAS primer on artificial intelligence explains.

In simulating the way a brain works, AI utilizes a bunch of different subfields, as the SAS primer notes.

The concept of AI dates back to the 1940s, and the term "artificial intelligence" was introduced at a 1956 conference at Dartmouth College. Over the next two decades, researchers developed programs that played games and did simple pattern recognition and machine learning. Cornell University scientist Frank Rosenblatt developed the Perceptron, the first artificial neural network, which ran on a 5-ton (4.5-metric ton), room-sized IBM computer that was fed punch cards.
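The perceptron's learning rule is simple enough to reproduce in a few lines. The sketch below is a minimal modern-Python rendering rather than anything from Rosenblatt's punch-card setup, and the AND function is a stand-in training task chosen for illustration: whenever a prediction is wrong, each weight is nudged in proportion to the error and its input.

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Learn a linear threshold rule from labeled (inputs, target) pairs."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred         # -1, 0, or +1
            # Rosenblatt's update: shift weights toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Stand-in task: learn logical AND, which is linearly separable.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in AND])
```

A single perceptron can only draw one straight line through its inputs, which is exactly the limitation that motivated the multilayer networks of the mid-1980s described next.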

But it wasn't until the mid-1980s that a second wave of more complex, multilayer neural networks was developed to tackle higher-level tasks, according to Honavar. In the early 1990s, another breakthrough enabled AI to generalize beyond the training experience.

In the 1990s and 2000s, other technological innovations, among them the web and increasingly powerful computers, helped accelerate the development of AI. "With the advent of the web, large amounts of data became available in digital form," Honavar says. "Genome sequencing and other projects started generating massive amounts of data, and advances in computing made it possible to store and access this data. We could train the machines to do more complex tasks. You couldn't have had a deep learning model 30 years ago, because you didn't have the data and the computing power."

AI is different from, but related to, robotics, in which machines sense their environment, perform calculations and do physical tasks either by themselves or under the direction of people, from factory work and cooking to landing on other planets. Honavar says that the two fields intersect in many ways.

"You can imagine robotics without much intelligence, purely mechanical devices like automated looms," Honavar says. "There are examples of robots that are not intelligent in a significant way." Conversely, there's robotics where intelligence is an integral part, such as guiding an autonomous vehicle around streets full of human-driven cars and pedestrians.

"It's a reasonable argument that to realize general intelligence, you would need robotics to some degree, because interaction with the world, to some degree, is an important part of intelligence," according to Honavar. "To understand what it means to throw a ball, you have to be able to throw a ball."

AI quietly has become so ubiquitous that it's already found in many consumer products.

"A huge number of devices that fall within the Internet of Things (IoT) space readily use some kind of self-reinforcing AI, albeit very specialized AI," Cagle says. "Cruise control was an early AI and is far more sophisticated when it works than most people realize. Noise dampening headphones. Anything that has a speech recognition capability, such as most contemporary television remotes. Social media filters. Spam filters. If you expand AI to cover machine learning, this would also include spell checkers, text-recommendation systems, really any recommendation system, washers and dryers, microwaves, dishwashers, really most home electronics produced after 2017, speakers, televisions, anti-lock braking systems, any electric vehicle, modern CCTV cameras. Most games use AI networks at many different levels."

AI already can outperform humans in some narrow domains, just as "airplanes can fly longer distances, and carry more people than a bird could," Honavar says. AI, for example, is capable of processing millions of social media network interactions and gaining insights that can influence users' behavior an ability that the AI expert worries may have "not so good consequences."

It's particularly good at making sense of massive amounts of information that would overwhelm a human brain. That capability enables internet companies, for example, to analyze the mountains of data that they collect about users and employ the insights in various ways to influence our behavior.

But AI hasn't made as much progress so far in replicating human creativity, Honavar notes, though the technology already is being utilized to compose music and write news articles based on data from financial reports and election returns.

Given AI's potential to do tasks that used to require humans, it's easy to fear that its spread could put most of us out of work. But some experts envision that while the combination of AI and robotics could eliminate some positions, it will create even more new jobs for tech-savvy workers.

"Those most at risk are those doing routine and repetitive tasks in retail, finance and manufacturing," Darrell West, a vice president and founding director of the Center for Technology Innovation at the Brookings Institution, a Washington-based public policy organization, explains in an email. "But white-collar jobs in health care will also be affected and there will be an increase in job churn with people moving more frequently from job to job. New jobs will be created but many people will not have the skills needed for those positions. So the risk is a job mismatch that leaves people behind in the transition to a digital economy. Countries will have to invest more money in job retraining and workforce development as technology spreads. There will need to be lifelong learning so that people regularly can upgrade their job skills."

And instead of replacing human workers, AI may be used to enhance their intellectual capabilities. Inventor and futurist Ray Kurzweil has predicted that by the 2030s, AI will have achieved human levels of intelligence, and that it will be possible to have AI that goes inside the human brain to boost memory, turning users into human-machine hybrids. As Kurzweil has described it, "We're going to expand our minds and exemplify these artistic qualities that we value."

Read more:

How Artificial Intelligence Is Totally Changing Everything - HowStuffWorks

What Is The Artificial Intelligence Of Things? When AI Meets IoT – Forbes

Individually, the Internet of Things (IoT) and Artificial Intelligence (AI) are powerful technologies. When you combine AI and IoT, you get AIoT, the artificial intelligence of things. You can think of internet of things devices as the digital nervous system, while artificial intelligence is the brain of the system.

What Is The Artificial Intelligence Of Things? When AI Meets IoT

What is AIoT?

To fully understand AIoT, you must start with the internet of things. When things such as wearable devices, refrigerators, digital assistants, sensors and other equipment are connected to the internet, can be recognized by other devices, and can collect and process data, you have the internet of things. Artificial intelligence is when a system can complete a set of tasks or learn from data in a way that seems intelligent. Therefore, when artificial intelligence is added to the internet of things, those devices can analyze data, make decisions, and act on that data without human involvement.
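
The collect-analyze-act cycle described above can be sketched in a few lines. This is a toy illustration, not a real device: the sensor readings, thresholds and action names are all invented, and a simple rule stands in for the "intelligence" layer.

```python
def analyze(readings):
    """Stand-in for the AI layer: decide an action from raw sensor data."""
    avg = sum(readings) / len(readings)
    if avg > 26.0:
        return "cooling_on"
    if avg < 18.0:
        return "heating_on"
    return "idle"

def aiot_step(readings, actuator_log):
    """One collect-analyze-act cycle of a connected device."""
    action = analyze(readings)
    actuator_log.append(action)  # the "act" step, with no human in the loop
    return action

log = []
print(aiot_step([27.1, 28.0, 26.5], log))  # hot room -> "cooling_on"
```

The point of the sketch is the shape of the loop: data flows in from the "nervous system" (sensors) and the "brain" (the analysis step) acts on it directly.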

These are "smart" devices, and they help drive efficiency and effectiveness. The intelligence of AIoT enables data analytics that can then be used to optimize a system, generate higher performance and business insights, and produce data that supports better decisions and that the system can learn from.

Practical Examples of AIoT

The combo of internet of things and smart systems makes AIoT a powerful and important tool for many applications. Here are a few:

Smart Retail

In a smart retail environment, a camera system equipped with computer vision capabilities can use facial recognition to identify customers when they walk through the door. The system gathers intel about customers, including their gender, product preferences and traffic flow, analyzes the data to accurately predict consumer behavior, and then uses that information to make decisions about store operations, from marketing to product placement. For example, if the system detects that the majority of customers walking into the store are Millennials, it can push out product advertisements or in-store specials that appeal to that demographic, thereby driving up sales. Smart cameras could also identify shoppers and allow them to skip the checkout, as happens in the Amazon Go store.
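
The "detect the majority demographic, then target it" decision in that example reduces to a simple vote over labels. A hypothetical sketch, with made-up demographic labels and promotions standing in for whatever a real vision system and marketing backend would use:

```python
from collections import Counter

PROMOTIONS = {  # invented promotion table for illustration
    "millennial": "in-store specials on tech accessories",
    "gen_x": "loyalty-card discounts",
    "boomer": "classic-brand offers",
}

def pick_promotion(detected_shoppers):
    """Return the promotion targeting the most common detected demographic."""
    majority, _ = Counter(detected_shoppers).most_common(1)[0]
    return PROMOTIONS.get(majority, "general storewide promotion")

shoppers = ["millennial", "millennial", "gen_x", "millennial", "boomer"]
print(pick_promotion(shoppers))  # -> "in-store specials on tech accessories"
```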

Drone Traffic Monitoring

In a smart city, there are several practical uses of AIoT, including traffic monitoring by drones. If traffic can be monitored in real-time and adjustments to the traffic flow can be made, congestion can be reduced. When drones are deployed to monitor a large area, they can transmit traffic data, and then AI can analyze the data and make decisions about how to best alleviate traffic congestion with adjustments to speed limits and timing of traffic lights without human involvement.
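
The feedback loop described above, drones reporting load and the system retiming signals, can be sketched as a proportional split of a fixed signal cycle. The vehicle counts, cycle length and minimum green time here are invented numbers, not a real traffic-engineering model:

```python
def green_times(counts, cycle=60, minimum=10):
    """Split a signal cycle among approaches, proportional to reported load."""
    total = sum(counts.values())
    return {
        road: max(minimum, round(cycle * n / total))  # busy roads get longer greens
        for road, n in counts.items()
    }

# Drone-reported vehicle counts per approach (hypothetical):
print(green_times({"north": 40, "south": 10, "east": 10}))
```

Each new batch of drone data would simply produce a new timing plan, with no human in the loop.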

The ET City Brain, a product of Alibaba Cloud, optimizes the use of urban resources by using AIoT. This system can detect accidents and illegal parking, and can change traffic lights to help ambulances reach patients who need assistance faster.

Office Buildings

Another area where artificial intelligence and the internet of things intersect is in smart office buildings. Some companies choose to install a network of smart environmental sensors in their office building. These sensors can detect what personnel are present and adjust temperatures and lighting accordingly to improve energy efficiency. In another use case, a smart building can control building access through facial recognition technology. The combination of connected cameras and artificial intelligence that can compare images taken in real-time against a database to determine who should be granted access to a building is AIoT at work. In a similar way, employees wouldn't need to clock in, and attendance at mandatory meetings wouldn't have to be recorded, since the AIoT system takes care of it.
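
The "compare a camera image against a database" step typically means comparing a face embedding (a vector produced by a trained model) against enrolled embeddings and granting access only within some distance threshold. A simplified sketch, where the enrolled vectors and the threshold are toy values rather than real learned embeddings:

```python
import math

ENROLLED = {  # hypothetical enrolled-employee embeddings
    "alice": (0.10, 0.80, 0.30),
    "bob": (0.90, 0.20, 0.50),
}

def identify(embedding, threshold=0.25):
    """Return the closest enrolled name, or None if nobody is near enough."""
    best_name, best_dist = None, float("inf")
    for name, ref in ENROLLED.items():
        dist = math.dist(embedding, ref)  # Euclidean distance between vectors
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(identify((0.12, 0.78, 0.29)))  # close to alice -> grant access as "alice"
print(identify((0.50, 0.50, 0.99)))  # nobody close -> None, deny access
```

The same lookup would drive the clock-in and meeting-attendance use cases: an identification event is itself the attendance record.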

Fleet Management and Autonomous Vehicles

AIoT is used in fleet management today to help monitor a fleet's vehicles, reduce fuel costs, track vehicle maintenance and identify unsafe driver behavior. Through IoT devices such as GPS units and other sensors combined with an artificial intelligence system, companies are able to manage their fleets better thanks to AIoT.

Another way AIoT is used today is in autonomous vehicles such as Tesla's Autopilot system, which uses radar, sonar, GPS and cameras to gather data about driving conditions, and an AI system to make decisions based on the data those internet of things devices gather.

Autonomous Delivery Robots

Similar to how AIoT is used in autonomous vehicles, autonomous delivery robots are another example of AIoT in action. These robots have sensors that gather information about the environment they are traversing, and their onboard AI platform makes moment-to-moment decisions about how to respond.

Read more from the original source:

What Is The Artificial Intelligence Of Things? When AI Meets IoT - Forbes

AI (Artificial Intelligence): What We Can Expect In The New Year – Forbes


As I covered in a recent post for Forbes.com, this year has seen notable breakthroughs in AI (artificial intelligence). They have included innovations in algorithms, such as GANs (generative adversarial networks), as well as advances in categories like NLP (natural language processing), just to name a few.

Then what can we expect in 2020? Well, it seems likely that the innovations will continue at a rapid pace.

So here's a look at what we may see:

Anand Rao, the Global and US Artificial Intelligence Leader at PwC:

2020 will be the year of practical AI: using cool technology to solve boring problems. Business leaders are recalibrating their ambitions, with just 4% intending to scale AI across the organization. Instead, many are focusing on functional areas like finance, compliance, HR, and tax and universal pain points like extracting data from forms. In our survey, executives ranked using AI to operate more efficiently and increase productivity as the top-two benefits they expect from AI in the coming year.

Sanjeev Katariya, the VP/Chief Architect of eBay AI & Platforms:

From an ecommerce lens, AI will continue to grow, building adaptive and highly personalized markets and bridging borders while extending itself to places on the planet that need to see explosive growth, which in 2020 will gladly join the ecommerce revolution.

Michael Kopp, the Head of Data Science at HERE Technologies:

Deep Learning goes industrial. Dedicated DL chipsets are accelerating trial and error opportunities across industries, allowing diverse fields to build critical new models and AI components that solve real-world data problems.

Bryan Friehauf, the Executive Vice President and General Manager of Enterprise Software, ABB:

In 2020, AI will be the mainstream recommendation engine for the industrial sector. In energy management in particular, there is a huge opportunity. AI can provide facility managers with accurate power consumption predictions, which enables them to take timely action to reduce unplanned consumption spikes through rescheduling or switching off non-critical loads. AI will be the technology that takes simulations to the next level, helping to locate unstable areas of the grid and increase safety for workers in the field.

Steve Grobman, the Chief Technology Officer at McAfee:

In general, adversaries are going to use the best technology to accomplish their goals, so if we think about nation-state actors attempting to manipulate an election, using deepfake video to manipulate an audience makes a lot of sense. Adversaries will try to create wedges and divides in society.

Jake Saper, a Partner at Emergence Capital:

"In 2020, we will see the tech industry shift its focus away from using AI to drive automation and move it towards employing AI for augmentation. We'll realize that human-to-human jobs, which most often include dynamic input and feedback, are at their core still best performed by humans. In those cases, AI is ideally suited to augment, and not replace, human jobs."

Andy Ellis, the Chief Security Officer at Akamai:

What we'll see in many spaces is folks starting to understand the limitations of algorithmic solutions, especially where those create, amplify, or ossify bias in the world; and companies buying technologies will really need to start understanding how that bias impacts their operations.

Steve Wood, the Chief Product Officer at Boomi, a Dell Technologies business:

Overzealous data analyses have brought many companies face to face with privacy lawsuits from consumers and governments alike, which in turn has led to even stricter data governance laws. Understandably concerned about making similar mistakes, businesses will begin turning to metadata for insights in 2020, rather than analyzing actual data.

Jay Gurudevan, the Principal Product Manager of AI/ML at Twilio:

We'll see more enterprises and businesses leverage AI tools and automated communication to better understand the entire customer journey. As consumers become more comfortable interacting with AI agents, Natural Language Processing will become more accurate and advanced, and implementation will expand.

Avon Puri, the CIO of Rubrik:

An ecosystem of technologies will emerge that leverage intelligence, such as RPA technologies, and will provide new efficiencies in business processes that weren't possible before. Next year is when new intelligent technologies will really take off, and RPA will lead automated intelligence in the enterprise.

Umesh Sachdev, the CEO and co-founder of Uniphore:

Speech analytics tools were an important bridge to support automation, and the same AI aiding humans behind the scenes will aid bots and enable the era of platforms. In 2020, here's where we're going to see the most progress: anticipating intent by layering emotion and sincerity with historical data in real time. We'll be able to determine things like the likelihood of a person paying their past-due bill.

Rama Sekhar, a Venture Partner at Norwest:

2020 will usher in the year of AI in the Enterprise. AI will get an upgrade from being an ingredient to a first-class citizen as CIOs introduce AI-first initiatives, just as they adopted cloud-first initiatives five years ago. Companies will have to justify why they're not using AI in their own software, processes, and workflows in 2020.

Stefan Nandzik, the Vice President of Product & Brand Marketing at Signifyd:

In 2020, we'll see a spate of lawsuits filed by aggrieved consumers who have been wrongly barred from returning goods to retailers, or buying goods from ecommerce merchants, or renting home shares, or benefiting from Uber rides, by algorithmically driven screening schemes. And we'll see the first significant pieces of legislation codifying consumers' rights when it comes to AI, creating demand for liable machines.

Dr. Hossein Rahnama, the CEO of Flybits:

Startups are realizing that no matter how good their algorithm is, big companies aren't comfortable just handing over their sensitive datasets and core assets. So as the industry continues to mature over the next year, AI entrepreneurs will recognize that they have to shed their grad school mindset of "give me the data and I'll do my work," because that is no longer the case. This realization will force AI entrepreneurs to focus on more than just algorithms and shift their attention toward solidifying a data strategy that includes governance, management, encryption and tokenization. Because at the end of the day, without a strong data strategy, your AI strategy means nothing.

Chris Nicholson, the CEO of Pathmind:

One of the most promising areas of AI applications in 2020 will combine different, powerful forms of AI. Deep learning is used in a lot of perceptive tasks that answer the question: what am I looking at? For example, deep learning could recognize a grizzly bear in a photograph. Reinforcement learning is used in a lot of strategic tasks that answer the question: what should I do? For example, should I run away, stand in place or play dead? If you combine the two, then you get a powerful sequence of machine learning decisions you can combine. In this example: Given that I see a grizzly bear ahead of me, I should play dead. (Pro tip: grizzlies can run 35 miles per hour, but they do not eat carrion.) So those combinations of smart perceptions combined with smart actions vastly extend the value of AI. We move beyond simple classification into much higher ROI tasks that have implications for businesses, robotics, self-driving cars and video games.
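
Nicholson's grizzly-bear example is a two-stage pipeline: a perception model answers "what am I looking at?" and a learned policy answers "what should I do?". A toy sketch of that chaining, where a trivial rule stands in for the deep learning classifier and a lookup table stands in for what reinforcement learning would have learned:

```python
def perceive(pixels):
    """Stand-in for a deep learning classifier (real ones are trained networks)."""
    return "grizzly_bear" if sum(pixels) > 10 else "empty_trail"

POLICY = {  # stand-in for a learned RL policy: label -> action
    "grizzly_bear": "play_dead",
    "empty_trail": "keep_hiking",
}

def decide(pixels):
    """Chain perception into action: see -> classify -> act."""
    return POLICY[perceive(pixels)]

print(decide([5, 4, 3]))  # classifier says grizzly -> "play_dead"
print(decide([1, 1, 1]))  # nothing there -> "keep_hiking"
```

The value Nicholson points to comes entirely from the composition: neither stage alone gets you from raw pixels to a useful decision.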

Dr. Alex Liu, the Chief Data Scientist for IBM and the founder of RMDS Lab:

There will be more exploration of causality, which is the next generation of data analysis. It will be going from what to why. This will be crucial in improving the success rate of AI, which is still fairly low.

Tom (@ttaulli) is the author of the book,Artificial Intelligence Basics: A Non-Technical Introduction.

Read more:

AI (Artificial Intelligence): What We Can Expect In The New Year - Forbes

Deciphering Artificial Intelligence in the Future of Information Security – AiThority

Artificial Intelligence (AI) is creating a new frontline in information security. Systems that independently learn, reason and act will increasingly replicate human behavior. Like humans, they will be flawed, but also capable of achieving great things.

AI poses new information risks and makes some existing ones more dangerous. However, it can also be used for good and should become a key part of every organization's defensive arsenal. Business and information security leaders alike must understand both the risks and opportunities before embracing technologies that will soon become a critically important part of everyday business.

Already, AI is finding its way into many mainstream business use cases. Organizations use variations of AI to support processes in areas including customer service, human resources, and bank fraud detection. However, the hype can lead to confusion and skepticism over what AI actually is and what it really means for business and security. It is difficult to separate wishful thinking from reality.

Read More: How AI and Automation Are Joining Forces to Transform ITSM

As AI systems are adopted by organizations, they will become increasingly critical to day-to-day business operations. Some organizations already have, or will have, business models entirely dependent on AI technology. No matter the function for which an organization uses AI, such systems and the information that supports them have inherent vulnerabilities and are at risk from both accidental and adversarial threats. Compromised AI systems make poor decisions and produce unexpected outcomes.

Simultaneously, organizations are beginning to face sophisticated AI-enabled attacks which have the potential to compromise information and cause severe business impact at a greater speed and scale than ever before. Taking steps both to secure internal AI systems and defend against external AI-enabled threats will become vitally important in reducing information risk.

While AI systems adopted by organizations present a tempting target, adversarial attackers are also beginning to use AI for their own purposes. AI is a powerful tool that can be used to enhance attack techniques or even create entirely new ones. Organizations must be ready to adapt their defenses in order to cope with the scale and sophistication of AI-enabled cyberattacks.

Security practitioners are always fighting to keep up with the methods used by attackers, and AI systems can provide at least a short-term boost by significantly enhancing a variety of defensive mechanisms. AI can automate numerous tasks, helping understaffed security departments to bridge the specialist skills gap and improve the efficiency of their human practitioners. Protecting against many existing threats, AI can put defenders a step ahead. However, adversaries are not standing still; as AI-enabled threats become more sophisticated, security practitioners will need to use AI-supported defenses simply to keep up.

The benefit of AI in terms of response to threats is that it can act independently, taking responsive measures without the need for human oversight and at a much greater speed than a human could. Given the presence of malware that can compromise whole systems almost instantaneously, this is a highly valuable capability.
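
The autonomous-response idea above, containing a fast-moving threat with no human approval step, can be illustrated with a toy monitor that quarantines a host whose activity far exceeds its baseline. Everything here (the hostnames, the event-rate threshold, the "quarantine" action) is hypothetical; real systems use far richer signals:

```python
def should_quarantine(events_per_second, baseline=20.0, factor=10.0):
    """Flag a host whose activity far exceeds its normal baseline."""
    return events_per_second > baseline * factor

def respond(host, events_per_second, quarantined):
    """Contain automatically, at machine speed, with no human in the loop."""
    if should_quarantine(events_per_second):
        quarantined.add(host)
        return f"quarantined {host}"
    return f"{host} ok"

q = set()
print(respond("ws-042", 900.0, q))  # -> "quarantined ws-042"
print(respond("ws-007", 15.0, q))   # -> "ws-007 ok"
```

The speed advantage is exactly this: the decision and the containment happen in the same instant the anomaly is seen.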

The number of ways in which defensive mechanisms can be significantly enhanced by AI provide grounds for optimism, but as with any new type of technology, it is not a miracle cure. Security practitioners should be aware of the practical challenges involved when deploying defensive AI.

Questions and considerations before deploying defensive AI

AI systems have narrow intelligence and are designed to fulfill one type of task. They require sufficient data and inputs in order to complete that task. One single defensive AI system will not be able to enhance all the defensive mechanisms outlined previously; an organization is likely to adopt multiple systems. Before purchasing and deploying defensive AI, security leaders should consider whether an AI system is required to solve the problem, or whether more conventional options would do a similar or better job.

Read More: Artificial Intelligence in Restaurant Business

Questions to ask include:

Security leaders also need to consider issues of governance around defensive AI, such as:

AI will not replace the need for skilled security practitioners with technical expertise and an intuitive nose for risk. These security practitioners need to balance the need for human oversight with the confidence to allow AI-supported controls to act autonomously and effectively. Such confidence will take time to develop, especially as stories continue to emerge of AI proving unreliable or making poor or unexpected decisions.

AI systems will make mistakes; a beneficial aspect of human oversight is that human practitioners can provide feedback when things go wrong and incorporate it into the AI's decision-making process. Of course, humans make mistakes too; organizations that adopt defensive AI need to devote time, training and support to help security practitioners learn to work with intelligent systems.

Given time to develop and learn together, the combination of human and artificial intelligence should become a valuable component of an organization's cyber defenses.

Computer systems that can independently learn, reason and act herald a new technological era, full of both risk and opportunity. The advances already on display are only the tip of the iceberg; there is a lot more to come. The speed and scale at which AI systems think will be increased by growing access to big data, greater computing power and continuous refinement of programming techniques. Such power will have the potential to both make and destroy a business.

AI tools and techniques that can be used in defense are also available to malicious actors, including criminals, hacktivists and state-sponsored groups. Sooner rather than later, these adversaries will find ways to use AI to create completely new threats such as intelligent malware, and at that point, defensive AI will not just be a nice-to-have. It will be a necessity. Security practitioners using traditional controls will not be able to cope with the speed, volume and sophistication of attacks.

To thrive in the new era, organizations need to reduce the risks posed by AI and make the most of the opportunities it offers. That means securing their own intelligent systems and deploying their own intelligent defenses. AI is no longer a vision of the distant future: the time to start preparing is now.

Read More: How Artificial Intelligence Can Transform Influencer Marketing

Go here to see the original:

Deciphering Artificial Intelligence in the Future of Information Security - AiThority

Artificial Intelligence and the Biopharmaceutical Industry: What’s Next? – JD Supra


Go here to see the original:

Artificial Intelligence and the Biopharmaceutical Industry: What's Next? - JD Supra

Artificial Intelligence, Foresight, and the Offense-Defense Balance – War on the Rocks

There is a growing perception that AI will be a transformative technology for international security. The current U.S. National Security Strategy names artificial intelligence as one of a small number of technologies that will be critical to the country's future. Senior defense officials have commented that the United States is at an inflection point in the power of artificial intelligence, and even that AI might be the first technology to change the fundamental nature of war.

However, there is still little clarity regarding just how artificial intelligence will transform the security landscape. One of the most important open questions is whether applications of AI, such as drone swarms and software vulnerability discovery tools, will tend to be more useful for conducting offensive or defensive military operations. If AI favors the offense, then a significant body of international relations theory suggests that this could have destabilizing effects. States could find themselves increasingly able to use force and increasingly frightened of having force used against them, making arms-racing and war more likely. If AI favors the defense, on the other hand, then it may act as a stabilizing force.

Anticipating the impact of AI on the so-called offense-defense balance across different military domains could be extremely valuable. It could help us to foresee new threats to stability before they arise and act to mitigate them, for instance by pursuing specific arms agreements or prioritizing the development of applications with potential stabilizing effects.

Unfortunately, the historical record suggests that attempts to forecast changes in the offense-defense balance are often unsuccessful. It can even be difficult to detect the changes that newly adopted technologies have already caused. In the lead-up to the First World War, for instance, most analysts failed to recognize that the introduction of machine guns and barbed wire had tilted the offense-defense balance far toward defense. The years of intractable trench warfare that followed came as a surprise to the states involved.

While there are clearly limits on the ability to anticipate shifts in the offense-defense balance, some forms of technological change have more predictable effects than others. In particular, as we argue in a recent paper, changes that essentially scale up existing capabilities are likely to be much easier to analyze than changes that introduce fundamentally new capabilities. Substantial insight into the impacts of AI can be achieved by focusing on this kind of quantitative change.

Two Kinds of Technological Change

In a classic analysis of arms races, Samuel Huntington draws a distinction between qualitative and quantitative changes in military capabilities. A qualitative change involves the introduction of what might be considered a new form of force. A quantitative change involves the expansion of an existing form of force.

Although this is a somewhat abstract distinction, it is easy to illustrate with concrete examples. The introduction of dreadnoughts into naval surface warfare in the early twentieth century is most naturally understood as a qualitative change in naval technology. In contrast, the subsequent naval arms race, which saw England and Germany competing to manufacture ever larger numbers of dreadnoughts, represented a quantitative change.

Attempts to understand changes in the offense-defense balance tend to focus almost exclusively on the effects of qualitative changes. Unfortunately, the effects of such qualitative changes are likely to be especially difficult to anticipate. One reason foresight about such changes is difficult is that the introduction of a new form of force, from the tank to the torpedo to the phishing attack, will often warrant the introduction of substantially new tactics. Since these tactics emerge at least in part through a process of trial and error, as both attackers and defenders learn from the experience of conflict, there is a limit to how much can ultimately be foreseen.

Although quantitative technological changes are given less attention, they can also in principle have very large effects on the offense-defense balance. Furthermore, these effects may exhibit certain regularities that make them easier to anticipate than the effects of qualitative change. Focusing on quantitative change may then be a promising way forward to gain insight into the potential impact of artificial intelligence.

How Numbers Matter

To understand how quantitative changes can matter, and how they can be predictable, it is useful to consider the case of a ground invasion. If the sizes of two armies double in the lead-up to an invasion, for example, then it is not safe to assume that the effect will simply cancel out and leave the balance of forces the same as it was prior to the doubling. Rather, research on combat dynamics suggests that increasing the total number of soldiers will tend to benefit the attacker when force levels are sufficiently low and benefit the defender when force levels are sufficiently high. The reason is that the initial growth in numbers primarily improves the attacker's ability to send soldiers through poorly protected sections of the defender's border. Eventually, however, the border becomes increasingly saturated with ground forces, eliminating the attacker's ability to exploit poorly defended sections.

Figure 1: A simple model illustrating the importance of force levels. The ability of the attacker (in red) to send forces through poorly defended sections of the border rises and then falls as total force levels increase.

This phenomenon is also likely to arise in many other domains where there are multiple vulnerable points that a defender hopes to protect. For example, in the cyber domain, increasing the number of software vulnerabilities that an attacker and a defender can each discover will benefit the attacker at first: the primary initial effect is to increase the attacker's ability to discover vulnerabilities that the defender has failed to discover and patch. In the long run, however, the defender will eventually discover every vulnerability that can be discovered, leaving nothing behind for the attacker to exploit.

In general, growth in numbers will often benefit the attacker when numbers are sufficiently low and benefit the defender when they are sufficiently high. We refer to this regularity as offensive-then-defensive scaling and suggest that it can be helpful for predicting shifts in the offense-defense balance in a wide range of domains.
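
The vulnerability example above can be reduced to a toy calculation. Suppose a program contains some number of latent flaws, and that better tooling raises the probability that each side independently finds any given flaw; the attacker profits only from flaws the defender misses. The sketch below illustrates offensive-then-defensive scaling under those simplifying, illustrative assumptions; it is not a model taken from the authors' paper:

```python
# Toy model of offensive-then-defensive scaling in vulnerability discovery.
# Assumptions (illustrative, not from the source paper): a program contains
# num_vulns latent flaws, and each side independently finds any given flaw
# with probability p. The attacker can exploit a flaw only if the attacker
# finds it AND the defender misses it (and so cannot patch it).

def expected_exploitable(num_vulns: int, p: float) -> float:
    """Expected number of flaws the attacker finds but the defender
    misses: num_vulns * p * (1 - p)."""
    return num_vulns * p * (1 - p)

# As capability p scales from 0 toward 1, the attacker's edge first rises,
# peaks at p = 0.5, then collapses to zero once the defender finds
# everything -- the rise-then-fall pattern described in the text.
for p in [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]:
    print(f"p={p:.1f}  expected exploitable out of 100: "
          f"{expected_exploitable(100, p):5.1f}")
```

The same rise-then-fall shape appears in the ground-invasion example, where the role of p is played by the density of forces along the border.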

Artificial Intelligence and Quantitative Change

Applications of artificial intelligence will undoubtedly be responsible for an enormous range of qualitative changes to the character of war. It is easy to imagine states such as the United States and China competing to deploy ever more novel systems in a cat-and-mouse game that has little to do with quantity. An emphasis on qualitative advantage over quantitative advantage is a fairly explicit feature of the American military strategy and has been since at least the so-called Second Offset strategy that emerged in the middle of the Cold War.

However, some emerging applications of artificial intelligence do seem to lend themselves most naturally to competition on the basis of rapidly increasing quantity. Armed drone swarms are one example. Paul Scharre has argued that the military utility of these swarms may lie in the fact that they offer an opportunity to substitute quantity for quality. A large swarm of individually expendable drones may be able to overwhelm the defenses of individual weapon platforms, such as aircraft carriers, by attacking from more directions or in more waves than a platform's defenses can manage. If this method of attack is in fact viable, one could see a race to build larger and larger swarms, ultimately resulting in swarms containing billions of drones. The phenomenon of offensive-then-defensive scaling suggests that growing swarm sizes could initially benefit attackers, who can concentrate on less well-defended targets and parts of targets, before potentially allowing defensive swarms to win out if numbers grow far enough.

Automated vulnerability discovery tools stand out as another relevant example, since they have the potential to vastly increase the number of software vulnerabilities that both attackers and defenders can discover. The DARPA Cyber Grand Challenge recently showcased machine systems autonomously discovering, patching, and exploiting software vulnerabilities. Recent work on novel techniques such as deep reinforcement fuzzing also suggests significant promise. The computer security expert Bruce Schneier has suggested that continued progress will ultimately make it feasible to discover and patch every single vulnerability in a given piece of software, shifting the cyber offense-defense balance significantly toward defense. Before that point, however, there is reason for concern that these new tools could benefit attackers most of all.

Forecasting the Impact of Technology

The impact of AI on the offense-defense balance remains highly uncertain. The greatest impact might come from an as-yet-unforeseen qualitative change. Our contribution here is to point out one particularly precise way in which AI could impact the offense-defense balance, through quantitative increases of capabilities in domains that exhibit offensive-then-defensive scaling. Even if this idea is mistaken, it is our hope that by understanding it, researchers are more likely to see other impacts. In foreseeing and understanding these potential impacts, policymakers could be better prepared to mitigate the most dangerous consequences, through prioritizing the development of applications that favor defense, investigating countermeasures, or constructing stabilizing norms and institutions.

Work to understand and forecast the impacts of technology is hard and should not be expected to produce confident answers. The importance of the challenge, however, means that researchers should still try, in a scientific and humble way.

This publication was made possible (in part) by a grant to the Center for a New American Security from Carnegie Corporation of New York. The statements made and views expressed are solely the responsibility of the author(s).

Ben Garfinkel is a DPhil scholar in International Relations, University of Oxford, and research fellow at the Centre for the Governance of AI, Future of Humanity Institute.

Allan Dafoe is associate professor in the International Politics of AI, University of Oxford, and director of the Centre for the Governance of AI, Future of Humanity Institute. For more information, see http://www.governance.ai and http://www.allandafoe.com.

Image: U.S. Air Force (Photo by Tech. Sgt. R.J. Biermann)


7 tips to get your resume past the robots reading it – CNBC

There are about 7.3 million open jobs in the U.S., according to the most recent Job Openings and Labor Turnover Survey from the Bureau of Labor Statistics. And for many job seekers vying for these openings, the likelihood they'll submit their application to an artificial intelligence-powered hiring system is growing.

A 2017 Deloitte report found 33% of employers already use some form of AI in the hiring process to save time and reduce human bias. These algorithms scan applications for specific words and phrases around work history, responsibilities, skills and accomplishments to identify candidates who match well with the job description.

These assessments may also aim to predict a candidate's future success by matching their abilities and accomplishments to those held by a company's top performers.

But it remains unclear how effective these programs are.

As Sue Shellenbarger reports for The Wall Street Journal, many vendors of these systems don't tell employers how their algorithms work. And employers aren't required to inform job candidates when their resumes will be reviewed by these systems.

That said, "it's sometimes possible to tell whether an employer is using an AI-driven tool by looking for a vendor's logo on the employer's career site," Shellenbarger writes. "In other cases, hovering your cursor over the 'submit' button will reveal the URL where your application is being sent."

CNBC Make It spoke with career experts about how to make sure your next application makes it past the initial robot test.

AI-powered hiring platforms are designed to identify candidates whose resumes match open job descriptions the most. These machines are nuanced, but their use still means very specific wording, repetition and prioritization of certain phrases matter.

Job seekers can make sure to highlight the right skills to get past initial screens by using tools, such as an online cloud generator, to understand what the AI system will prioritize most. Candidates can drop in the text of a job description and see which words appear most often, based on how large they appear within the word cloud.
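
The word-cloud trick described above amounts to simple term counting. Here is a minimal sketch of the idea; the stopword list and the sample job description are illustrative assumptions, not any vendor's actual screening algorithm:

```python
# Count how often each word appears in a job description, as a rough
# stand-in for a word-cloud generator: the most frequent terms are the
# ones an applicant-tracking system is likely to weight most heavily.
from collections import Counter
import re

def top_keywords(job_description: str, n: int = 5) -> list[tuple[str, int]]:
    # Lowercase, keep only alphabetic runs, and drop short filler words.
    words = re.findall(r"[a-z]+", job_description.lower())
    stopwords = {"the", "and", "a", "an", "of", "to", "in", "for", "with"}
    counts = Counter(w for w in words if w not in stopwords and len(w) > 2)
    return counts.most_common(n)

description = (
    "Seeking a data analyst with SQL experience. The analyst will build "
    "SQL reports and communicate findings. Strong communication required."
)
print(top_keywords(description, 3))
```

In this hypothetical description, "analyst" and "sql" surface as the most repeated terms, so those are the words a candidate would want to mirror in their resume.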

CareerBuilder also created an AI resume builder to help candidates include skills on an application they may not have identified on their own.

Including transferable skills mentioned in the job description can also increase your resume's odds. After all, executives surveyed in a recent IBM report say soft skills such as flexibility, time management, teamwork and communication are some of the most important skills in the workforce today.

"Job seekers should be cognizant of how they are positioning their professional background to put their best foot forward," Michelle Armer, chief people officer at talent acquisition company CareerBuilder, tells CNBC Make It. "Since a candidate's skill set will help set them apart from other applicants, putting these front and center on a resume will help make sure you're giving skills the attention they deserve."

It's also worth noting that AI enables employers to source candidates from the entire application system more easily, rather than limiting consideration just to people who applied to a specific role. "As a result," says TopResume career expert Amanda Augustine, "you could be contacted for a role the company believes is a good fit even if you never specifically applied for that opportunity."

When it comes to actually writing your resume, here are seven ways to make sure it looks best for the robots who will be reading it.

Use a text-based file format, such as a Microsoft Word document, rather than a PDF, HTML, Open Office, or Apple Pages file, so buzzwords can be accurately scanned by AI programs. Augustine suggests job seekers skip images, graphics and logos, which might not be readable. To test how well bots will comprehend your resume, copy it into a plain text file and make sure nothing falls out of order and no strange symbols pop up.

Mirror the job description in your work history. Job titles should be listed in reverse-chronological order, Augustine says, because machines favor documents with a clear hierarchy to their information. For each role, prioritize the most relevant information that matches the critical responsibilities and requirements of the job you're applying for. "The bullets that directly match one of the job requirements should be listed first," Augustine adds, "and other notable contributions or accomplishments should be listed lower in a set of bullets."

Include keywords from the job description, such as the role's day-to-day responsibilities, desired previous experience and overall purpose within the organization. Consider having a separate skills section, Augustine says, where you list any certifications, technical skills and soft skills mentioned in the job description.

Quantify performance results, Shellenbarger writes. Highlight ones that involve meeting company goals, driving revenue, leading a certain number of people or projects, being efficient with costs and so on.

Tailor each application to the description of each role you're applying for. These AI systems are generally built to weed out disqualifying resumes that don't match enough of the job description. The more closely you mirror the job description in your application, the better, Augustine says.

Don't place information in the document header or footer, even though resumes traditionally list contact information here. According to Augustine, many application systems can't read the information in this section, so crucial details may be omitted.

Network within the company to build contacts and get your resume to the hiring manager's inbox directly. "While AI helps employers narrow down the number of applicants they will move forward with for interviews," Armer says, "networking is also important."

AI hiring programs show promise at filling roles with greater efficiency, but can also perpetuate bias when they reward candidates with similar backgrounds and experiences as existing employees. Armer stresses hiring algorithms need to be built by teams of diverse individuals across race, ethnicity, gender, experience and other background factors in order to minimize bias.

This is also where getting your resume in front of a human can pay off the most.

"When you have someone on the inside advocating for you, you are often able to bypass the algorithm and have your application delivered directly to the recruiter or hiring manager, rather than getting caught up in the screening process," Augustine says.

Augustine recommends job seekers take stock of their existing network and identify those who may know someone at the companies they're interested in working at. "Look for professional organizations and events that are tied to your industry; 10times.com is a great place to find events around the world for every imaginable field," she adds.

Finally, Armer recommends those starting their job hunt review and polish their social media profiles.



Finland offers crash course in artificial intelligence to EU – The Associated Press

HELSINKI (AP) - Finland is offering a techy Christmas gift to all European Union citizens: a free-of-charge online course in artificial intelligence in their own language, officials said Tuesday.

The tech-savvy Nordic nation, led by the 34-year-old Prime Minister Sanna Marin, is marking the end of its rotating presidency of the EU at the end of the year with a highly ambitious goal.

Instead of handing out the usual ties and scarves to EU officials and journalists, the Finnish government has opted to give practical understanding of AI to 1% of EU citizens, or about 5 million people, through a basic online course by the end of 2021.

It is teaming up with the University of Helsinki, Finland's largest and oldest academic institution, and the Finland-based tech consultancy Reaktor.

Teemu Roos, a University of Helsinki associate professor in the department of computer science, described the nearly $2 million project as a civics course in AI to help EU citizens cope with society's ever-increasing digitalization and the possibilities AI offers in the jobs market.

The course covers elementary AI concepts in a practical way and doesn't go into deeper topics like coding, he said.

"We have enormous potential in Europe, but what we lack is investment in AI," Roos said, adding that the continent faces fierce AI competition from digital powers like China and the United States.

The initiative is paid for by the Finnish ministry for economic affairs and employment, and officials said the course is meant for all EU citizens whatever their age, education or profession.

Since its launch in Finland in 2018, "The Elements of AI" has been phenomenally successful, becoming the most popular course ever offered by the University of Helsinki, which traces its roots back to 1640. More than 220,000 students from over 110 countries have taken it online so far, Roos said.

A quarter of those enrolled so far are aged 45 and over, and some 40% are women. The share of women is nearly 60% among Finnish participants - a remarkable figure in the male-dominated technology domain.

Consisting of several modules, the online course is meant to be completed in about six weeks full time - or up to six months on a lighter schedule - and is currently available in Finnish, English, Swedish and Estonian.

Together with Reaktor and local EU partners, the university plans to translate the course into the EU's 20 remaining official languages over the next two years.

Megan Schaible, COO of Reaktor Education, said during the project's presentation in Brussels last week that the company decided to join forces with the Finnish university to prove that AI should not be left in the hands of a few elite coders.

An official University of Helsinki diploma will be provided to those who pass, and Roos said many EU universities would likely give credit for taking the course, allowing students to include it in their curricula.

For technology aficionados, the University of Helsinki's computer science department is known as the alma mater of Linus Torvalds, the Finnish software engineer who developed the Linux operating system during his studies there in the early 1990s.

In September, Google set up its free-of-charge Digital Garage training hub in the Finnish capital with the intention of helping job-seekers, entrepreneurs and children to brush up their digital skills including AI.


How Artificial Intelligence Is Humanizing the Healthcare Industry – HealthITAnalytics.com

December 17, 2019 - Seventy-nine percent of healthcare professionals indicate that artificial intelligence tools have helped mitigate clinician burnout, suggesting that the technology enables providers to deliver more engaging, patient-centered care, according to a survey conducted by MIT Technology Review and GE Healthcare.

As artificial intelligence tools have slowly made their way into the healthcare industry, many have voiced concerns that the technology will remove the human aspect of patient care, leaving individuals in the care of robots and machines.

"Healthcare institutions have been anticipating the impact that artificial intelligence (AI) will have on the performance and efficiency of their operations and their workforces, and the quality of patient care," the report stated.

"Contrary to common, yet unproven, fears that machines will replace human workers, AI technologies in health care may actually be re-humanizing healthcare, just as the system itself shifts to value-based care models that may favor the outcome patients receive instead of the number of patients seen."

Through interviews with over 900 healthcare professionals, researchers found that providers are already using AI to improve data analysis, enable better treatment and diagnosis, and reduce administrative burdens, all of which free up clinicians' time to perform other tasks.


"Numerous technologies are in play today to allow healthcare professionals to deliver the best care, increasingly customized to patients, and at lower costs," the report said.

"Our survey has found medical professionals are already using AI tools to improve both patient care and back-end business processes, from increasing the accuracy of oncological diagnosis to increasing the efficiency of managing schedules and workflow."

The survey found that medical staff with pilot AI projects spend one-third less time writing reports, while those with extensive AI programs spend two-thirds less time writing reports. Additionally, 45 percent of participants said that AI has helped increase consultation time, as well as time to perform surgery and other procedures.

For those with the most extensive AI rollouts, 70 percent expect to spend more time performing procedures than doing administrative or other work.

"AI is being used to assume many of a physician's more mundane administrative responsibilities, such as taking notes or updating electronic health records," researchers said. "The more AI is deployed, the less time doctors spend at their computers."


Respondents also indicated that AI is helping them gain an edge in the healthcare market. Eighty percent of business and administrative healthcare professionals said that AI is helping them improve revenue opportunities, while 81 percent said they think AI will make them more competitive providers.

The report also showed that AI-related projects will continue to receive an increasing portion of healthcare spending now and in the future. Seventy-nine percent of respondents said they will be spending more to develop AI applications.

Respondents also indicated that AI has increased the operational efficiency of healthcare organizations. Seventy-eight percent of healthcare professionals said that their AI deployments have already created workflow improvements in areas including schedule management.

Using AI to optimize schedule management and other administrative tasks creates opportunities to leverage AI for more patient-facing applications, allowing clinicians to work with patients more closely.

"AI's core value proposition is in both improving diagnostic abilities and reducing regulatory and data complexities by automating and streamlining workflow. This allows healthcare professionals to harness the wealth of insight the industry is generating without drowning in it," the report said.


AI has also helped healthcare professionals reduce clinical errors. Medical staff who don't use AI cited fighting clinical error as a key challenge two-thirds of the time, more than double the rate among medical staff who have AI deployments.

Additionally, advanced tools are helping users identify and treat clinical issues. Seventy-five percent of respondents agree that AI has enabled better predictions in the treatment of disease.

AI-enabled decision-support algorithms allow medical teams to make more accurate diagnoses, researchers noted.

"This means doing something big by doing something really small: noticing minute irregularities in patient information. That could be the difference between acting on a life-threatening issue, or missing it."

While AI has shown a lot of promise in the industry, the technology still comes with challenges. Fifty-seven percent of respondents said that integrating AI applications into existing systems is challenging, and more than half of professionals planning to deploy AI raise concerns about medical professional adoption, support from top management, and technical support.

To overcome these challenges, researchers recommended that clinical staff collaborate to implement and deploy AI tools.

"AI needs to work for healthcare professionals as part of a robust, integrated ecosystem. It needs to be more than deploying technology; in fact, the more humanized the application of AI is, the more it will be adopted and improve results and return on investment. After all, in healthcare, the priority is the patient," researchers concluded.
