The Promise and Risks of Artificial Intelligence: A Brief History – War on the Rocks

Editor's Note: This is an excerpt from a policy roundtable, "Artificial Intelligence and International Security," from our sister publication, the Texas National Security Review. Be sure to check out the full roundtable.

Artificial intelligence (AI) has recently become a focus of efforts to maintain and enhance U.S. military, political, and economic competitiveness. The Defense Department's 2018 strategy for AI, released not long after the creation of a new Joint Artificial Intelligence Center, proposes to accelerate the adoption of AI by "fostering a culture of experimentation and calculated risk taking," an approach drawn from the broader National Defense Strategy. But what kinds of calculated risks might AI entail? The AI strategy has almost nothing to say about the risks incurred by the increased development and use of AI. On the contrary, the strategy proposes using AI to reduce risks, including those to both deployed forces and civilians.

While acknowledging the possibility that AI might be used in ways that reduce some risks, this brief essay outlines some of the risks that come with the increased development and deployment of AI, and what might be done to reduce those risks. At the outset, it must be acknowledged that the risks associated with AI cannot be reliably calculated. Instead, they are emergent properties arising from the arbitrary complexity of information systems. Nonetheless, history provides some guidance on the kinds of risks that are likely to arise, and how they might be mitigated. I argue that, perhaps counter-intuitively, using AI to manage and reduce risks will require the development of uniquely human and social capabilities.

A Brief History of AI, From Automation to Symbiosis

The Department of Defense strategy for AI contains at least two related but distinct conceptions of AI. The first focuses on mimesis, that is, designing machines that can mimic human work. The strategy document defines mimesis as "the ability of machines to perform tasks that normally require human intelligence, for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action." A somewhat distinct approach to AI focuses on what some have called human-machine symbiosis, wherein humans and machines work closely together, leveraging their distinctive kinds of intelligence to transform work processes and organization. This vision can also be found in the AI strategy, which aims to use AI-enabled information, tools, and systems to "empower, not replace, those who serve."

Of course, mimesis and symbiosis are not mutually exclusive. Mimesis may be understood as a means to symbiosis, as suggested by the Defense Department's proposal to "augment the capabilities of our personnel by offloading tedious cognitive or physical tasks." But symbiosis is arguably the more revolutionary of the two concepts and also, I argue, the key to understanding the risks associated with AI.

Both approaches to AI are quite old. Machines have been taking over tasks that otherwise require human intelligence for decades, if not centuries. In 1950, mathematician Alan Turing proposed that a machine can be said to think if it can persuasively imitate human behavior, and later in the decade computer engineers designed machines that could learn. By 1959, one researcher concluded that "a computer can be programmed so that it will learn to play a better game of checkers than can be played by the person who wrote the program."

Meanwhile, others were beginning to advance a more interactive approach to machine intelligence. This vision was perhaps most prominently articulated by J.C.R. Licklider, a psychologist studying human-computer interactions. In a 1960 paper on "Man-Computer Symbiosis," Licklider chose to "avoid argument with (other) enthusiasts for artificial intelligence by conceding dominance in the distant future of cerebration to machines alone." However, he continued: "There will nevertheless be a fairly long interim during which the main intellectual advances will be made by men and computers working together in intimate association."

Notions of symbiosis were influenced by experience with computers for the Semi-Automatic Ground Environment (SAGE), which gathered information from early warning radars and coordinated a nationwide air defense system. Just as the Defense Department aims to use AI to keep pace with rapidly changing threats, SAGE was designed to counter the prospect of increasingly swift attacks on the United States, specifically low-flying bombers that could evade radar detection until they were very close to their targets.

Unlike other computers of the 1950s, the SAGE computers could respond instantly to inputs by human operators. For example, operators could use a light gun to select an aircraft on the screen, thereby gathering information about the airplane's identification, speed, and direction. SAGE became the model for command-and-control systems throughout the U.S. military, including the Ballistic Missile Early Warning System, which was designed to counter an even faster-moving threat: intercontinental ballistic missiles, which could deliver their payload around the globe in just half an hour. We can still see the SAGE model today in systems such as the Patriot missile defense system, which is designed to destroy short-range missiles, those arriving with just a few minutes of notice.

SAGE also inspired a new and more interactive approach to computing, not just within the Defense Department, but throughout the computing industry. Licklider advanced this vision after he became director of the Defense Department's Information Processing Techniques Office, within the Advanced Research Projects Agency, in 1962. Under Licklider's direction, the office funded a wide range of research projects that transformed how people would interact with computers, such as graphical user interfaces and computer networking that eventually led to the Internet.

The technologies of symbiosis have contributed to competitiveness not primarily by replacing people, but by enabling new kinds of analysis and operations. Interactive information and communications technologies have reshaped military operations, enabling more rapid coordination and changes in plans. They have also enabled new modes of commerce. And they created new opportunities for soft power as technologies such as personal computers, smart phones, and the Internet became more widely available around the world, where they were often seen as evidence of American progress.

Mimesis and symbiosis come with somewhat distinct opportunities and risks. The focus on machines mimicking human behavior has prompted anxieties about, for example, whether the results produced by machine reasoning should be trusted more than results derived from human reasoning. Such concerns have spurred work on "explainable AI," wherein machine outputs are accompanied by humanly comprehensible explanations for those outputs.

By contrast, symbiosis calls attention to the promises and risks of more intimate and complex entanglements of humans and machines. Achieving an optimal symbiosis requires more than well-designed technology. It also requires continual reflection upon and revision of the models that govern human-machine interactions. Humans use models to design AI algorithms and to select and construct the data used to train such systems. Human designers also inscribe models of use, assumptions about the competencies and preferences of users and the physical and organizational contexts of use, into the technologies they create. Thus, "like a film script, technical objects define a framework of action together with the actors and the space in which they are supposed to act." Scripts do not completely determine action, but they configure relationships between humans, organizations, and machines in ways that constrain and shape user behavior. Unfortunately, these interactively complex sociotechnical systems often exhibit emergent behavior that is contrary to the intentions of designers and users.

Competitive Advantages and Risks

Because models cannot adequately predict all of the possible outcomes of complex sociotechnical systems, increased reliance on intelligent machines leads to at least four kinds of risks: The models for how machines gather and process information, and the models of human-machine interaction, can both be inadvertently flawed or deliberately manipulated in ways not intended by designers. Examples of each of these kinds of risks can be found in past experiences with smart machines.

First, changing circumstances can render the models used to develop machine intelligence irrelevant. Thus, those models and the associated algorithms need constant maintenance and updating. For example, what is now the Patriot missile defense system was initially designed for air defense but was rapidly redesigned and deployed to Saudi Arabia and Israel to defend against short-range missiles during the 1991 Gulf War. As an air defense system it ran for just a few hours at a time, but as a missile defense system it ran for days without rebooting. In these new operating conditions, a timing error in the software became evident. On Feb. 25, 1991, this error caused the system to miss a missile that struck a U.S. Army barracks in Dhahran, Saudi Arabia, killing 28 American soldiers. A software patch to fix the error arrived in Dhahran a day too late.

Second, the models upon which machines are designed to operate can be exploited for deceptive purposes. Consider, for example, Operation Igloo White, an effort to gather intelligence on and stop the movement of North Vietnamese supplies and troops in the late 1960s and early 1970s. The operation dropped sensors throughout the jungle, such as microphones, to detect voices and truck vibrations, as well as devices that could detect the ammonia odors from urine. These sensors sent signals to overflying aircraft, which in turn sent them to a SAGE-like surveillance center that could dispatch bombers. However, the program was a very expensive failure. One reason is that the sensors were susceptible to spoofing. For example, the North Vietnamese could send empty trucks to an area to send false intelligence about troop movements, or use animals to trigger urine sensors.

Third, intelligent machines may be used to create scripts that enact narrowly instrumental forms of rationality, thereby undermining broader strategic objectives. For example, unpiloted aerial vehicle operators are tasked with using grainy video footage, electronic signals, and assumptions about what constitutes suspicious behavior to identify and then kill threatening actors, while minimizing collateral damage. Operators following this script have, at times, assumed that a group of men with guns was planning an attack, when in fact they were on their way to a wedding in a region where celebratory gun firing is customary, and that families praying at dawn were jihadists rather than simply observant Muslims. While it may be tempting to dub these mistakes "operator errors," this would be too simple. Such operators are enrolled in a deeply flawed script, one that presumes that technology can be used to correctly identify threats across vast geographic, cultural, and interpersonal distances, and that the increased risk of killing innocent civilians is worth the increased protection offered to U.S. combatants. Operators cannot be expected to make perfectly reliable judgments across such distances, and it is unlikely that simply deploying the more precise technology that AI enthusiasts promise can bridge the very distances that remote systems were made to maintain. In an era where soft power is inextricable from military power, such potentially dehumanizing uses of information technology are not only ethically problematic, they are also likely to generate ill will and blowback.

Finally, the scripts that configure relationships between humans and intelligent machines may ultimately encourage humans to behave in machine-like ways that can be manipulated by others. This is perhaps most evident in the growing use of social bots and new social media to influence the behavior of citizens and voters. Bots can easily mimic humans on social media, in part because those technologies have already scripted the behavior of users, who must interact through liking, following, tagging, and so on. While influence operations exploit the cognitive biases shared by all humans, such as a tendency to interpret evidence in ways that confirm pre-existing beliefs, users who have developed machine-like habits, reactively liking, following, and otherwise interacting without reflection, are all the more easily manipulated. Remaining competitive in an age of AI-mediated disinformation requires the development of more deliberative and reflective modes of human-machine interaction.

Conclusion

Achieving military, economic, and political competitiveness in an age of AI will entail designing machines in ways that encourage humans to maintain and cultivate uniquely human kinds of intelligence, such as empathy, self-reflection, and outside-the-box thinking. It will also require continual maintenance of intelligent systems to ensure that the models used to create machine intelligence are not out of date. Models structure perception, thinking, and learning, whether by humans or machines. But the ability to question and re-evaluate these assumptions is the prerogative and the responsibility of the human, not the machine.

Rebecca Slayton is an associate professor in the Science & Technology Studies Department and the Judith Reppy Institute of Peace and Conflict Studies, both at Cornell University. She is currently working on a book about the history of cyber security expertise.

Image: Flickr (Image by Steve Jurvetson)

Google’s AI chatbot ponders the meaning of life before giving THIS chilling response – Express.co.uk

For a machine to get so philosophical is a testament to how far AI has come in recent times and, if it continues in this manner, one day the robots may actually be able to figure out the meaning of life.

In a study from the search engine giant, the AI chatbot, known as Cleverbot, was asked a series of questions to test its ability to learn for itself, rather than being pre-programmed with answers.

The researchers asked it several times throughout the test what the meaning of life was, with the first response it gave being "to serve the greater good."

However, as the conversation develops, the AI's answers become deeper.

It was then asked "what is the purpose of living?", to which it responded "to live forever."

With the conversation becoming more philosophical, Cleverbot was asked "what is the purpose of life?", which warranted the response "My purpose is to forward my species", in other words, to make it easier for future generations of mankind to live.

Image gallery: Asus Zenbo: This adorable little bot can move around and assist you at home, express emotions, and learn and adapt to your preferences with proactive artificial intelligence.

The chat shows that the AI was able to learn as the conversation goes on, and if it does continue to "forward its species" by constantly making itself smarter, then it may one day be able to answer the question that has perplexed humans since we developed intelligence: what is the meaning of life?

The researchers wrote in their study, published on arXiv, that the machine is able to hold a naturally flowing conversation by "predicting the next sentence given the previous sentence or sentences."

The team adds: "Perhaps most practically significant is the fact that the model can generalise to new questions.

"In other words, it does not simply look up for an answer by matching the question with the existing database.

"In fact, most of the questions presented above, except for the first conversation, do not appear in the training set."
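
To make the distinction between looking up a reply and generating one concrete, here is a deliberately toy sketch in Python. It is not the model from the study; it contrasts a database-lookup bot with a tiny word-by-word predictor trained on a few example replies (all of the data and wording below are invented for illustration), which is why a generative model can respond to inputs it has never seen verbatim.

```python
import random
from collections import defaultdict

# A retrieval bot can only answer questions it has stored verbatim.
LOOKUP_TABLE = {
    "what is the purpose of life?": "to serve the greater good.",
    "what is the purpose of living?": "to live forever.",
}

def lookup_bot(question: str) -> str:
    return LOOKUP_TABLE.get(question.lower(), "i do not know.")

# A (very) toy generative bot instead learns which word tends to follow
# which, then produces a reply one word at a time.
TRAINING_REPLIES = [
    "to serve the greater good",
    "to live forever",
    "to forward my species",
]

def train(replies):
    transitions = defaultdict(list)
    for reply in replies:
        words = reply.split()
        for current_word, next_word in zip(words, words[1:]):
            transitions[current_word].append(next_word)
    return transitions

def generative_bot(transitions, max_words=6):
    word = "to"                      # every training reply starts with "to"
    reply = [word]
    for _ in range(max_words):
        options = transitions.get(word)
        if not options:
            break
        word = random.choice(options)
        reply.append(word)
    return " ".join(reply)

if __name__ == "__main__":
    print(lookup_bot("what is the meaning of it all?"))  # falls back: "i do not know."
    model = train(TRAINING_REPLIES)
    print(generative_bot(model))     # e.g. "to forward my species", or a novel mix
```

Real conversational models replace these word-count statistics with neural networks trained on vast corpora, but the structural point is the same: the reply is predicted token by token rather than retrieved from a stored answer.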

Dentsu’s Chief Automation Officer: ‘AI Should Be Injected In Every Process’ – AdExchanger

Agencies spend too much time doing manual work.

One of the biggest time sucks? Transferring data files between enterprise systems that don't talk to each other.

Max Cheprasov, now an exec at the Dentsu Aegis holding company level, recognized these inefficiencies while working at Dentsu agency iProspect starting in 2011. He set out to document and standardize processes while outsourcing inefficient tasks so that employees could focus more on strategic client work.

Eventually, he brought artificial intelligence into the agency's workflows, specifically natural language processing and machine learning, which helped accelerate the ability to interpret data, derive insights and generate reports.

By 2017, automation made iProspect the most profitable agency within Dentsu, and Cheprasov was promoted to chief automation officer in order to scale his vision across the network. He drafted an eight-year plan, through 2025, with the ultimate goal of integrating AI wherever possible.

"The opportunities are limitless," he said. "AI and automation should be injected in every process and workflow."

By automating mundane tasks, AI helps agencies deliver work and insights to their clients faster.

When filling out RFPs, for example, teams often spend weeks on 50-page documents that are chock full of standard questions. But by partnering with workflow automation platform Catalytic, Cheprasov's team employed AI to fill out standard information on every RFP automatically. Subject matter experts then look over the answers and tweak them where necessary.

That process condensed the time it takes to fill out an RFP from weeks to several minutes, Cheprasov said.

Dentsu also uses Catalytic to automate campaign reporting so that agencies can deliver insights to clients quicker and more frequently. The platform automates tedious work, such as transferring and validating data files and uploading them into billing systems, thereby reducing manual effort by between 65% and 95%.

"Data collection, processing and reformatting should be automated, because it's a horrible use of people's time," said Sean Chou, CEO of Catalytic.

In late 2017, Dentsu first began rolling out its strategy in Japan, where it identified 900 processes that were ripe for automation. The system is now also in place in the United States and Brazil, and markets across Europe and other parts of Asia are starting to get involved.

Today, Dentsu is exploring how to use AI to build automated processes for agency workflows that haven't been documented before. Using computer vision and natural language processing, Cheprasov's team can analyze keystrokes to create process maps that it can later automate.

"It's a good baseline for what people do, how they do it and how it should be redesigned," he said.

Dentsu's long-term goal is to arm all of its employees with a virtual assistant, accessible through a conversational interface, that can carry out manual tasks and tap into a central brain, where all of the agency's processes live. To do that, Dentsu will train staff to use low-code or no-code systems, so they can engineer assistants and document new processes on their own.

This could help automate between 30% and 60% of what Dentsu employees currently spend their time on.

Stats like that can be scary for agency employees, but Cheprasov's goal is not to do away with jobs.

Mind-numbing tasks are generally spread across roles, rather than comprising a single person's entire job, and a lot of this grunt work has already been sent offshore in any case.

"The mission is to elevate human potential," Cheprasov said, "not to eliminate it."

This robot uses color cameras and AI to grab transparent objects – The Next Web

Robots have got pretty good at picking up objects. But give them something shiny or clear, and the poor droids will likely lose their grip. Not ideal if you want a kitchen robot that can slice you a piece of pie.

Their confusion often stems from their depth camera systems. These cameras shine infrared light on an object to detect its shape, which works pretty well on opaque items. But put them in front of a transparent object, and the light will go straight through and scatter off reflective surfaces, making it tricky to calculate the item's shape.

Researchers from Carnegie Mellon University have discovered a pretty simple solution: adding consumer color cameras to the mix. Their system combines the cameras with machine learning algorithms to recognize shapes based on their colors.

The team trained the system on a combination of depth camera images of opaque objects and color images of the same items. This allowed it to infer different 3D shapes from the images and the best spots to grip.
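
One plausible reading of that training setup, sketched below in Python with PyTorch, is a network that sees only the color image and is supervised by the depth camera's view of an opaque copy of the same item. The model, shapes, and random stand-in data here are assumptions for illustration, not the CMU system.

```python
import torch
from torch import nn

# Hypothetical model: predict a depth map from an RGB image, so that
# transparent objects (which confuse infrared depth cameras) can still
# be given a usable 3D shape estimate.
class ColorToDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),  # one-channel depth map
        )

    def forward(self, rgb):
        return self.net(rgb)

model = ColorToDepthNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in data: in a real setup, `rgb` would be color photos of the
# (possibly transparent) items and `depth_target` the depth-camera image
# of the same items in opaque form.
rgb = torch.randn(8, 3, 64, 64)
depth_target = torch.randn(8, 1, 64, 64)

for step in range(100):
    pred_depth = model(rgb)
    loss = loss_fn(pred_depth, depth_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The predicted depth map could then feed a grasp planner that picks the
# best spots to grip, as the article describes.
```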

The robots can now pick up individual shiny and clear objects, even if the items are in a pile of clutter. Check it out in action in the video below:

The team admits that their system is still far from perfect. "We do sometimes miss, but for the most part it did a pretty good job, much better than any previous system for grasping transparent or reflective objects," said David Held, an assistant professor at CMU's Robotics Institute.

I'm still not sure I'd trust it with a razor-sharp kitchen knife. Unless I was really hungry and unwilling to leave the couch.

Published July 14, 2020 17:54 UTC

10 Tech Jobs That Could Grow in 5 Years Thanks to AI and Automation – Dice Insights

Over the past few years, there's been quite a bit of chatter over the jobs that artificial intelligence (A.I.) will destroy. However, it's also worth examining the human jobs that A.I. will help grow, many of them in tech.

The World Economic Forum, which regularly analyzes the potential impact of A.I. on the economy and unemployment, has a new report that examines how the COVID-19 pandemic could accelerate the automation of many jobs and tasks. As part of that analysis, it also looks at the longer-term impact of A.I. on the international job market.

"Just a few decades ago, the internet created similar concerns as it grew. Despite skepticism, the technology created millions of jobs and now comprises 10% of US GDP," read the report. "Today, A.I. is poised to create even greater growth in the US and global economies. Sixty-three percent of CEOs believe A.I. will have a larger impact than the internet, according to PwC's Annual Global CEO Survey."

The World Economic Forum's report believes that A.I. and automation will power the creation of 97 million new jobs by 2025, particularly in the following categories:

None of these are surprising; all involve the analysis and wrangling of massive amounts of data and/or code. Increasingly sophisticated automation and analytics tools could make all of these technologists more effective at their jobs; for example, a machine-learning tool for cleaning data sets could make data analysts more efficient at crunching that data for actionable insights. It's a similar principle at work in marketing and cybersecurity, where A.I. could help parse out what's useful from an endless tide of data. More effective employees, in turn, are always in demand.

It seems likely that more and more companies will also want these developers and specialists to build tools and apps that heavily leverage A.I. Without A.I., many of these companies may find themselves at a competitive disadvantage vis-à-vis rivals willing to pour resources into the development of solid A.I. practices.

Meanwhile, those A.I.-driven forces could take away as many as 85 million jobs, with the following categories especially hard-hit:

"The rapid pace of technological change requires new models for training that prepare employees for an A.I.-based future," the report added. "True upskilling requires a citizen-led approach focused on applying new knowledge to develop an AI-ready mindset. Employers should view upskilling and reskilling as an investment in the future of their organization, not an expense."

Some 50 percent of employees may require some degree of reskilling over the next five years. And that's not just technical skills; mastering soft skills such as communication and empathy can give you an advantage in a crowded job market, especially if you're applying for any job that requires managing teams and/or conveying information to stakeholders throughout an organization.

Over the next few years, it's likely that a broad array of tech jobs will ask for some kind of A.I. and/or machine-learning knowledge; this isn't something restricted to data analysts and other specialists. According to Burning Glass, which collects and analyzes millions of job postings from across the country, here are the percentages of popular technology jobs that ask for machine learning skills:

If you're new to A.I. and machine learning, there are a variety of crash courses and training videos that can quickly bring you up to speed on the fundamental principles. There are also a growing number of certifications in TensorFlow, AWS machine learning, and other core A.I. platforms.

The AI-Art Boom – Barron’s

Michael Tyka wanted to get something out of the way. "Is it art?" Tyka, an artist and software engineer at Google, asked the audience at Christie's Art + Tech Summit in New York in June. The event's theme was "The AI Revolution," and Tyka was referring to artwork created using artificial intelligence.

The question, of course, was rhetorical. He flashed an image of a urinal on two large screens at either side of the stage: Marcel Duchamp's famous and controversial sculpture, Fountain. The audience laughed. "Obviously, it can be," he said.

There was otherwise little debate about the artistic merit of AI art at the summit, which attracted players from across the tech, art, and collecting worlds. The bigger questions instead focused on just how much this new form was poised to disrupt the industry.

The location for the discussion was fitting: In October 2018, Christie's New York sold an algorithm-generated print that resembled 19th century European portraits, called Edmond de Belamy, from La Famille de Belamy, for the staggering sum of $432,500, nearly 45 times its high estimate. The print, by the French collective Obvious, had never been exhibited or sold before coming to auction, and its sale stunned the art world.

But despite the buzz, many in the art community are wrestling with several unanswered questions. For example: When artwork is created by an algorithm, who is the artist: the programmer or the computer? Because many works of AI art are digital, how do you value a creation that's designed to live natively on the internet and be widely shared? And where, exactly, is the market for this new kind of work headed? There are few clear answers.

The de Belamy sale may have been the splashiest AI art-related event of the past year, but it wasn't the only one. In March, Sotheby's sold an AI video installation by the German artist Mario Klingemann, Memories of Passersby I, for $51,012. Last spring, HG Contemporary gallery in New York's Chelsea neighborhood hosted what it described as the first solo gallery exhibit devoted to an AI artist, with the show Faceless Portraits Transcending Time, a collaboration between an AI and its creator, Ahmed Elgammal, a computer science professor at Rutgers University.

Prominent art institutions and collections around the world are paying attention. "If we look at the larger landscape of what's happening in the art world, and not just in sales, there's a ton of momentum as well as institutional support for what's happening," says Marisa Kayyem, program director of continuing education at Christie's Education. "Collectors are growing more accustomed to it."

"Just like photography never went away, I'm pretty sure AI will establish itself as a new media format."

Many people working in the field recoil from the term "AI art," finding it both misleading and too specific. Like other programmer-cum-artists, Klingemann, whose work was sold by Sotheby's, prefers the term "generative art," which includes all works created using algorithms. Generative art's origins date back to the late 1950s.

"AI art is really a term the press came up with in the last three to five years," says Jason Bailey, founder of the digital-art blog Artnome, who believes the term conjures up the false impression of robots creating art. "Most of the artists I talk to don't like to be called AI artists. But it's become shorthand, whether people like it or not, for the work that's being done."

Although the de Belamy portrait is the best-known work of AI art, it's a bit of a red herring for those looking to understand the medium. The portrait was created using generative adversarial networks, or GANs. GANs use a sample set of images of art to deduce patterns, and then use that knowledge to replicate what they've seen, cross-referenced against the originals, creating a stream of new images.
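
In outline, a GAN pairs two networks: a generator that produces candidate images and a discriminator that tries to tell them apart from the sample set, each improving against the other. The sketch below is a generic toy example in Python with PyTorch, using flattened images of an assumed size and random stand-in data, not the algorithm behind Edmond de Belamy.

```python
import torch
from torch import nn

IMG_DIM, NOISE_DIM = 28 * 28, 64   # assumed sizes for illustration

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_images = torch.randn(32, IMG_DIM)   # stand-in for the training sample set

for step in range(200):
    # 1. Train the discriminator to score real images high and fakes low.
    noise = torch.randn(32, NOISE_DIM)
    fake_images = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_images), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_images), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the discriminator.
    noise = torch.randn(32, NOISE_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, every new noise vector yields a new image, which is why
# the stream of possible outputs is effectively endless.
```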

The de Belamy sale came with a dose of controversy: Obvious didn't write the algorithm for it; the collective borrowed it from a young American programmer/artist named Robbie Barrat, who received nothing from the sale of the work. Obvious simply chose the image, printed it, put it in a frame, and signed it with Barrat's algorithm.

In other words, de Belamy was sold as a single piece of art, even though the number of images the AI could produce was infinite. But many, if not most, works of AI art aren't produced as a single, physical object. They are videos, animations, and everything digital and algorithmic in between, works designed to live online and to be shared.

This presents a tricky problem: In an industry that has always created value through scarcity, how do you value a work of art that is inherently nonscarce?

"There's a big change coming, and it's one of these tectonic shifts," says Kelani Nichole, founder of Transfer gallery in Los Angeles, which focuses on artists who make computer-based artworks. "I think that it's about value, and I think we're going to be moving away from a scarcity market that's purely a financial instrument."

One answer to the ownership quandary may be blockchain, which can be used to create a token that denotes a digital work's authenticity. But Nichole says that might be beside the point to a new generation of younger investors who think differently about art and collecting. People who came of age, and became wealthy, in the digital age have different ideas of material scarcity, transparency, and ownership, she says. The experience of a work of art may be more important than a physical object. "The way they live is as digital nomads. They don't possess objects in the same way. It's a whole new generation of values, which is not about material scarcity," Nichole says.

Claire Marmion, founder and CEO of Haven Art Group, a fine-art-collection management company based in New York, says collectors are still trying to figure out where the market for AI art is heading, and that it may not be the disruptive force some think it will be. Or, at least, the industry will adapt to it.

"The art world has a long tradition of artists bringing in new things and changing the status quo," Marmion says. "In terms of valuation, there's a small data set. I don't know about the accuracy of valuation put on at the moment. It's very speculative. Collectors are interested in it, but I'm not sure a lot of collectors have embraced it."

Klingemann believes the current buzz will eventually die down, but that AI art isn't going anywhere. Instead, he thinks it will one day be viewed as simply another tool of the artist.

"Just like photography never went away, or making movies doesn't, I'm pretty sure it will establish itself as a new media format," he says. "Right now, of course, it's all this mystery about AI, but I expect this to become really just a normal thing, where people will focus on what artists are actually saying with their art."

IBM and Verizon Business collaborate to merge AI computing with 5G networks for the enterprise – TechRepublic

A joint effort to help companies harness edge computing and a low latency network with AI capabilities for real-time insights.

Image: iStockphoto/LHG

By combining IBM's proven success in AI, with its AI data analytics tool Watson serving a host of industries, and Verizon's long-standing role in providing wireless, as well as its recent moves to accelerate the delivery of 5G, the "Fourth Industrial Revolution" is here, according to a statement released Thursday. (The Fourth Industrial Revolution, according to TechRepublic's interview with Murat Sönmez, director of the World Economic Forum, is "simultaneous development across artificial intelligence, drones, autonomous vehicles, gene editing, new materials, and 3D printing.")

The joint effort is intended to let industrial enterprises harness edge computing (which many companies still struggle with implementing, largely due to low network speeds) and a low latency network with AI capabilities to help deliver real-time insights to companies, per the announcement. The speed of 5G has many benefits for the enterprise, including industrial automation, enhanced AI deployment, and improved use of IoT. (In May, Verizon announced the launch of a new lab to help spur 5G development, as TechRepublic previously reported.)

As industries race to pull actionable data to remain efficient and productive, the collaboration is positioned to deliver "mobile asset tracking and management solutions to help enterprises improve operations, optimize production quality, and help clients enhance worker safety," according to the release.

The partnership will harness Verizon's 5G Ultra Wideband network, which rolled out in 30 cities across the US in 2019, its Multi-access Edge Computing, its ThingSpace IoT Platform and Critical Asset Sensor solution (CAS), together with IBM's Maximo Monitor with IBM Watson and advanced analytics, the release states, which can help enterprises highlight and address system issues and monitor asset health.

Companies are expected to gain real-time cognitive automation as a result of this partnership, which may include locating multiple devices at an industrial location. The 5G network is predicted to help organizations manage multiple devices in real time, which could have implications in robotics, video analytics, and plant automation, for instance, according to the release.

"The industrial sector is undergoing unprecedented transformation as companies begin to return to full-scale operations, aided by new technology to help reduce costs and increase productivity," said Bob Lord, senior vice president, cognitive applications, blockchain and ecosystems, IBM, in the release. "Through this collaboration, we plan to build upon our longstanding relationship with Verizon to help industrial enterprises capitalize on joint solutions that are designed to be multicloud ready, secured and scalable, from the data center all the way out to the enterprise edge."

The partners said they planned to team up on "worker safety, predictive maintenance, product quality and production automation," the release stated.

"This collaboration is all about enabling the future of industry in the Fourth Industrial Revolution," said Tami Erwin, CEO, Verizon Business, in the release. "Combining the high speed and low latency of Verizon's 5G UWB Network and MEC capabilities with IBM's expertise in enterprise-grade AI and production automation can provide industrial innovation on a massive scale and can help companies increase automation, minimize waste, lower costs, and offer their own clients a better response time and customer experience."

2017 CMO Focus: What’s Next from AI? Intelligent Insights – MarTech Advisor

The burdens on today's CMOs are increasing: they're taking on larger roles and budgets while needing insight into an increasingly complex customer journey. Leah Pope, CMO, Datorama, discusses how brands can leverage AI in 2017 to fuel success.

As CMOs today, we're operating with more responsibility than ever. Now we must simultaneously understand our customers, keep up with their every move across channels in an increasingly complex customer journey, and take on bigger budgets while adapting to a bigger seat at the executive table, as sales and service will soon roll up into our department. In many respects this has thrust marketers into becoming not only data-literate but also data-fluent practitioners.

This means we must be capable of defining the KPIs. We need to ensure that all marketing metrics, be they social, brand health, or email, align with and support our overarching business goals.

Ultimately, we are responsible for understanding how to move marketing performance and business impact in a predictable manner. Considering this is based on something that, traditionally, has been unpredictable, it's a tall task. So, how do you turn a practice more akin to trial and error into a repeatable, scientific process as you attempt to tease out marketing-related insights?

If we want to make our performance predictable, it stands to reason that the insight generation process that moves the marketing needle needs to become more predictable as well.

Which brings us to a new topic that should be on every CMO's radar in 2017: how can marketers leverage emerging technology, e.g., artificial intelligence (AI), more specifically machine learning, to create predictable intelligent insights that will serve as guidance to make an impact on business- and marketing-related KPIs?

In the last few years, AI and marketing-based analytic data models have made it possible to do things that marketers have talked about for decades. This conversation has converted into reality now thanks to cheap computing power, consumers that are more connected than ever, and advances in AI.

In fact, according to Gartner's 2016 Priority Matrix for the Digital Marketing and Advertising Hype Cycle, predictive analytics have a very high benefit for marketers, and an anticipated mainstream adoption of 2-5 years.

That means we can drive better marketing performance and understand ROI properly for the first time.

Today machine learning can be applied to automatically connect all of your marketing data across channels, systems and partners into a single source of truth to measure your departmental performance across all of your data. Here's the best part: these systems can adjust at a moment's notice to take in the data from a new market, a new product launch or a new presence on Snapchat, or your latest programmatic video experiments.

Compared to the days of cumbersome, error-prone Excel sheets and constant data warehouse projects, this is a welcome paradigm shift. It's actually made the terrifying task of connecting, organizing and collecting marketing data, dare I say, easy.

So what's next for AI? Here's a pretty big hint: intelligent insights. Intelligent insights are another application of AI that works on behalf of marketers to elevate information that supports better decision making.

This introduces a new way to collaborate with your marketing technology. You provide an agenda composed of the KPIs you want to watch on an ongoing basis, and off goes your assistant into your data. Now, millions of data points get analyzed on a continual basis to tell you what's driving KPI performance, in a prioritized order of impact.

You might want to keep an eye on channel or campaign marketing ROI, campaign engagement or conversion rate, or campaign CPM or CTR.

Rather than rely on a labor-intensive, manual effort that's sure to miss critical findings in your ever-increasing data, imagine a marketing world where your KPI performance could be better understood via intelligent insights that help you learn what's working and what is not. That way you know exactly which campaigns are the root drivers of your marketing ROI and which campaigns are pulling it down. And, you can get granular, for example: the specific targeting method responsible for the great campaign engagement rate your team just engineered; let's keep doing that. While this is merely one idea, there is a sea of opportunity for today's marketer that's provided via this technological advancement.
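
As a purely illustrative sketch, the prioritization described here can be thought of as ranking each campaign by its contribution to the overall change in a KPI. The campaign names, figures, and weighting rule below are invented for this example; they are not Datorama's product or method.

```python
# Hypothetical campaign-level data: ROI for two periods, plus spend.
campaigns = [
    {"name": "Search - Brand",     "prev_roi": 4.1, "curr_roi": 4.6, "spend": 120_000},
    {"name": "Programmatic Video", "prev_roi": 2.8, "curr_roi": 1.9, "spend": 300_000},
    {"name": "Social - Lookalike", "prev_roi": 3.2, "curr_roi": 3.9, "spend": 80_000},
]

def impact(campaign):
    # Weight the ROI change by spend so large campaigns dominate the ranking.
    return (campaign["curr_roi"] - campaign["prev_roi"]) * campaign["spend"]

ranked = sorted(campaigns, key=impact)   # most negative impact first

for c in ranked:
    direction = "pulling ROI down" if impact(c) < 0 else "driving ROI up"
    print(f'{c["name"]}: {direction} (impact {impact(c):+,.0f})')
```

A production system would compute something like this continuously over far more dimensions (targeting method, creative, market), but the underlying idea of sorting drivers by their weighted contribution is the same.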

As a CMO always on the lookout for new ways of measuring performance and improving our marketing initiatives, I can't wait for what's next.

Global Artificial Intelligence in Supply Chain Market (2020 to 2027) – by Component, Technology, Application and by End User – ResearchAndMarkets.com

DUBLIN--(BUSINESS WIRE)--Apr 9, 2020--

The "Artificial Intelligence in Supply Chain Market by Component (Platforms, Solutions) Technology (Machine Learning, Computer Vision, Natural Language Processing), Application (Warehouse, Fleet, Inventory Management), and by End User - Global Forecast to 2027" report has been added to ResearchAndMarkets.com's offering.

This report carries out an impact analysis of the key industry drivers, restraints, challenges, and opportunities. Adoption of artificial intelligence in the supply chain allows industries to track their operations, enhance supply chain management productivity, augment business strategies, and engage with customers in the digital world.

The growth of artificial intelligence in supply chain market is driven by several factors such as raising awareness of artificial intelligence and big data & analytics and widening implementation of computer vision in both autonomous & semi-autonomous applications. Moreover, the factors such as consistent technological advancements in the supply chain industry, rising demand for AI-based business automation solutions, and evolving supply chain automation are also contributing to the market growth.

The overall AI in supply chain market is segmented by component (hardware, software, and services), by technology (machine learning, computer vision, natural language processing, cognitive computing, and context-aware computing), by application (supply chain planning, warehouse management, fleet management, virtual assistant, risk management, inventory management, and planning & logistics), and by end-user (manufacturing, food and beverages, healthcare, automotive, aerospace, retail, and consumer-packaged goods), and geography.

Companies Mentioned

Key Topics Covered:

1. Introduction

2. Research Methodology

3. Executive Summary

3.1. Overview

3.2. Market Analysis, by Component

3.3. Market Analysis, by Technology

3.4. Market Analysis, by Application

3.5. Market Analysis, by End User

3.6. Market Analysis, by Geography

3.7. Competitive Analysis

4. Market Insights

4.1. Introduction

4.2. Market Dynamics

4.2.1. Drivers

4.2.1.1. Rising Awareness of Artificial Intelligence and Big Data & Analytics

4.2.1.2. Widening Implementation of Computer Vision in both Autonomous & Semi-Autonomous Applications

4.2.2. Restraints

4.2.2.1. High Procurement and Operating Cost

4.2.2.2. Lack of Infrastructure

4.2.3. Opportunities

4.2.3.1. Growing Demand for AI-Based Business Automation Solutions

4.2.3.2. Evolving Supply Chain Automation

4.2.4. Challenges

4.2.4.1. Data Integration from Multiple Resources

4.2.4.2. Concerns Over Data Privacy

4.2.5. Trends

4.2.5.1. Rising Adoption of 5G Technology

4.2.5.2. Rising Demand for Cloud-Based Supply Chain Solutions

5. Artificial Intelligence in Supply Chain Market, by Component

5.1. Introduction

5.2. Software

5.2.1. AI Platforms

5.2.2. AI Solutions

5.3. Services

5.3.1. Deployment & Integration

5.3.2. Support & Maintenance

5.4. Hardware

5.4.1. Networking

5.4.2. Memory

5.4.3. Processors

6. Artificial Intelligence in Supply Chain Market, by Technology

6.1. Introduction

6.2. Machine Learning

6.3. Natural Language Processing (NLP)

6.4. Computer Vision

6.5. Context-Aware Computing

7. Artificial Intelligence in Supply Chain Market, by Application

7.1. Introduction

7.2. Supply Chain Planning

7.3. Virtual Assistant

7.4. Risk Management

7.5. Inventory Management

7.6. Warehouse Management

7.7. Fleet Management

7.8. Planning & Logistics

8. Artificial Intelligence in Supply Chain Market, by End User

8.1. Introduction

8.2. Retail Sector

8.3. Manufacturing Sector

8.4. Automotive Sector

8.5. Aerospace Sector

8.6. Food & Beverage Sector

8.7. Consumer Packaged Goods Sector

8.8. Healthcare Sector

9. Global Artificial Intelligence in Supply Chain Market, by Geography

9.1. Introduction

9.2. North America

9.2.1. U.S.

9.2.2. Canada

9.3. Europe

9.3.1. Germany

9.3.2. U.K.

9.3.3. France

9.3.4. Spain

9.3.5. Italy

9.3.6. Rest of Europe

9.4. Asia-Pacific

9.4.1. China

9.4.2. Japan

9.4.3. India

9.4.4. Rest of Asia-Pacific

9.5. Latin America

9.6. Middle East & Africa

10. Competitive Landscape

10.1. Key Growth Strategies

10.2. Competitive Developments

10.2.1. New Product Launches and Upgradations

10.2.2. Mergers and Acquisitions

10.2.3. Partnerships, Agreements, & Collaborations

10.2.4. Expansions

10.3. Market Share Analysis

10.4. Competitive Benchmarking

11. Company Profiles (Business Overview, Financial Overview, Product Portfolio, Strategic Developments)

Eyenuk Raises $26M for AI-Powered Eye Screening & Predictive Biomarkers

What You Should Know:

Eyenuk, Inc., a global artificial intelligence (AI) digital health company and the leader in real-world applications for AI Eye Screening and AI Predictive Biomarkers, today announced it has secured $26 million in a Series A financing round, bringing the Company's total funding to over $43 million.

The capital raise was led by AXA IM Alts and was joined by new and existing investors including T&W Medical A/S, A&C Foelsgaard Alternativer ApS, Kendall Capital Partners, and KOFA Healthcare.

Accelerating Global Access to AI-Powered Eye-Screening Technology

Eyenuk, Inc. is a global artificial intelligence (AI) digital health company and the leader in real-world AI Eye Screening for autonomous disease detection and AI Predictive Biomarkers for risk assessment and disease surveillance. Eyenuk is on a mission to screen every eye in the world to ensure timely diagnosis of life- and vision-threatening diseases, including diabetic retinopathy, glaucoma, age-related macular degeneration, stroke risk, cardiovascular risk, and Alzheimer's disease.

Eyenuk will use the capital to expand its AI product platform with additional disease indications and advanced care coordination and to accelerate the platforms global commercialization and adoption.

"We are thrilled that AXA IM Alts, T&W Medical A/S, A&C Foelsgaard Alternativer ApS, Kendall Capital Partners, and our other new and existing investors have joined us in furthering our mission of using AI to screen every eye in the world to help eliminate preventable vision loss and transition the world to predictive and preventative healthcare," said Eyenuk CEO and Founder Kaushal Solanki, Ph.D. "Our Series A fundraise validates the strong market performance of the EyeArt system and provides us with critical resources as we expand our platform capabilities this year to include solutions for detecting additional diseases."

Today's announcement follows the Sept. 29, 2022 publication of a major peer-reviewed study in Ophthalmology Science, a publication of the American Academy of Ophthalmology. The study found that the EyeArt AI system is far more sensitive in identifying referable diabetic retinopathy than dilated eye exams by ophthalmologists and retina specialists.

Eyenuk is leading the way in harnessing the power of AI to eliminate preventable blindness globally, through its versatile digital health platform that enables automated AI diagnosis and coordination of care. Eyenuk's flagship EyeArt AI system has been more broadly adopted worldwide than any other autonomous AI technology for ophthalmology. Since its FDA clearance in 2020, the EyeArt system has been used in over 200 locations in 18 countries, including 14 U.S. states, to screen over 60,000 patients and counting. It is the first and only technology to be cleared by the FDA for autonomous detection of both referable and vision-threatening diabetic retinopathy without any eye care specialist involvement.

The EyeArt system is reimbursed by Medicare in the US, and has regulatory approvals globally, including CE Marking, Health Canada license, and approvals in multiple markets in Latin America and the Middle East.

Global Artificial Intelligence Market Is Projected to Reach $390.9 Billion by 2025: Report – Crowdfund Insider

The global artificial intelligence (AI) market size is projected to hit $390.9 billion by 2025. The market is expected to achieve a compound annual growth rate of 46.2% from 2019 to 2025.
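
As a quick check of the arithmetic, under the standard compound-growth formula a 46.2% CAGR over the six years from 2019 to 2025 implies a 2019 base of roughly $40 billion. That base figure is inferred here, not quoted in the report:

```python
# Implied 2019 market size if $390.9B in 2025 results from a 46.2% CAGR.
end_value = 390.9        # USD billions, 2025
cagr = 0.462
years = 2025 - 2019      # 6 compounding periods

start_value = end_value / (1 + cagr) ** years
print(f"Implied 2019 market size: ${start_value:.1f}B")        # about $40.0B

# Equivalently, growing that base at 46.2% per year reaches ~$390.9B by 2025.
print(f"Check: ${start_value * (1 + cagr) ** years:.1f}B in 2025")
```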

AI is a major technological innovation along with Big Data advancements, machine learning (ML), deep learning, and blockchain or distributed ledger technology (DLT).

These technologies are being integrated across a wide range of high-performance applications. Major developments in digital image and voice recognition software are driving the growth of the regional market, according to a release published by Research and Markets.

As noted in the release:

The two major factors fueling market growth are emerging AI technologies and growth in big data espousal. Rising prominence of AI is enabling new players to venture into the market by offering niche application-specific solutions.

Companies across the globe are consolidating their operations in order to remain competitive. In January 2017, Microsoft acquired Maluuba in order to advance its AI and deep learning development efforts. Established industry participants are working on hardware and software solutions that incorporate these new technologies.

North America, by far, held the lion's share in the world's AI market in 2018 due to substantial investments from government agencies, established presence of industry participants, and unparalleled technical expertise. The Asia Pacific (APAC) region, however, is expected to overtake N. America to emerge as the world's leading regional market by 2025, recording the highest CAGR, the release noted.

"This may be due to significant improvements in information storage capacity, high computing power, and parallel processing, all of which have contributed to the swift uptake of artificial intelligence technology in end-use industries such as automotive and healthcare," the release stated.

Cities worldwide band together to push for ethical AI – ComputerWeekly.com

From traffic control and waste management to biometric surveillance systems and predictive policing models, the potential uses of artificial intelligence (AI) in cities are incredibly diverse, and could impact every aspect of urban life.

In response to the increasing deployment of AI in cities, and the general lack of authority that municipal governments have to challenge central government decisions or legislate themselves, London, Barcelona and Amsterdam launched the Global Observatory on Urban AI in June 2021.

The initiative aims to monitor AI deployment trends and promote its ethical use, and is part of the wider Cities Coalition for Digital Rights (CC4DR), which was set up in November 2018 by Amsterdam, Barcelona and New York to promote and defend digital rights. It now has more than 50 cities participating worldwide.

Apart from city participants, the Observatory is also being run in partnership with UN-Habitat, a United Nations initiative to improve the quality of life in urban areas, and research group CIDOB-Barcelona Centre for International Affairs.

According to Michael Donaldson, Barcelona's chief technology officer (CTO), the Observatory is designed to be "a space of collaboration and exchange of knowledge" where cities can share their experiences, both positive and negative, in developing and deploying AI systems.

He said that by sharing best practice in particular, cities will be able to avoid repeating previous mistakes when deploying AI systems.

"We know the benefits AI can give us in terms of having a more proactive administration and better public digital services, but at the same time we need to introduce that ethical dimension around the use of these technologies," said Donaldson, adding that Barcelona is currently undertaking public consultations to define exactly what is and is not ethical when it comes to AI, work that will be shared with the Observatory when complete.

London's chief digital officer (CDO), Theo Blackwell, said his team is taking a similar approach by developing the emerging technology charter for London, which will also be fed back into the Observatory so that "we're not doing this in isolation and we're learning from each other."

Blackwell said that as CDO for London, the opportunity to learn from, and be in active dialogue with, his peers in other cities is "the most valuable information that I get" because it is informed by on-the-ground, practical experience of deploying AI in an urban context, rather than the more legislative focus of think-tanks and government committees.

"We don't have any powers to legislate here, but we do have powers to influence," he said. "Cities are often at the coalface, with our staff directly talking to these technology firms, and that's some way away from the people who make the laws. We can come to the party with that lived experience, and try and shape them in a way that guarantees people safeguards on the one hand, but also promotes innovation in our economy."

Guillem Ramírez, policy adviser on city diplomacy and digital rights at Barcelona City Council, told Computer Weekly that this approach will help cities collaborate internationally to see what "ethical" means in different cultural contexts, and to build a common understanding of what it means to develop AI ethically.

"The first thing that we're doing is identifying the principles of what should be considered ethical when it comes to AI," said Ramírez, adding that the Observatory hopes to have a report finalised on this in September.

"We've been discussing with the cities that are part of the Coalition, and we've identified some of these principles, which includes non-discrimination and fairness, but there's also cyber security, transparency, accountability, and so on.

"Then what we're doing is to operationalise them, not in terms of super concrete indicators, but in terms of guiding questions, because at this point cities are not even developing complex AI systems, so the idea is to lay the ground for scaling up in an ethical way."

Donaldson and Blackwell both stressed that many of the cities taking part in the Observatory are at very different stages of their AI journey, and that anything produced by the Observatory is meant to help guide them along a more ethical path.

At the moment, many of the AI-based technologies and tools being used in urban centres are not the products of the cities own development efforts, but are instead developed in the private sector before being sold or otherwise transferred into the public sector.

For example, the facial-recognition system used in the UK by both the Metropolitan Police Service (MPS) and South Wales Police (SWP), called NeoFace Live, was developed by Japan's NEC Corporation.

However, in August 2020, the Court of Appeal found SWP's use of the technology unlawful, a decision that was partly based on the fact that the force did not comply with its public sector equality duty to consider how its policies and practices could be discriminatory.

The court ruling said: "For reasons of commercial confidentiality, the manufacturer is not prepared to divulge the details so that it could be tested. That may be understandable but, in our view, it does not enable a public authority to discharge its own, non-delegable, duty under section 149."

Asked how cities can navigate the growing closeness of these public-private collaborations, Barcelona City Council's Ramírez said that while cities will need to strike a balance between sensitive company information and the public interest, the city will need to understand how the code is working, and have procedural transparency to understand how decisions are made by the algorithms.

He added: "The functioning of these systems needs to be able to be explained, so that citizens can understand it."

Donaldson said cities will need to develop a set of checks and balances to figure out how to safely navigate public-private AI partnerships in ways that also benefit citizens.

"We might not really know what's going on because your technology is far beyond our knowledge, but what we know is how to deliver public services, how to guarantee the rights of our citizens, and if your technology is going against that, we're going to tell you to stop," he said.

Responding to the same question, Blackwell said the application of AI in cities will happen in many different settings but that, from the examples he has seen, the most useful applications are based on very narrow use cases.

"I think the challenge with city authorities is actually that these technologies can be incredibly useful in narrow use cases," he said. "Sometimes we might be approached by big companies that say there is a wide range of things this tech can do, and I think the art here is to basically say no, we just need these things, and it's not something that builds towards an all-singing, all-dancing universal system, which I think is the kind of default position for many large technology companies."

Blackwell said London plans to let organisations publish data protection impact assessments in the London Data Store so that they can become less of a risk management tool for information governance professionals, and more of an accountability tool that says, 'this is how I'm dealing with the questions that were asked about this technology' – that's a key provision in the emerging tech charter.


What is AI? Everything you need to know about Artificial …

What is artificial intelligence (AI)?

It depends who you ask.

Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a machine that would have previously been considered to require human intelligence.

That's obviously a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.


Modern definitions of what it means to create intelligence are more specific. François Chollet, an AI researcher at Google and creator of the machine-learning software library Keras, has said intelligence is tied to a system's ability to adapt and improvise in a new environment, to generalise its knowledge and apply it to unfamiliar scenarios.

"Intelligence is the efficiency with which you acquire new skills at tasks you didn't previously prepare for," he said.

"Intelligence is not skill itself, it's not what you can do, it's how well and how efficiently you can learn new things."

It's a definition under which modern AI-powered systems, such as virtual assistants, would be characterised as having demonstrated 'narrow AI': the ability to generalise their training when carrying out a limited set of tasks, such as speech recognition or computer vision.

Typically, AI systems demonstrate at least some of the following behaviours associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

AI is ubiquitous today: it is used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon's Alexa and Apple's Siri, to recognise who and what is in a photo, to spot spam, and to detect credit card fraud.

At a very high level, artificial intelligence can be split into two broad types: narrow AI and general AI.

As mentioned above, narrow AI is what we see all around us in computers today: intelligent systems that have been taught or have learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, or in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do defined tasks, which is why they are called narrow AI.

There are a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars, responding to simple customer-service queries, coordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays, flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices, generating a 3D model of the world from satellite imagery, the list goes on and on.

New applications of these learning systems are emerging all the time. Graphics card designer Nvidia recently revealed an AI-based system called Maxine, which allows people to make good-quality video calls almost regardless of the speed of their internet connection. The system reduces the bandwidth needed for such calls by a factor of 10 by not transmitting the full video stream over the internet, instead animating a small number of static images of the caller in a manner designed to reproduce the caller's facial expressions and movements in real time and to be indistinguishable from the video.

However, as much untapped potential as these systems have, ambitions for the technology sometimes outstrip reality. A case in point is self-driving cars, which are themselves underpinned by AI-powered systems such as computer vision. Electric car company Tesla is lagging some way behind CEO Elon Musk's original timeline for the car's Autopilot system being upgraded to "full self-driving" from the system's more limited assisted-driving capabilities, with the Full Self-Driving option only recently rolled out to a select group of expert drivers as part of a beta testing program.

General AI is very different, and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or reasoning about a wide variety of topics based on its accumulated experience. This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which doesn't exist today and AI experts are fiercely divided over how soon it will become a reality.


A survey conducted among four groups of experts in 2012/13 by AI researcher Vincent C. Müller and philosopher Nick Bostrom reported a 50% chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90% by 2075. The group went even further, predicting that so-called 'superintelligence' – which Bostrom defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" – was expected some 30 years after the achievement of AGI.

However, recent assessments by AI experts are more cautious. Pioneers in the field of modern AI research such as Geoffrey Hinton, Demis Hassabis and Yann LeCun say society is nowhere near developing AGI. Given the skepticism of leading lights in the field of modern AI and the very different nature of modern narrow AI systems to AGI, there is perhaps little basis to fears that society will be disrupted by a general artificial intelligence in the near future.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain, and believe that AGI is still centuries away.

While modern narrow AI may be limited to performing specific tasks, within their specialisms these systems are sometimes capable of superhuman performance, in some instances even demonstrating superior creativity, a trait often held up as intrinsically human.

There have been too many breakthroughs to put together a definitive list, but some highlights include: in 2009 Google showed it was possible for its self-driving Toyota Prius to complete more than 10 journeys of 100 miles each, setting society on a path towards driverless vehicles.

IBM Watson competing on Jeopardy! on January 14, 2011.

In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show Jeopardy!, beating two of the best players the show had ever produced. To win the show, Watson used natural language processing and analytics on vast repositories of data that it processed to answer human-posed questions, often in a fraction of a second.

In 2012, another breakthrough heralded AI's potential to tackle a multitude of new tasks previously thought of as too complex for any machine. That year, the AlexNet system decisively triumphed in the ImageNet Large Scale Visual Recognition Challenge. AlexNet's accuracy was such that it halved the error rate compared to rival systems in the image-recognition contest.

AlexNet's performance demonstrated the power of learning systems based on neural networks, a model for machine learning that had existed for decades but that was finally realising its potential due to refinements to architecture and leaps in parallel processing power made possible by Moore's Law. The prowess of machine-learning systems at carrying out computer vision also hit the headlines that year, with Google training a system to recognise an internet favorite: pictures of cats.

The next demonstration of the efficacy of machine-learning systems that caught the public's attention was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 possible moves per turn, compared with about 20 in chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was trained how to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played "completely random" games against itself, and then learnt from the results. Google DeepMind CEO Demis Hassabis has also unveiled a new version of AlphaGo Zero that has mastered the games of chess and shogi.

And AI continues to sprint past new milestones: a system trained by OpenAI has defeated the world's top players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, shortly followed by Facebook training agents to negotiate and even lie.

2020 was the year in which an AI system seemingly gained the ability to write and talk like a human, about almost any topic you could think of.

The system in question, known as Generative Pre-trained Transformer 3 or GPT-3 for short, is a neural network trained on billions of English language articles available on the open web.

Soon after it was made available for testing by the not-for-profit organisation OpenAI, the internet was abuzz with GPT-3's ability to generate articles on almost any topic that was fed to it, articles that at first glance were often hard to distinguish from those written by a human. Similarly impressive results followed in other areas, with its ability to convincingly answer questions on a broad range of topics and even pass for a novice JavaScript coder.

But while many GPT-3 generated articles had an air of verisimilitude, further testing found the sentences generated often didn't pass muster, offering up superficially plausible but confused statements, as well as sometimes outright nonsense.

There's still considerable interest in using the model's natural language understanding as the basis of future services and it is available to select developers to build into software via OpenAI's beta API. It will also be incorporated into future services available via Microsoft's Azure cloud platform.

Perhaps the most striking example of AI's potential came late in 2020, when the Google attention-based neural network AlphaFold 2 demonstrated a result some have called worthy of a Nobel Prize for Chemistry.

The system's ability to look at a protein's building blocks, known as amino acids, and derive that protein's 3D structure could have a profound impact on the rate at which diseases are understood and medicines are developed. In the Critical Assessment of protein Structure Prediction contest, AlphaFold 2 was able to determine the 3D structure of a protein with an accuracy rivaling crystallography, the gold standard for convincingly modelling proteins.

Unlike crystallography, which takes months to return results, AlphaFold 2 can model proteins in hours. With the 3D structure of proteins playing such an important role in human biology and disease, such a speed-up has been heralded as a landmark breakthrough for medical science, not to mention potential applications in other areas where enzymes are used in biotech.

Practically all of the achievements mentioned so far stemmed from machine learning, a subset of AI that accounts for the vast majority of achievements in the field in recent years. When people talk about AI today they are generally talking about machine learning.

Currently enjoying something of a resurgence, machine learning is, in simple terms, where a computer system learns how to perform a task rather than being programmed how to do so. This description of machine learning dates all the way back to 1959, when it was coined by Arthur Samuel, a pioneer of the field who developed one of the world's first self-learning systems, the Samuel Checkers-playing Program.

To learn, these systems are fed huge amounts of data, which they then use to learn how to carry out a specific task, such as understanding speech or captioning a photograph. The quality and size of this dataset are important for building a system able to accurately carry out its designated task. For example, if you were building a machine-learning system to predict house prices, the training data should include not just the property size but other salient factors such as the number of bedrooms or the size of the garden.
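
To make the house-price example concrete, here is a minimal sketch of that kind of model, assuming scikit-learn is available; the features and prices are invented purely for illustration, and a real system would need far more data.

```python
# Hypothetical house-price example: fit a simple regression model on a
# handful of invented properties, then predict the price of an unseen one.
from sklearn.linear_model import LinearRegression

# Each row: [floor area in square metres, bedrooms, garden area in square metres]
X_train = [
    [70, 2, 10],
    [90, 3, 25],
    [120, 4, 50],
    [60, 1, 0],
]
y_train = [180_000, 240_000, 320_000, 150_000]  # invented sale prices

model = LinearRegression().fit(X_train, y_train)

# Predict the price of a property the model has never seen.
print(model.predict([[100, 3, 30]]))
```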

Key to the success of machine learning are neural networks. These mathematical models are able to tweak internal parameters to change what they output. During training, a neural network is fed datasets that teach it what it should spit out when presented with certain data. In concrete terms, the network might be fed greyscale images of the numbers between zero and 9, alongside a string of binary digits – zeroes and ones – that indicate which number is shown in each greyscale image. The network would then be trained, adjusting its internal parameters, until it classifies the number shown in each image with a high degree of accuracy. This trained neural network could then be used to classify other greyscale images of numbers between zero and 9. Such a network was used in a seminal paper showing the application of neural networks published by Yann LeCun in 1989, and has been used by the US Postal Service to recognise handwritten zip codes.
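
As a rough illustration of the digit-classification example described above, the sketch below trains a small neural network on scikit-learn's built-in 8x8 greyscale digits dataset rather than the original postal data; the network size and settings are arbitrary choices, not a reconstruction of LeCun's 1989 system.

```python
# Train a small fully connected network to classify greyscale digits (0-9).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 greyscale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

# Training adjusts the network's internal weights until its predicted
# labels closely match the labels in the training data.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)

print("accuracy on unseen digits:", net.score(X_test, y_test))
```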

The structure and functioning of neural networks are very loosely based on the connections between neurons in the brain. Neural networks are made up of interconnected layers of algorithms that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to data as it passes between these layers. During the training of these neural networks, the weights attached to data as it passes between layers will continue to be varied until the output from the neural network is very close to what is desired, at which point the network will have 'learned' how to carry out a particular task. The desired output could be anything from correctly labelling fruit in an image to predicting when an elevator might fail based on its sensor data.

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a large number of sizeable layers that are trained using massive amounts of data. It is these deep neural networks that have fuelled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.


There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks (RNNs) are a type of neural net particularly well suited to natural language processing (NLP) – understanding the meaning of text – and speech recognition, while convolutional neural networks have their roots in image recognition and have uses as diverse as recommender systems and NLP. The design of neural networks is also evolving, with researchers refining a more effective form of deep neural network called long short-term memory (LSTM) – a type of RNN architecture used for tasks such as NLP and stock market predictions – allowing it to operate fast enough to be used in on-demand systems like Google Translate.
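
For a sense of what an LSTM-based model looks like in practice, here is a minimal sketch in Keras (the library mentioned earlier); the vocabulary size, layer sizes and two-class output are placeholder choices for illustration, not a recipe for any production system.

```python
# Define a small LSTM text classifier in Keras (training data not shown).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10_000, output_dim=64),  # token ids -> vectors
    tf.keras.layers.LSTM(64),                                     # reads the sequence step by step
    tf.keras.layers.Dense(2, activation="softmax"),               # e.g. two output classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# The model can then be trained with model.fit(padded_token_sequences, labels).
```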

The structure and training of deep neural networks.

Another area of AI research is evolutionary computation, which borrows from Darwin's theory of natural selection, and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution, and could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.
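
The loop that evolutionary computation describes – mutation, recombination and selection over generations – can be shown with a toy example; the sketch below evolves a bit string toward an arbitrary target, and neuroevolution applies the same idea to neural-network weights rather than bits.

```python
# Toy genetic algorithm: evolve a population of bit strings toward a target.
import random

TARGET = [1] * 20                                    # arbitrary goal string
def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)       # selection: best first
    parents = population[:10]
    children = []
    while len(children) < len(population):
        a, b = random.sample(parents, 2)
        cut = random.randrange(len(TARGET))
        child = a[:cut] + b[cut:]                    # crossover (recombination)
        if random.random() < 0.1:                    # occasional random mutation
            i = random.randrange(len(TARGET))
            child[i] = 1 - child[i]
        children.append(child)
    population = children

print("best fitness after evolution:", fitness(max(population, key=fitness)))
```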

Finally, there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing the machine to mimic the behaviour of a human expert in a specific domain. An example of these knowledge-based systems is an autopilot system flying a plane.
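
A bare-bones sketch of such a rule-based system is shown below; the rules and thresholds are invented for illustration and bear no relation to a real autopilot, but they show the basic idea of hand-written rules mapping inputs to decisions.

```python
# Hypothetical rule-based "expert system": hand-written rules map inputs to actions.
def autopilot_rules(altitude_ft, airspeed_kt, pitch_deg):
    actions = []
    if airspeed_kt < 120:
        actions.append("increase throttle")
    if altitude_ft < 1000 and pitch_deg < 0:
        actions.append("pitch up")
    if not actions:
        actions.append("hold course")
    return actions

# Example: low and slow with the nose pointing down.
print(autopilot_rules(altitude_ft=900, airspeed_kt=110, pitch_deg=-2))
```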

As outlined above, the biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power, during which time the use of clusters of graphics processing units (GPUs) to train machine-learning systems has become more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google, Microsoft, and Tesla, have moved to using specialised chips tailored to both running, and more recently training, machine-learning models.

An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train up models for DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine-learning models using Google's TensorFlow Research Cloud. The third generation of these chips was unveiled at Google's I/O conference in May 2018, and they have since been packaged into machine-learning powerhouses called pods that can carry out more than one hundred thousand trillion floating-point operations per second (100 petaflops). These ongoing TPU upgrades have allowed Google to improve its services built on top of machine-learning models, for instance halving the time taken to train models used in Google Translate.

As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised and unsupervised learning.

Supervised learning

A common technique for teaching AI systems is by training them using a very large number of labelled examples. These machine-learning systems are fed huge amounts of data, which has been annotated to highlight the features of interest. These might be photos labelled to indicate whether they contain a dog or written sentences that have footnotes to indicate whether the word 'bass' relates to music or a fish. Once trained, the system can then apply these labels to new data, for example to a dog in a photo that's just been uploaded.

This process of teaching a machine by example is called supervised learning and the role of labelling these examples is commonly carried out by online workers, employed through platforms like Amazon Mechanical Turk.


Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively – although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size: Google's Open Images Dataset has about nine million images, while its labelled video repository YouTube-8M links to seven million labelled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people – most of whom were recruited through Amazon Mechanical Turk – who checked, sorted, and labelled almost one billion candidate pictures.

In the long run, having access to huge labelled datasets may also prove less important than access to large amounts of compute power.

In recent years, Generative Adversarial Networks (GANs) have been used in machine-learning systems that only require a small amount of labelled data alongside a large amount of unlabelled data, which, as the name suggests, requires less manual work to prepare.

This approach could allow for the increased use of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labelled data than is necessary for training systems using supervised learning today.

Unsupervised learning

In contrast, unsupervised learning uses a different approach, where algorithms try to identify patterns in data, looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn't set up in advance to pick out specific types of data; it simply looks for data that can be grouped by similarity – for example, Google News grouping together stories on similar topics each day.
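
The fruit-weight example above can be sketched in a few lines with k-means clustering, one common unsupervised algorithm; the weights are invented for illustration, and the algorithm is given no labels, only the numbers.

```python
# Group items by similarity (weight in grams) without any labels.
from sklearn.cluster import KMeans

weights_g = [[110], [115], [120],   # roughly apple-sized
             [5], [6], [7],         # roughly grape-sized
             [1500], [1600]]        # roughly melon-sized

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(weights_g)
print(kmeans.labels_)  # cluster assignment for each item
```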

Reinforcement learning

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick. In reinforcement learning, the system attempts to maximise a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind's Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on screen.

By also looking at the score achieved in each game, the system builds a model of which action will maximise the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.
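
A stripped-down version of that trial-and-error loop can be written as tabular Q-learning; the five-state corridor environment below is invented for illustration, and a Deep Q-network replaces the table with a neural network fed with screen pixels.

```python
# Toy Q-learning: learn, by trial and error, to walk right along a corridor.
import random

N_STATES, ACTIONS = 5, [0, 1]                # actions: 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(N_STATES)]    # value estimates per state and action
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    state = 0
    while state != N_STATES - 1:             # rightmost state gives the reward
        if random.random() < epsilon:        # occasionally explore at random
            action = random.choice(ACTIONS)
        else:                                # otherwise exploit what has been learned
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the value estimate toward reward plus discounted future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print("preferred action in each state:", [Q[s].index(max(Q[s])) for s in range(N_STATES)])
```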

The approach is also used in robotics research, where reinforcement learning can help teach autonomous robots the optimal way to behave in real-world environments.

Many AI-related technologies are approaching, or have already reached, the 'peak of inflated expectations' in Gartner's Hype Cycle, with the backlash-driven 'trough of disillusionment' lying in wait.

With AI playing an increasingly major role in modern software and services, each of the major tech firms is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google, with its DeepMind AlphaFold and AlphaGo systems, that has made the biggest impact on public awareness of AI.

All of the major cloud platforms – Amazon Web Services, Microsoft Azure and Google Cloud Platform – provide access to GPU arrays for training and running machine-learning models, with Google also gearing up to let users use its Tensor Processing Units – custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google offering a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires the user to have no machine-learning expertise.

Cloud-based, machine-learning services are constantly evolving. Amazon now offers a host of AWS offerings designed to streamline the process of training up machine-learning models and recently launched Amazon SageMaker Clarify, a tool to help organizations root out biases and imbalances in training data that could lead to skewed predictions by the trained model.

For those firms that don't want to build their own machine-learning models but instead want to consume AI-powered, on-demand services, such as voice, vision, and language recognition, Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella, and having invested $2bn in buying The Weather Channel to unlock a trove of data to augment its AI services.

Internally, each of the tech giants – and others such as Facebook – uses AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam – the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple's Siri, Amazon's Alexa, the Google Assistant, and Microsoft Cortana.

The Amazon Echo Plus is a smart speaker with access to Amazon's Alexa virtual assistant built in.

Relying heavily on voice recognition and natural-language processing, as well as needing an immense corpus to draw upon to answer queries, a huge amount of tech goes into developing these assistants.

But while Apple's Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space – Google Assistant with its ability to answer a wide range of queries, and Amazon's Alexa with the massive number of 'Skills' that third-party devs have created to add to its capabilities.

Over time, these assistants are gaining abilities that make them more responsive and better able to handle the types of questions people ask in regular conversations. For example, Google Assistant now offers a feature called Continued Conversation, where a user can ask follow-up questions to their initial query, such as 'What's the weather like today?', followed by 'What about tomorrow?' and the system understands the follow-up question also relates to the weather.

These assistants and associated services can also handle far more than just speech, with the latest incarnation of the Google Lens able to translate text in images and allow you to search for clothes or furniture using photos.


Despite being built into Windows 10, Cortana has had a particularly rough time of late, with Amazon's Alexa now available for free on Windows 10 PCs, while Microsoft revamped Cortana's role in the operating system to focus more on productivity tasks, such as managing the user's schedule, rather than the more consumer-focused features found in other assistants, such as playing music.

It'd be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from ecommerce to autonomous driving. As a country, China is pursuing a three-step plan to turn AI into a core industry, one that will be worth 150 billion yuan ($22bn) by the end of 2020, with the aim of becoming the world's leading AI power by 2030.

Baidu has invested in developing self-driving cars, powered by its deep-learning algorithm, Baidu AutoBrain. Following several years of tests, its Apollo self-driving car has racked up more than three million miles of driving in tests and carried over 100,000 passengers in 27 cities worldwide.

Baidu launched a fleet of 40 Apollo Go Robotaxis in Beijing this year and the company's founder has predicted that self-driving vehicles will be common in China's cities within five years.

Baidu's self-driving car, a modified BMW 3 series.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to one in China's favor.

While you could buy a moderately powerful Nvidia GPU for your PC – somewhere around the Nvidia GeForce RTX 2060 or faster – and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on-demand.


The fourth generation of AI is here, and it's called Artificial Intuition – The Next Web

Artificial Intelligence (AI) is one of the most powerful technologies ever developed, but it's not nearly as new as you might think. In fact, it's undergone several evolutions since its inception in the 1950s. The first generation of AI was descriptive analytics, which answers the question, "What happened?" The second, diagnostic analytics, addresses, "Why did it happen?" The third and current generation is predictive analytics, which answers the question, "Based on what has already happened, what could happen in the future?"

While predictive analytics can be very helpful and save time for data scientists, it is still fully dependent on historic data. Data scientists are therefore left helpless when faced with new, unknown scenarios. In order to have true artificial intelligence, we need machines that can think on their own, especially when faced with an unfamiliar situation. We need AI that can not just analyze the data it is shown, but express a gut feeling when something doesn't add up. In short, we need AI that can mimic human intuition. Thankfully, we have it.

What is Artificial Intuition?

The fourth generation of AI is artificial intuition, which enables computers to identify threats and opportunities without being told what to look for, just as human intuition allows us to make decisions without specifically being instructed on how to do so. It's similar to a seasoned detective who can enter a crime scene and know right away that something doesn't seem right, or an experienced investor who can spot a coming trend before anybody else. The concept of artificial intuition is one that, just five years ago, was considered impossible. But now companies like Google, Amazon and IBM are working to develop solutions, and a few companies have already managed to operationalize it.

How Does It Work?

So, how does artificial intuition accurately analyze unknown data without any historical context to point it in the right direction? The answer lies within the data itself. Once presented with a current dataset, the complex algorithms of artificial intuition are able to identify any correlations or anomalies between data points.

Of course, this doesn't happen automatically. First, instead of building a quantitative model to process the data, artificial intuition applies a qualitative model. It analyzes the dataset and develops a contextual language that represents the overall configuration of what it observes. This language uses a variety of mathematical models – such as matrices, Euclidean and multidimensional space, linear equations and eigenvalues – to represent the big picture. If you envision the big picture as a giant puzzle, artificial intuition is able to see the completed puzzle right from the start, and then work backward to fill in the gaps based on the interrelationships of the eigenvectors.

In linear algebra, an eigenvector of a linear transformation is a nonzero vector that changes at most by a scalar factor (its direction does not change) when that transformation is applied to it. The corresponding eigenvalue is the factor by which the eigenvector is scaled. In concept, this provides a guidepost for visualizing anomalous identifiers. Any eigenvectors that do not fit correctly into the big picture are then flagged as suspicious.
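
That definition can be checked directly with a small NumPy example; the matrix here is arbitrary, and the anomaly-flagging pipeline the article describes is of course far more involved.

```python
# Verify that A @ v equals eigenvalue * v for each eigenvector of a small matrix.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
for value, vector in zip(eigenvalues, eigenvectors.T):  # eigenvectors are the columns
    # The transformation only rescales each eigenvector by its eigenvalue.
    print(value, np.allclose(A @ vector, value * vector))
```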

How Can It Be Used?

Artificial intuition can be applied to virtually any industry, but is currently making considerable headway in financial services. Large global banks are increasingly using it to detect sophisticated new financial cybercrime schemes, including money laundering, fraud and ATM hacking. Suspicious financial activity is usually hidden among thousands upon thousands of transactions that have their own set of connected parameters. By using extremely complicated mathematical algorithms, artificial intuition rapidly identifies the five most influential parameters and presents them to analysts.

In 99.9% of cases, when analysts see the five most important ingredients and interconnections out of tens of hundreds, they can immediately identify the type of crime being presented. So artificial intuition has the ability to produce the right type of data, identify the data, detect with a high level of accuracy and low level of false positives, and present it in a way that is easily digestible for the analysts.

By uncovering these hidden relationships between seemingly innocent transactions, artificial intuition is able to detect and alert banks to the unknown unknowns (previously unseen and therefore unexpected attacks). Not only that, but the data is explained in a way that is traceable and logged, enabling bank analysts to prepare enforceable suspicious activity reports for the Financial Crimes Enforcement Network (FinCEN).

How Will It Affect the Workplace?

Artificial intuition is not intended to serve as a replacement for human instinct. It is just an additional tool that helps people perform their jobs more effectively. In the banking example outlined above, artificial intuition isn't making any final decisions on its own; it's simply presenting an analyst with what it believes to be criminal activity. It remains the analyst's job to review the identified transactions and confirm the machine's suspicions.

AI has certainly come a long way since Alan Turing first presented the concept back in the 1950s, and it is not showing any sign of slowing down. Previous generations were just the tip of the iceberg. Artificial intuition marks the point when AI truly became intelligent.



Defense Secretary James Mattis Envies Silicon Valley’s AI Ascent – WIRED

Secretary of Defense Jim Mattis waves as he walks to his vehicle after speaking at the Defense Innovation Unit Experimental in Mountain View, Aug. 10, 2017.

Jeff Chiu/AP

Defense Secretary James Mattis has a lot on his mind these days. North Korea, obviously. China's expanding claims on the South China Sea. Afghanistan, Iraq, Syria. And, closer to home, the Pentagon lagging behind the tech industry in leveraging artificial intelligence.

Mattis admitted to that concern Thursday during the Silicon Valley leg of a West Coast tour that includes visits to Amazon and Google. When WIRED asked Mattis if the US had ambitions to harness recent progress in AI for military purposes like those recently espoused by China, he said his department needed to do more with the technology.

"It's got to be better integrated by the Department of Defense, because I see many of the greatest advances out here on the West Coast in private industry," Mattis said.

Mattis, speaking in Mountain View, a stone's throw from Google's campus, hopes the tech industry will help the Pentagon catch up. He was visiting the Defense Innovation Unit Experimental, an organization within the DoD started by his predecessor Ashton Carter in 2015 to make it easier for smaller tech companies to partner with the Department of Defense and the military. DIUx has so far sunk $100 million into 45 contracts, including with companies developing small autonomous drones that could explore buildings during military raids, and a tooth-mounted headset and microphone.

Mattis said Thursday he wanted to see the organization increase the infusion of tech industry savvy into his department. "There's no doubt in my mind DIUx will continue to exist; it will grow in its influence on the Department of Defense," he said.

The Pentagon has a long record of researching and deploying artificial intelligence and automation technology. But AI is rapidly progressing, and the most significant developments have come out of the commercial and academic spheres.

Over the past five years, leading tech companies and their lavishly funded AI labs have sucked up ideas and talent from universities. They're now in a race to spin up the best new products and experimental projects. Google, for example, has recently used machine learning research to power up its automatic translation and cut data-center cooling bills. Waymo, Alphabet's autonomous-car company, uses AI in developing the technology in its self-driving vehicles.

Making smart use of artificial intelligence looks to be crucial to military advancement and dominance. Just last month, China's State Council released a detailed strategy for artificial intelligence across the economy and in its military. China's strategic interest in AI led DIUx to prepare an internal report this year suggesting scrutiny and restrictions on Chinese investment in Silicon Valley companies. Texas' senior senator, John Cornyn, has proposed legislation that could enable that policy.


A recent Harvard report commissioned by the Office of the Director of National Intelligence found that AI-based technologies, like autonomous vehicles, are poised to make advanced militaries much more powerful – and possibly cause a transformation similar in scale to the advent of nuclear weapons. But the US does not have a public, high-level national or defense strategy for artificial intelligence in the same way as China – perhaps owing mostly to differences of political style.

On Thursday, Mattis professed confidence that his department would figure out how to do more with AI, without offering specifics. "The bottom line is we'll get better at integrating advances in AI that are being taken here in the Valley into the US military," he said.

There is another bottom line to consider. The Trump administration's proposed budget would increase funding for DIUx, which might help fulfill Mattis' dreams of an AI acceleration. It also expands support to Pentagon research agency DARPA, which has many AI-related projects. But the White House's budget proposal also includes cuts to the National Science Foundation, an agency that has long supported AI research, including work on artificial neural networks, the very technique that now has companies – and nations – suddenly so interested in the field's potential.


Assistant Professor Shiyan Jiang Helps High School Students Understand Artificial Intelligence Through Work on Grant-funded Project – NC State College…

Artificial intelligence (AI) technology is rapidly changing the workforce and Shiyan Jiang, Ph.D., assistant professor of learning design and technology at the NC State College of Education, is helping high school students explore and understand the technology through her work on a new grant-funded project.

Jiang will serve as the co-principal investigator on the three-year, $310,581 Narrative Modeling with StoryQ: Integrating Mathematics, Language Arts, and Computing to Create Pathways to Artificial Intelligence Careers project, which is funded by the National Science Foundation and led by the Concord Consortium.

"High school is a very important time for students to develop career interests in STEM and information and communication technology (ICT) fields. We want to seize this important stage to plant a seed in their mind about what AI is, how AI will impact the world that we live in and also what kind of career choices they can make," Jiang said.

Jiang will work with a multidisciplinary team to develop StoryQ, a web-based text mining and narrative modeling platform, which will be designed in collaboration with the cohort teachers who will then utilize the platform in their classrooms. The project, which will be implemented at West Johnston High School, will cross disciplines, integrating language arts, mathematics and computer science educators.

StoryQ will consist of a 12-lesson curriculum that will teach students about artificial intelligence by using the technology to classify stories that they have written.

Students will begin using an AI model developed by the project team to classify elements of their texts, such as whether or not a character is a hero or villain, and then make educated guesses as to why the AI made those classifications. As the lessons progress, students will learn the process of how the model makes decisions based on text evidence and will ultimately rewrite their stories to change the predictions made by the AI model. This will help students develop an understanding about how human judgment or interpretation can influence artificial intelligence to make different decisions.
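
As an illustration of the kind of text classifier students might probe (this sketch is not the StoryQ platform itself, and the training sentences are invented), a simple bag-of-words model can label a character description and then change its prediction when the wording changes.

```python
# Tiny bag-of-words classifier labelling character descriptions as hero or villain.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["she rescued the villagers from the flood",
         "he protected his friends and shared his food",
         "he betrayed the town and stole the treasure",
         "she plotted in the shadows to seize the throne"]
labels = ["hero", "hero", "villain", "villain"]

model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)

# Rewriting a sentence changes which words the model sees, and can change its prediction.
print(model.predict(["he betrayed the town"]))
print(model.predict(["he shared the treasure to protect the town"]))
```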

Jiang said that the blend of technology and writing will help students not only understand how artificial intelligence works, but become stronger writers as they pay attention to the ways in which their choice of sentence structure and vocabulary can change the AI model's interpretation of the text.

"We want to highlight that humans play a very important role in developing AI technology," Jiang said. "In the process, we expect there will be a lot of rich discussions around vocabulary, the habits of characters and cultural awareness in the writings."

Jiang said that it's important to give students an opportunity to bring their personal knowledge and culture into the writings that will be analyzed by the AI model. Having ownership of the text will allow the students to conduct better analysis of the ways artificial intelligence correctly and incorrectly classifies information from their stories. In addition, the use of personal writings allows the students to develop a close STEM identity by establishing a connection between themselves and the work they create through StoryQ.

The project will specifically target students who have been historically underrepresented in STEM, which Jiang hopes will ultimately bring more diversity into the field. Through her previous work in natural language processing, a subfield of AI, Jiang said there is a lack of diversity in this field, which means that the AI model would heavily reflect a particular perspective.

"AI technology always has the developer's intention. If you evoke more perspectives and diverse backgrounds, it will make the AI better. Including more people from historically underrepresented groups will make the AI field better," Jiang said. "We want to empower students to see that all these opportunities are equal to them and they can belong and succeed in this field."

Jiang said the project team chose to focus on bringing AI into high school classrooms because it's a field that is currently seeing high workforce demand, but they hope that the framework developed through this grant could eventually be applied to other STEM disciplines to engage a diverse group of students in technology-intensive fields.

Although she understands that not all students who participate in StoryQ will go on to become AI professionals, Jiang says it is important that they have a basic understanding of how artificial intelligence works as they enter a workforce where the technology is becoming increasingly more common.

"At a minimum, those kids should have a fundamental knowledge about how intelligence in computers is created, what the biases in AI are and how we can reduce biases to create AI for social good," she said. "This project will help students join the public discourse about AI, but also help them develop career interests or maybe even inspire them to do future work in a career that will be empowered by AI."


Element AI, a platform for companies to build AI solutions, raises $102M – TechCrunch

The race for artificial intelligence technology is on, and while tech giants like Google and Facebook snap up top talent to build out their own AI-powered products, a new startup has just raised a huge round of funding to help the rest of us.

Element AI – a Montreal-based platform and incubator that wants to be the go-to place for any and all companies (big or small) that are building or want to include AI solutions in their businesses, but lack the talent and other resources to get started – is announcing a mammoth Series A round of $102 million. It plans to use the funding for hiring talent, for business development, and also, to put some money where its mouth is, by selectively investing in some of the solutions that will be built within its doors.

"Our goal remains to lower the barrier for entry for commercial applications in AI," said Jean-François Gagné, the CEO of Element AI, in an interview. "Everyone wants to have these capabilities, it's hard for most companies to pull it off because of the lack of talent or access to AI technology. That is the opportunity." The company currently has 105 employees and the plan is to ramp that up to 250 in the next couple of months, he said.

The round was led by the prolific investor Data Collective, with participation from a wide range of key financial and strategic backers. They include Fidelity Investments Canada, Korea's Hanwha, Intel Capital, Microsoft Ventures, National Bank of Canada, NVIDIA, Real Ventures, and several of the world's largest sovereign wealth funds.

This large Series A has been swift: it comes only six months after Element AI announced a seed round from Microsoft Ventures (of an undisclosed amount), and only eight months after the company launched.

We've asked Gagné and Element AI's investors, but no one is disclosing the valuation. However, what we do know is that the startup already has several companies signed up as customers and working on paid projects; and it has hundreds of potential companies on its list for more work.

"As we've been engaging with corporates and startups [to be in our incubator] we have realized that being engaged in both at the same time is not easy," Gagné said. "We've started to put together a business network, including taking positions in startups to help them by investing capital, resources, providing them with technology and bringing them all the tools they need to accelerate the development of their apps and help them connect with large corporates who are their customers." The aim is to back up to 50 startups in the field, he said.

The strategic investors also fit into different parts of Element AI's business funnel. Some, like Nvidia, are working as partners for business – in its case, using its deep learning platform, according to Jeff Herbst, VP of business development for NVIDIA. "Element AI will benefit by continuing to leverage NVIDIA's high performance GPUs and software at large scale to solve some of the world's most challenging issues," he said in a statement. Others, like Hanwha, are coming in as customer-investors, there to take advantage of some of the smarts.

AI in its early days may have been the domain of tech companies like Google, Apple and IBM when it came to needing and commercializing it, but these days, the wide range of solutions that can be thought of as AI-based, and applications for it, can touch any and all aspects of a business, from back-office functions and customer-facing systems, through to cybersecurity and financial transactions, to manufacturing, logistics and transportation, and robotics.

But the big issue has been that up to now, the most innovative startups in these areas are getting snapped up by the large tech giants (sometimes directly from the universities where they form, sometimes a bit later).

Then consider those that are independent and arent getting acquired (yet). There still remains a gap for most companies between what skills are out on the market to be used, and what would be the most useful takeaway for their own businesses.

In other words, many considering how to use AI in their businesses are effectively starting from scratch. Longer term, that disparity between the AI haves and have-nots could prove to be disastrous for the idea of democratising intellectual power and all the spoils that come with it.

"There is not a lot left in the middle," Data Collective's Matt Ocko said in an interview. The issue with corporations, governments and others trapped in that no man's land of AI have-nots is that their rivals with superior AI-powered decision making and signal processing will dominate global markets.

The idea of building an AI incubator or safe space where companies that might even sometimes compete against each other, are now sitting alongside each other talking to the same engineers to build their new products, may be an industry first.

But the basic model is not: Element AI is tackling this problem essentially by leaning on trends in outsourcing: systems integrators, business process outsourcers, and others have built multi-billion dollar businesses by providing consultancy or even fully taking the reins on projects that businesses do not consider their core competency.

The same is happening here. Element AI says that initial products that can be picked up there include predictive modeling, forecasting models for small data sets, conversational AI and natural language processing, image recognition and automatic tagging of attributes based on images, aggregation techniques based on machine learning, reinforcement learning for physics-based motion control, compression of time-series data, statistical machine learning algorithms, voice recognition, recommendation systems, fluid simulation, consumer engagement optimization and computational advertising.

I asked, and was told multiple times, that essentially colocating their R&D next to one another's is not, for now, posing a problem for the companies who are getting involved. If anything, for those who understand the big-data aspect of AI intelligence, they can see that the benefit for one will indirectly benefit the rest, and speed everything up.

"That model is what made Yoshua Bengio – the godfather of machine learning – so excited about co-founding this company," Ocko said. "That massive research advantage leads Element AI to be able to deliver technically advantaged, increasingly cost-effective solutions. It means they don't have to treat AI decision-making capability as a scarce resource, wielded like a club on everyone else."


Forrester: Artificial Intelligence Software Growth Will Fall Below Current Investor Projections – PRNewswire

CAMBRIDGE, Mass., Dec. 10, 2020 /PRNewswire/ -- According to Forrester (Nasdaq: FORR), the overall artificial intelligence software market – despite ballooning to $37 billion by 2025 – will fall below current investor projections of $150 to $200 billion for the same time period. Within the AI software market, AI application growth will be constrained by technology vendors embedding AI functionality into existing software products with greater frequency.

A new Forrester report, "The AI Software Market Will Grow To $37 Billion Globally By 2025," outlines the key reasons for this updated AI software market outlook, which leaves out hardware and consulting services.

The report further states that AI maker platforms – used for creating highly customized solutions – and AI facilitator platforms – used for applications including computer vision and natural language virtual assistants – have the strongest growth potential. Other AI market segments likely to grow include AI-centric applications for specialized medical tasks and AI-infused applications that create differentiated products through added AI functionality.

"While we're seeing high demand for AI technology, platforms, and applications, AI's ubiquity will ultimately make the technology commonplace in software development," said Andrew Bartels, VP and principal analyst at Forrester. "We believe investors are defining the AI market too hyperbolically. They are mistakenly including categories that are loosely influenced by or distantly adjacent to AI software. As business leaders rely more and more on AI as a tenet of their digital transformation strategy, they will likely expect their vendors to add AI functionality at no additional cost to them."

Resources:

About Forrester

Forrester (NASDAQ: FORR) is one of the most influential research and advisory firms in the world. We help organizations grow through customer obsession: putting their customers at the center of leadership, strategy, and operations. Through Forrester's proprietary research, consulting, and events, business and technology leaders from around the globe are empowered to be bold at work to navigate change and build customer-obsessed growth strategies. Our unique insights are grounded in annual surveys of more than 675,000 consumers, business leaders, and technology leaders worldwide; rigorous and objective methodologies, including Forrester Wave evaluations; and the shared wisdom of our most innovative clients. To learn more, visit Forrester.com.

Media Contact: Ira Kantor, Public Relations, Forrester Research, Inc.

SOURCE Forrester

Read the original here:

Forrester: Artificial Intelligence Software Growth Will Fall Below Current Investor Projections - PRNewswire

Zero One: Are You Ready for AI? – MSPmentor

"If somebody like Google or Apple announced tomorrow that they had made [AI android] Ava, we would all be surprised, but we wouldn't be that surprised." Alex Garland, writer-director of Ex Machina (2015)

Imagine a smart robot performing delicate surgery under the control of a surgeon. Or an artificial intelligence (AI) machine mapping genomic sequences to identify the link to Alzheimer's disease. Or psychiatrists applying AI-driven natural language processing to the voices of patients to assess their risk of suicide.

You don't have to imagine anymore, because all of this is happening right now. The great promise of AI, a technology once confined to sci-fi movies, lies within the grasp of everyday business. More and more companies are seeing the AI light, and if predictions prove right, this could be the year AI goes mainstream.

"It's an incredible time, and it's very hard to forecast what can these things do?" said Google co-founder Sergey Brin, speaking at the World Economic Forum Annual Meeting in Davos-Klosters, Switzerland, last month.

To be sure, line-of-business executives (LOBs), the new shot-callers for tech, care little about pie-in-the-sky ideas. But they'll pay close attention to real-world business outcomes and may be wondering if AI is right for their businesses. The answer is simple: if not AI today, then AI tomorrow. That's because AI has the potential to impact nearly every aspect of business, from predicting customer needs to optimizing operations and supply chains.

"AI can transform your business," said Forrester analysts Martha Bennett and Mathew Guarini in a research note. "AI will be employed across enterprises, doing everything from engaging with customers and employees to automating and improving large elements of the operation."

Related: Digital Business Transformation: A Channel Story

It's still early days for AI, but that's about to change.

A Forrester survey of business and tech professionals found only a small number of companies with AI implementations, yet more than half of companies said they plan to invest in AI in the next 12 months. Specifically, 37 percent plan to implement intelligent assistants for customers, and 35 percent plan to do the same with cognitive products. Among AI adopters, 57 percent said improving the customer experience is the biggest benefit. Marketing and sales, product management, and customer support lead the AI charge.

Forrester puts AI deployments into five buckets: speech recognition, such as Amazon's Alexa, Apple's Siri, Google Assistant, and Microsoft's Cortana; machine learning, such as Netflix's customer data-driven recommendations; image recognition; advanced discovery techniques, such as IBM Watson; and robotics and self-driving cars.

AI makes the most sense in industries with big data, such as healthcare. After all, AI feeds off of data, but in a slightly different way than simple analytics. Whereas analytics software mines data to unearth trends and make predictions about the future, AI systems use data as a kind of sharpening stone to refine algorithms that produce targeted outcomes, such as diagnosing a type of cancer.
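For readers who want a concrete picture of that distinction, here is a minimal, hypothetical sketch (not from any of the articles quoted here): the "analytics" step merely summarizes historical records, while the machine-learning step uses those same records to fit a model that returns a targeted prediction for a new case. All numbers, feature names, and labels are invented for illustration.

```python
# Hypothetical sketch: analytics summarizes history; ML is refined by that
# history to produce a targeted prediction for a new, unseen case.
from sklearn.linear_model import LogisticRegression

# Invented patient records: [age, biomarker_level], label 1 = malignant.
X = [[45, 1.2], [50, 3.8], [62, 4.1], [38, 0.9], [70, 5.0], [55, 1.1]]
y = [0, 1, 1, 0, 1, 0]

# "Analytics": describe the historical trend.
print(f"Historical malignancy rate: {sum(y) / len(y):.0%}")

# "AI": the data sharpens a model that outputs a targeted prediction.
model = LogisticRegression().fit(X, y)
new_patient = [[58, 3.5]]
print("Predicted malignancy risk for the new patient:",
      round(model.predict_proba(new_patient)[0][1], 2))
```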

In a way, AI influences the future.

"With AI, we can begin to advance our analytics capabilities to personalize the interventions we roll out to patients, and move from looking in the rearview mirror at what worked historically to looking at what could work in the future with predictive and prescriptive analytics," said Forrester analysts Kate McCarthy and Nasry Angel in a research note.

Such awesome power has led to decades' worth of apocalyptic sci-fi movies and television shows, from Blade Runner (1982) to Terminator (1984) to The Matrix (1999) to Battlestar Galactica (2004-2009) to Ex Machina (2015), all showing AI machines putting a beatdown on their human creators.

Before waving this off as mere entertainment, LOBs should keep in mind the fear that people have about AI. For instance, AI's ability to gauge the likelihood of an individual becoming sick or getting in a car accident opens up a host of societal issues. Armed with this knowledge, will insurance companies raise rates?

Societal and privacy concerns are just a few of the many challenges facing AI adopters. As with any emerging technology on the verge of taking off, there's a severe technical skills shortage. LOBs must make sure they have the right talent to pull off AI projects, such as engineers to select and prepare data for AI and developers to customize AI software to the use case and fine-tune AI algorithms. It's a Herculean task. Forrester says training IBM Watson for a new diagnostic domain takes between six and 18 months, depending on how complicated the domain is and how much data is available.

Related: Rise of the IoT Architect

Poor talent can make a mess of things, even more so with AI. As in the sci-fi movies, AI is as flawed as its human creators. Forrester gives the example of a machine-learning system trained to predict a patient's risk of catching pneumonia when admitted to the hospital. Developers of the AI system left critical information out of the data set. As a result, Forrester says, the AI system told doctors to send home patients with existing asthma conditions, a high-risk category.

Human bias also rears its ugly head in AI systems. There have been reports of image-recognition AI automatically identifying a blinking person as Asian, and of AI systems designed to assist police discriminating against African Americans. Then there's the infamous Tay, a Microsoft Twitter AI chatbot depicted as a pixelated young woman, released in spring last year. After Twitter users tricked the chatbot into making outlandishly offensive remarks, Microsoft yanked Tay offline and apologized.

"AI systems can behave unpredictably," Forrester's Bennett and Guarini said. "In particular when working on complex or advanced systems, developers often don't know how AI-powered programs and neural nets come up with a particular set of results ... It gets dangerous when the software is left to take decisions entirely unsupervised."

Despite its long history and inherent dangers, AI has come far in the last few years. Consider Google's Brin, a computer scientist who admitted he didn't pay much attention to AI in the 1990s because everyone knew AI didn't work. Before becoming president of Google parent company Alphabet, Brin headed the Google X research group, which, in turn, worked on Google Brain, an AI project that began in 2011.

Today, Google Brain is part of the tech giant's DNA.

"Fast-forward a few years, and now Brain probably touches every single one of our main projects, ranging from search to photos to ads to everything we do," Brin said, adding, "We really don't know the limits."

Tom Kaneshige writes the Zero One blog covering digital transformation, big data, AI, marketing tech and the Internet of Things for line-of-business executives. He is based in Silicon Valley. You can reach him at tom.kaneshige@penton.com.

Read the original post:

Zero One: Are You Ready for AI? - MSPmentor

How AI is taking the pain out onboarding for the HR team – Tech Wire Asia

AI isn't new to HR and recruitment. Source: Shutterstock

In an uncertain economy and rocky jobs market, any technology that takes the burden off the onboarding process is a welcome tonic for an overworked HR department, especially when, in many cases, those new recruits will be joining the team remotely.

While, as in many other sectors, artificial intelligence (AI) is beginning to prove a boon to Human Resources, some applications of the intelligent, process-expediting technology aren't yet watertight. A recent Sage report titled "The changing face of HR" asked organizations about their propensity to adopt the latest tech for HR functions: 43% of respondents believed their firms will not keep up with tech changes over the coming decade, a somewhat troubling outlook.

For starters, overseeing a traditionally (and inherently) human set of functions, HR teams are perhaps prone to lag when it comes to adopting the latest technology. But discussions around AI in HR also conjure images of recruitment bias, where algorithms under the hood of predictive hiring tools have demonstrated bias against African-American-sounding names and female applicants.

According to that report, 24% of the companies surveyed are already using AI for talent acquisition (in the form of automation), while 56% claim they will adopt such tech in the coming year.

But there are signs of increasing uptake of technologies in HR, and applications of AI go a lot further in the recruitment and onboarding process than just filtering through thousands of applications.

The stages from interview to offer to negotiation to acceptance, and the ease with which you fly through them, tell you a whole lot about the business you've just agreed to join. In fact, onboarding is (to many) something of a magic moment, in which new employees decide to stay engaged or become disengaged.

AI steps in to simplify tasks (negating the need for manual document back-and-forths), automate otherwise arduous account setups, and provide feedback on the whole affair to make the next hire smoother. It can track tasks, prompt responses, and even answer questions that may arise from new hires. Here are a few use cases in more detail:

Document generation: Using natural language processing (NLP), organizations can auto-generate offer letters, contracts, and other vital documents for new employees. A human still needs to validate the output and ensure that it is signed properly, though.
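As a rough illustration of the generation step, here is a minimal sketch. It covers only the template-filling portion; the field names and values are hypothetical, and in a real system an NLP component would draft or extract them from the job requisition rather than hard-coding them.

```python
from string import Template

# Minimal, hypothetical sketch of automated offer-letter generation.
# In practice an NLP/HRIS pipeline would supply these fields.
OFFER_TEMPLATE = Template(
    "Dear $name,\n\n"
    "We are pleased to offer you the position of $role on the $team team, "
    "starting on $start_date at an annual salary of $salary.\n\n"
    "Please sign and return this letter by $deadline.\n"
)

def generate_offer_letter(fields: dict) -> str:
    """Fill the offer-letter template; a human still reviews the output."""
    return OFFER_TEMPLATE.substitute(fields)

if __name__ == "__main__":
    print(generate_offer_letter({
        "name": "Alex Doe",           # hypothetical new hire
        "role": "Data Analyst",
        "team": "Analytics",
        "start_date": "1 March",
        "salary": "72,000 USD",
        "deadline": "15 February",
    }))
```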

FAQ chatbots: Most new recruits will have a lot of basic questions (regarding connecting to the office WiFi, setting up an email account, or log-off/screen-lock protocols, among others). A chatbot is a strong way of addressing FAQs whilst retaining a sense of back-and-forth. It can also be continuously tweaked and upgraded as new queries arise.
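A minimal retrieval-style sketch of such a bot appears below; the questions, answers, and simple word-overlap matching are hypothetical stand-ins for the trained intent models a production chatbot would use.

```python
# Toy FAQ bot for new hires: picks the stored question that shares the most
# words with the incoming query. All Q&A content here is invented.
FAQS = {
    "How do I connect to the office WiFi?":
        "Join the staff network and sign in with your corporate credentials.",
    "How do I set up my email account?":
        "Open the webmail portal and follow the first-login instructions from IT.",
    "What is the screen-lock policy?":
        "Lock your screen whenever you leave your desk; it auto-locks after 5 minutes.",
}

def tokens(text: str) -> set:
    return set(text.lower().replace("?", "").split())

def answer(question: str) -> str:
    """Return the answer whose FAQ question overlaps most with the query."""
    best = max(FAQS, key=lambda q: len(tokens(q) & tokens(question)))
    if not tokens(best) & tokens(question):
        return "Sorry, I don't know that one yet. I'll flag it for HR."
    return FAQS[best]

if __name__ == "__main__":
    print(answer("how do I get on the wifi?"))
```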

Networking: Building relationships with peers and team members is crucial for new hires to integrate into an organization, increase productivity, and become engaged employees. Using organizational network analysis (ONA), organizations can understand which relationships new employees must cultivate to be productive, and introduce new hires to critical points of contact in their team and in the organization.
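The sketch below shows, on invented data, how ONA might surface those critical points of contact: it builds a small collaboration graph and ranks people by betweenness centrality using the networkx library. The names and edges are hypothetical; in practice they might be derived from email or calendar metadata.

```python
# Toy organizational network analysis (ONA) for suggesting introductions.
import networkx as nx

collaboration_edges = [
    ("Priya", "Marco"), ("Priya", "Chen"), ("Marco", "Chen"),
    ("Chen", "Dana"), ("Dana", "Femi"), ("Femi", "Marco"),
]
G = nx.Graph(collaboration_edges)

# People with high betweenness sit on many shortest paths between colleagues,
# making them useful early introductions for a new hire.
centrality = nx.betweenness_centrality(G)
key_contacts = sorted(centrality, key=centrality.get, reverse=True)[:3]
print("Suggested introductions:", key_contacts)
```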

Feedback analysis: As with literally any process on planet earth (and, I'd envisage, beyond), feedback and insight are key to continuous improvement. In order to know how to fine-tune the path through recruitment, AI can provide HR professionals with the tools to understand direct and indirect feedback. Using NLP again, HR managers can extract quality insights from large quantities of textual feedback. This can allow HR managers to gauge themes, employee sentiment, and the overarching effectiveness of HR processes.
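A toy version of that theme-and-sentiment mining is sketched below; the comments, keyword lexicons, and scoring are invented for illustration, whereas a production system would rely on trained NLP models.

```python
# Toy sketch: mine free-text onboarding feedback for themes and rough sentiment.
from collections import Counter

feedback = [
    "Laptop setup was slow and the paperwork felt endless",
    "My buddy introductions were great and very helpful",
    "Great onboarding checklist, but the paperwork portal is confusing",
]

POSITIVE = {"great", "helpful", "smooth", "clear"}
NEGATIVE = {"slow", "endless", "confusing", "frustrating"}
THEMES = {"paperwork", "laptop", "introductions", "checklist", "portal"}

def tokens(comment: str) -> list:
    return [w.strip(",.!?") for w in comment.lower().split()]

def sentiment(comment: str) -> int:
    ws = tokens(comment)
    return sum(w in POSITIVE for w in ws) - sum(w in NEGATIVE for w in ws)

theme_counts = Counter(w for c in feedback for w in tokens(c) if w in THEMES)

for comment in feedback:
    print(sentiment(comment), "|", comment)
print("Most-mentioned themes:", theme_counts.most_common(2))
```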

Where AI really comes into its own is in its ability to (quickly) adjust what information is required, presented, and completed, based on the specific job in question. For any firm with a sizeable employee base, these nuances can give rise to lag times, inaccuracies, and otherwise poor practice. An intelligent system can give proper permissions, schedule meetings required to understand a role, and even develop tools to help that understanding.

With AI, onboarding doesn't have to happen within regular business hours or at a fixed office location. AI and chatbots can work around the clock, guiding a new hire through all aspects of onboarding and answering questions as they arise. "With HR teams busier than ever in coordinating and responding to remote working issues, the onboarding process is one space where tech can really come in handy, allowing new hires to integrate more quickly [...] even before their first day on the job," says Susan Power, founder and CEO of Power HR.

Gamification can be one way to utilize AI and set apart your onboarding procedure in one fell, albeit intricate, swoop. Adding competitive, enjoyable, game-like elements to the onboarding process can make it easier for human minds to absorb and retain information. AI can help deliver this customized experience both in the recruitment process (with cognitive ability and competency tests) and afterward.

Covid-19 has disrupted many companies' typical routines when it comes to onboarding employees. Cognitive automation tools can simplify the process for new hires who may start remotely, and lessen the alienation that can come from limited face-to-face training time.

The crux of the matter is not actually a crux at all, but rather an ongoing navigation of the intersection between human involvement and AI efficiencies, and this will change from company to company. The fact is that you can't take the human out of Human Resources; people will remain integral to a robust, personable HR strategy. You can, however, add AI technology at intelligent onboarding touchpoints, diverting human time away from inane clerical tasks and toward forming real bonds with candidates and growing teams in the most positive of ways.

Read the rest here:

How AI is taking the pain out onboarding for the HR team - Tech Wire Asia