China and the US are battling to become the world’s first AI superpower – The Verge

In October 1957, the Soviet Union launched the Earth's first artificial satellite, Sputnik 1. The craft was no bigger than a beach ball, but it spurred the US into a frenzy of research and investment that would eventually put humans on the Moon. Sixty years later, the world might have had its second Sputnik moment. But this time, it's not the US receiving the wake-up call, but China; and the goal is not the exploration of space, but the creation of artificial intelligence.

The second Sputnik arrived in the form of AlphaGo, the AI system developed by Google-owned DeepMind. In 2016, AlphaGo beat South Korean master Lee Se-dol at the ancient Chinese board game Go, and in May this year, it toppled the Chinese world champion, Ke Jie. Two professors who consult with the Chinese government on AI policy told The New York Times that these games galvanized the country's politicians to invest in the technology. And the report the pair helped shape, published last month, makes China's ambitions in this area clear: the country says it will become the world's leader in AI by 2030.

"It's a very realistic ambition," Anthony Mullen, a director of research at analyst firm Gartner, tells The Verge. "Right now, AI is a two-horse race between China and the US." And, says Mullen, China has all the ingredients it needs to move into first. These include government funding, a massive population, a lively research community, and a society that seems primed for technological change. And it all invites the trillion-dollar question: in the coming AI race, can China really beat the US?

To build great AI, you need data, and nothing produces data quite like humans. This means China's massive 1.4 billion population (including some 730 million internet users) might be its biggest advantage. These citizens produce reams of useful information that can be mined by the country's tech giants, and China is also significantly more permissive when it comes to users' privacy. For the purposes of building AI, this compares favorably with European countries and their citizen-centric legislation, says Mullen. Companies like Apple and Google are designing workarounds for this privacy problem, but it's simpler not to bother in the first place.

China's 1.4 billion population is a data gold mine for building AI

In China, this also means that AI is being deployed in ways that might not be acceptable in the West. For example, facial recognition technology is used for everything from identifying jaywalkers to dispensing toilet paper. These implementations seem trivial, but as any researcher will tell you, there's no substitute for deploying tech in the wild for testing and developing. "I don't think China will have the same level of existential crisis about the development of AI that the West will have," says Mullen.

The adventures of Microsoft chatbots in China and the US make for a good comparison. In China, the company's Xiaoice bot, which is downloadable as an app, has more than 40 million users, with regulars talking to it every night. It even published a book of poetry under a pseudonym, sparking a debate in the country about artificial creativity. By comparison, the American version of the bot, named Tay, was famously shut down in a matter of days after Twitter users taught it to be racist.

Matt Scott, CTO of Shenzhen machine vision startup Malong Technologies, says China's attitude toward new technology can be risk-taking in a bracing way. "For AI you have to be at the cutting edge," he says. "If you're using technology that's one year old, you're outdated. And I definitely find that in China at least, my community in China is very adept at taking on these risks."

The output of Chinas AI research community is, in some ways, easy to gauge. A report from the White House in October 2016 noted that China now publishes more journal articles on deep learning than the US, while AI-related patent submissions from Chinese researchers have increased 200 percent in recent years. The clout of the Chinese AI community is such that at the beginning of the year, the Association for the Advancement of Artificial Intelligence rescheduled the date of its annual meeting; the original had fallen on Chinese New Year.

What's trickier, though, is knowing how these numbers translate to scientific achievement. Paul Scharre, a researcher at the think tank Center for a New American Security, is skeptical about statistics. "You can count the number of papers, but that's sort of the worst possible metric, because it doesn't tell you anything about quality," he says. At the moment, the real cutting-edge research is still being done by institutions like Google Brain, OpenAI, and DeepMind.

In China, though, there is more collaboration between firms like these and universities and government, something that could be beneficial in the long term. Scott's Malong Technologies runs a joint research lab with Tsinghua University, and there are much bigger partnerships, like the national laboratory for deep learning run by Baidu and the Chinese government's National Development and Reform agency.

Other aspects of research seem influential, but are difficult to gauge. Scott, who started working in machine learning 10 years ago with Microsoft, suggests that China has a particularly open AI community. "I think there is a bit more emphasis on [personal] relationships," he says, adding that China's ubiquitous messaging app WeChat is a rich resource, with chat groups centered around universities and companies sharing and discussing new research. "The AI communities are very, very alive," he says. "I would say that WeChat as a vehicle for spreading information is highly effective."

What most worries Scharre is the US government's current plans to retreat from basic science. The Trump administration's proposed budget would slash funding for research, taking money away from a number of agencies whose work could involve AI. "Clearly [Washington doesn't] have any strategic plan to revitalize American investment in science and technology," Scharre tells The Verge. "I am deeply troubled by the range of cuts that the Trump administration is planning. I think they're alarming and counterproductive."

Trump's administration could never be called science-friendly

The previous administration was aware of the dangers and potential of artificial intelligence. Two reports published by the Obama White House late last year spelled out the need to invest in AI, as well as touching on topics like regulation and the labor market. AI holds "the potential to be a major driver of economic growth and social progress," said the October report, noting that public- and private-sector investments in basic and applied R&D on AI have "already begun reaping major benefits."

In some ways, China's July policy paper on AI mirrors this one, but China didn't just go through a dramatic political upheaval that threatens to change its course. The Chinese policy paper says that by 2020 it wants to be on par with the world's finest; by 2025, AI should be the primary driver for Chinese industry; and by 2030, it should occupy "the commanding heights of AI technology." According to a recent report from The Economist, having the high ground will pay off, with consultancy firm PwC predicting that AI-related growth will lift the global economy by $16 trillion by 2030, with half of that benefit landing in China.

For Scharre, who recently wrote a report on the threat AI poses to national security, the US government is laboring under a delusion. "A lot of people take it for granted that the US builds the best tech in the world, and I think that's a dangerous assumption to make," he says, adding that a wake-up call is due. China may have had the Sputnik moment it needed to back AI, but has the US?

Others question whether this is necessary. Mullen says that while the momentum to be the world leader in AI currently lies with China, the US is still marginally ahead, thanks to the work of Silicon Valley. Scharre agrees, and says that government funding isn't that big of an issue while US tech giants are able to redirect just a little of their ad money to AI. "Money you get from somewhere like DARPA is just a drop in the ocean compared to what you can get from the likes of Google and Facebook," he says.

These companies also provide a counterpoint to the argument that China's demographics give it an unmatchable advantage. It's certainly good to have a huge number of users in one country, but it's probably better to have that same number of users spread across the world. Both Facebook and Google have more than 2 billion people hooked on their primary platforms (Facebook itself and Android), as well as a half-dozen other services with a billion-plus users. It's arguable that this sort of reach is more useful, as it provides an abundance of data as well as diversity. China's tech companies may be formidable, but they lack this international reach.

Scharre suggests this is important, because when it comes to measuring progress in AI, on-the-ground implementations are worth more than research. What counts, he says, is the ability of nations and organizations to effectively implement AI technologies. "Look at things like using AI in healthcare diagnoses, in self-driving cars, in finance." It's fine to be, say, 12 months behind in research terms, as long as you can still get ahold of the technology and use it effectively.

In that sense, the AI race doesn't have to be zero-sum. Right now, cutting-edge research is developed in secret, but shared openly across borders. Scott, who has worked in the field in both the US and China, says the countries have more in common than they think. "People are afraid that this is something happening in some basement lab somewhere, but it's not true," he says. "The most advanced technology in AI is published, and countries are actively collaborating. AI doesn't work in a vacuum: you need to be collaborative."

In some ways, this is similar to the situation in 1957. When news of Sputnik's launch first broke, there was an air of scientific respect, despite the geopolitical rivalry between the US and USSR. A contemporary report said that America's top scientists showed no rancor at being beaten into space by the Soviet engineers, and, as one of them put it, "We are all elated that it is up there."

Throughout the '60s and early '70s, America and Russia jockeyed back and forth to be first in the space race. But in the end, the benefits of this competition (new scientific knowledge, technology, and culture) didn't just go to the winner. They were shared more evenly than that. By this metric, a Sputnik moment doesn't have to be cause for alarm, and the race to build better AI could still benefit us all.

AI: Boon to business, bane to low-skilled workers – Inquirer.net

VIDEO CONFERENCE ON TECH VISION. JP Palpallatoc, Accenture PH digital lead, discusses his company's technology vision for 2017 in a video conference with Cebu's business reporters held at Accenture's office in I.T. Park, Barangay Apas, Cebu City. (CDN PHOTO/JUNJIE MENDOZA)

As artificial intelligence (AI) moves to the forefront of business operations in the Philippines, the workforce needs to learn more complex skills to cushion the risk of losing jobs to automation.

JP Palpallatoc, Accenture Philippines digital lead, said that with the rise of AI comes the risk of employment reduction, especially among those with lower-level skills.

"We need to move those with lower-level skills up the value chain through education and helping them learn more complex skills," he said in a video conference with Cebu-based reporters on Wednesday.

While AI has the potential to support humans in terms of business, through digital means of transacting and interacting with customers, Palpallatoc also recognized that this technology runs the risk of displacing human workers.

With the many improvements to AI technology today, more companies have opted not to hire additional employees to handle transactions that can easily be automated, such as responding to common customer queries.

The Philippine Information Technology-Business Process Management (IT-BPM) Roadmap 2022 has projected a decline in demand for low-level skilled workers in the coming years, but also sees a rise in need for workers with mid- to high-complexity skills.

Palpallatoc said the roadmap also focuses on education and getting the workforce ready once this time comes, putting an emphasis on developing graduates in Science, Technology, Engineering, and Math (STEM).

The industry roadmap aims to directly employ 1.8 million IT-BPM workers by 2022, with 73 percent of them holding mid- to high-value jobs.

But Palpallatoc said the opportunities in taking advantage of AI are greater than its risks, adding that the technology is among the five trends seen to drive the transformation of businesses in the next three years.

Citing Accenture's Technology Vision 2017, Palpallatoc said people used to be the ones adapting to technology, but are now starting to make technology adapt to them and their needs.

The report identified five emerging technology trends that are essential to business success in today's digital economy, based on insight from more than 5,400 business and IT executives surveyed worldwide.

Among these is AI becoming the new user interface (UI), underpinning the way transactions and interactions are done with systems.

According to the report, 79 percent of executives agreed that AI will revolutionize the way they gain information from and interact with customers.

Meanwhile, 85 percent reported that they will invest intensively in AI-related technologies over the next three years.

Another trend was "design for humans," where technology decisions are being made by humans for humans.

Technology now adapts to how people behave, which many executives believe should be used to guide a business's desired outcomes.

The report also saw a surge in demand for labor platforms and online work-management solutions, resulting in companies dissolving traditional hierarchies and replacing them with talent marketplaces.

Case in point, 85 percent of executives surveyed said they plan to increase their organizations use of independent freelance workers over the next year.

Another trend seen by Accenture was "ecosystems as macrocosms," where platform companies that provide a single point of access to multiple services have completely broken the rules of how companies compete.

Companies are now integrating their core business functionalities with third parties and leveraging these relationships to build their roles in new digital ecosystems.

One example is car manufacturer General Motors (GM) investing $500 million in ride-hailing app Lyft and launching a program that allows car-less Lyft drivers to rent vehicles made by GM, opening up an entirely new line of business.

Accenture's annual report also stated that to succeed in today's ecosystem-driven digital economy, businesses must delve into uncharted territory. Instead of focusing only on the introduction of new products and services, firms should also seize opportunities to establish rules and standards for entirely new industries. (With USJ-R intern Vanisa Soriano)

The AI revolution in science – Science Magazine

Just what do people mean by artificial intelligence (AI)? The term has never had clear boundaries. When it was introduced at a seminal 1956 workshop at Dartmouth College, it was taken broadly to mean "making a machine behave in ways that would be called intelligent if seen in a human." An important recent advance in AI has been machine learning, which shows up in technologies from spellcheck to self-driving cars and is often carried out by computer systems called neural networks. Any discussion of AI is likely to include other terms as well.

ALGORITHM A set of step-by-step instructions. Computer algorithms can be simple ("if it's 3 p.m., send a reminder") or complex ("identify pedestrians").
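For instance, the "simple" case above can be written as a few lines of Python (the function name and times are invented for this sketch):

```python
from datetime import datetime

def maybe_send_reminder(now: datetime) -> str:
    # Step 1: check the condition.
    if now.hour == 15:  # 3 p.m.
        # Step 2: act on it.
        return "Reminder sent"
    return "No action"

print(maybe_send_reminder(datetime(2017, 7, 7, 15, 0)))  # Reminder sent
```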

BACKPROPAGATION The way many neural nets learn. They find the difference between their output and the desired output, then adjust the calculations in reverse order of execution.
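A minimal sketch of that loop in plain Python, for a toy two-layer network (the numbers and learning rate are arbitrary, not from any real system):

```python
# Toy network: h = max(0, w1*x), out = w2*h, loss = (out - target)^2.
x, target = 1.5, 2.0
w1, w2 = 0.8, -0.5
lr = 0.1  # learning rate

for step in range(50):
    # Forward pass, saving intermediate values.
    h = max(0.0, w1 * x)
    out = w2 * h
    loss = (out - target) ** 2
    # Backward pass: chain rule applied in reverse order of execution.
    dloss_dout = 2 * (out - target)
    dloss_dw2 = dloss_dout * h                          # last layer first
    dloss_dh = dloss_dout * w2
    dloss_dw1 = dloss_dh * (x if w1 * x > 0 else 0.0)   # back through the ReLU
    # Adjust each weight to shrink the difference.
    w2 -= lr * dloss_dw2
    w1 -= lr * dloss_dw1

print(round(loss, 6))  # the squared error shrinks toward zero
```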

BLACK BOX A description of some deep learning systems. They take an input and provide an output, but the calculations that occur in between are not easy for humans to interpret.

DEEP LEARNING How a neural network with multiple layers becomes sensitive to progressively more abstract patterns. In parsing a photo, layers might respond first to edges, then paws, then dogs.

EXPERT SYSTEM A form of AI that attempts to replicate a human's expertise in an area, such as medical diagnosis. It combines a knowledge base with a set of hand-coded rules for applying that knowledge. Machine-learning techniques are increasingly replacing hand coding.

GENERATIVE ADVERSARIAL NETWORKS A pair of jointly trained neural networks that generates realistic new data and improves through competition. One net creates new examples (fake Picassos, say) as the other tries to detect the fakes.

MACHINE LEARNING The use of algorithms that find patterns in data without explicit instruction. A system might learn how to associate features of inputs such as images with outputs such as labels.
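As a rough illustration of finding a pattern without explicit instruction, this Python sketch fits a line to synthetic noisy data by least squares; the hidden slope and intercept are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100)
y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=100)   # hidden pattern + noise

# Learn the relationship from data rather than hand-coding it.
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
print(round(slope, 2), round(intercept, 2))       # ~3.0 and ~1.0
```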

NATURAL LANGUAGE PROCESSING A computer's attempt to understand spoken or written language. It must parse vocabulary, grammar, and intent, and allow for variation in language use. The process often involves machine learning.

NEURAL NETWORK A highly abstracted and simplified model of the human brain used in machine learning. A set of units receives pieces of an input (pixels in a photo, say), performs simple computations on them, and passes them on to the next layer of units. The final layer represents the answer.
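A bare-bones sketch of that description in Python with NumPy; the weights are random and untrained, purely to show data flowing from layer to layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # One set of units: weighted sum of the inputs, then a nonlinearity.
    return np.maximum(0.0, inputs @ weights + biases)

pixels = rng.random(64)                      # e.g. an 8x8 grayscale patch
hidden = layer(pixels, rng.normal(size=(64, 16)), np.zeros(16))
answer = hidden @ rng.normal(size=(16, 3))   # final layer: 3 class scores
print(answer.shape)  # (3,)
```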

NEUROMORPHIC CHIP A computer chip designed to act as a neural network. It can be analog, digital, or a combination.

PERCEPTRON An early type of neural network, developed in the 1950s. It received great hype but was then shown to have limitations, suppressing interest in neural nets for years.
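The perceptron's learning rule fits in a few lines of Python. This sketch trains one on the logical-OR function, a toy problem it can solve; XOR, famously, is one it cannot, which is the kind of limitation that suppressed interest:

```python
# Training data for logical OR: inputs and desired outputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = [0.0, 0.0], 0.0

for _ in range(10):  # a few passes over the data
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        # Nudge the weights only when the prediction is wrong.
        err = label - pred
        w[0] += err * x1
        w[1] += err * x2
        b += err

print(w, b)  # converges, because OR is linearly separable
```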

REINFORCEMENT LEARNING A type of machine learning in which the algorithm learns by acting toward an abstract goal, such as "earn a high video game score" or "manage a factory efficiently." During training, each effort is evaluated based on its contribution toward the goal.
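A minimal tabular Q-learning sketch in Python makes the idea concrete; the five-cell corridor environment and its reward are invented for illustration:

```python
import random

random.seed(0)
n_states = 5                        # cells 0..4; reaching cell 4 is the goal
actions = [-1, +1]                  # step left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Mostly exploit the best known action, sometimes explore.
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda b: Q[(s, b)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0    # reward only at the goal
        # Each move is scored by its contribution toward the goal.
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy heads for the goal from every cell.
print([max(actions, key=lambda b: Q[(s, b)]) for s in range(n_states - 1)])
```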

STRONG AI AI that is as smart and well-rounded as a human. Some say it's impossible. Current AI is "weak," or "narrow." It can play chess or drive but not both, and lacks common sense.

SUPERVISED LEARNING A type of machine learning in which the algorithm compares its outputs with the correct outputs during training. In unsupervised learning, the algorithm merely looks for patterns in a set of data.

TENSORFLOW A collection of software tools developed by Google for use in deep learning. It is open source, meaning anyone can use or improve it. Similar projects include Torch and Theano.
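For flavor, here is roughly the smallest complete TensorFlow program, written against the Keras interface bundled with TensorFlow 2.x (the data are random noise, purely to show the define/compile/fit workflow):

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x is installed

x = np.random.random((256, 8)).astype("float32")
y = (x.sum(axis=1) > 4).astype("float32")        # a made-up label rule

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)             # train on the toy data
print(model.predict(x[:1]))                      # one predicted probability
```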

TRANSFER LEARNING A technique in machine learning in which an algorithm learns to perform one task, such as recognizing cars, and builds on that knowledge when learning a different but related task, such as recognizing cats.
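A sketch of that recipe using Keras (this assumes a TensorFlow install that can fetch the pretrained MobileNetV2 weights; the two-class head and the commented-out training data are hypothetical):

```python
import tensorflow as tf

# Keep the layers learned on the first task (ImageNet), freeze them,
# and train only a new output layer for the related task.
base = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                         input_shape=(96, 96, 3))
base.trainable = False                    # reuse, don't relearn

new_head = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. cat vs. not-cat
])
new_head.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# new_head.fit(cat_images, cat_labels)   # hypothetical data: trains only
#                                        # the new Dense layer
```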

TURING TEST A test of AI's ability to pass as human. In Alan Turing's original conception, an AI would be judged by its ability to converse through written text.

Microsoft makes major cuts to MSN editorial team amid AI shift and broader fiscal year-end layoffs – GeekWire

Microsoft is further cutting back its MSN team amid a controversial shift to automation and AI for content decisions previously left to human editors.

After previously eliminating dozens of MSN contract positions, the company is now laying off an unspecified number of direct employees from MSN, including some senior leaders on the Microsoft News editorial team, according to people familiar with the situation.

Microsoft made cuts across the company as part of its annual fiscal year-end business review, one of these people said. Its fiscal year ends June 30, and it's common for Microsoft to restructure some of its operations in conjunction with the annual milestone. Overall, the cutbacks this year appear much smaller than the thousands of employees laid off by the company in some years past.

The company isn't commenting publicly on the cuts at MSN or other groups.

Last week, in an article for Motherboard, former MSN Money editor Bryan Joiner detailed his experience being replaced by an algorithm. Joiner was a contractor for MSN who, like dozens of full-time journalists, lost his job in June. Microsoft replaced the team tasked with curating and editing MSN news content with AI software.

After news of the earlier layoffs surfaced, Microsoft's software misidentified a member of the British pop group Little Mix. "The mistake trended vigorously because it came so soon after MSN let its human editors go," Joiner wrote.

"Based on how far they've come down this road, the algorithm will sink or swim on its own, which is to say it'll probably sink and take down the whole of MSN with it," he wrote. "Maybe that's overstating things, but MSN is low enough in the Microsoft hierarchy that its existence has felt like it was on the chopping block for years."

Monica Nickelsburg contributed to this report.

What CIOs need to know about adding AI to their processes – TechRepublic

AI can help many types of businesses get more from their data. In 2021, one expert believes adoption of AI will take leaps forward.

TechRepublic's Karen Roby spoke with Ira Cohen, chief data scientist with Anodot, about the tools CIOs need to implement artificial intelligence (AI) at their companies. The following is an edited transcript of their conversation.

Karen Roby: As we're heading into 2021, CIOs need to have a checklist of some things to keep in mind when making decisions for this coming year, whether that be about hiring or projects to consider. Let's start with the talent that's needed at companies now, to pull off some of these AI projects. What do you think CIOs need to keep in mind?

SEE: Natural language processing: A cheat sheet (TechRepublic)

Ira Cohen: As you said, 2020 was really special in all this disruption to so many businesses. And AI, actually, is now becoming even more important. The projects that maybe people talked about before have been accelerated now because the speed of movement to new paradigms has to be much faster. If you're talking about, for example, commerce, supply chains need to move much faster. A lot of different projects that maybe before were slowly moving towards more e-commerce, and more shipment. I mean, you're getting your Amazon, but now, so many companies are sending what they're selling out, that you have to have a lot more automation and be a lot more mindful of the data, and be a lot more reactive to how things change constantly. Things are changing much faster, and AI is the perfect thing to manage all of that, if we talk about AI in a very, very global sense, because it has a capability of processing data very fast, giving you insights very fast of very high volumes of data, which is what's happening now, but that's what's needed.

What do you need to actually have in your company in order to actually be able to achieve these goals of these projects? The first order of business, and this is something that people and companies have been doing in the last few years is, put all your data together. Create these data lakes. Data lakes have been very popular, and growing at companies like Snowflake, and other types of companies that have grown tremendously in the last few years, because that's what they offer. But, now, to leverage those data lakes, you need data engineers that know how to pull data quickly out of them, and serve them to the data science team that can actually transform them with algorithms into meaningful insights.

Data engineers are going to be required a lot more in the next year or so, because without the laying of data pipelines that will feed all these AI algorithms and projects, there is nothing. The AI doesn't work without data, at least the AI that we have today. And then come the machine learning engineers. Today, data science has been something that has grown in the last few years. The data scientists are the ones that are developing the AI required for all these projects. But data scientists, a lot of what was hired was basically people that do analysis. They do kind of one-off projects.

And, now, because these things are starting to be more and more automated, you don't need just a data scientist who knows how to do a project well, and prototype something, but you need the engineers that will make it into products, even if they're internal products. It's not a project anymore, it's internal products that have to constantly work for the company to deliver the rate that they need to deliver. These two areas, the data engineers, and the machine learning engineers, and not just the scientist, these are probably the areas where we need to ... I believe, CIOs need to invest most in their companies.

SEE: Is AI just a fairy tale? Not in these successful use cases (TechRepublic)

Karen Roby: When you consider the talent pool, Ira, how much are we talking about here, as far as supply and demand, when it comes to these more specialized areas with AI and machine learning? I mean, do we have the talent to fill the positions that we're going to need?

Ira Cohen: No. I think there's still a big gap, but what's happening in the market, in general, is that the whole field of AI or machine learning is being democratized by all sorts of tools that are either being wrapped into loose products, or open source completely, either from Google, or from Facebook, from companies that have actually invested a lot in developing, let's say, the foundations that you would need. And then, the talent pool that needs to use it, they don't have to have the knowledge as deeply as the people who developed all these tools. So, there is hope of getting a lot more talent into the area without the need for them to get Ph.D.s in order to be able to do this. And that is happening in parallel.

With good education, with good courses, you can actually get junior machine learning engineers that can start bringing value. Where the gap is, is in the more senior ones, the ones that do have experience, because you can't hire just junior people. They won't have a clue what to do. You do need some sense of the field. The gap is in the machine learning engineers that are kind of, I would say, the mid-tier, and the experts, of course, that will always be a gap. But, the mid-tier that can teach the juniors how to work, that's where most of the gap is today, I believe.

Karen Roby: There's no question that AI has been fast-tracked for many companies that may not have even been considering moving in that direction yet, until their digital transformation plans were really put on fast-forward as well, from March 2020. Is there any particular industry you're really seeing where it's being embraced even more?

SEE: 3D scanning, lidar, and drones: Big data is helping law enforcement solve crimes (TechRepublic)

Ira Cohen: We're seeing it in all sorts of commerce, where even if it was half brick-and-mortar, half online, now, this has pushed them quite significantly. Supply chains and deliveries are definitely a big push in those types of companies. And we've also seen in telcos that very big push towards AI, and it's driven by two things that happen now in parallel. One is the virus, right? The whole pandemic, which actually put a lot more pressure on networks, and made them even more important, and actually brought some of the telcos, which provide all the foundations for our communications, to the front and center.

5G is the second one that's happening in parallel. So, 5G, changing, coming to play, creating a lot more data, a lot more complexities in the networks, is also pushing them to implement AI, to actually be able to manage all that complex infrastructure, which is becoming even more complex, and even more critical.

Karen Roby: When you look to say, nine months to a year from now, how do you see AI playing a role, even versus now? And, again, how is that going to change things overall for businesses, from small businesses to huge enterprise companies?

SEE: Healthcare is adopting AI much faster since the pandemic began (TechRepublic)

Ira Cohen: I think small businesses will leverage AI for particular tasks, small tasks, and probably, the adoption there will be less, because AI, at the end of the day, is fueled by data. And if you don't have a lot of data, you can make your own decisions fairly quickly anyway. But, for larger companies, the ones that do not embrace it, and do not start using it heavily to make better decisions, to forecast the future, they'll be left behind, because they are not going to benefit from what those tools will give them: either improved margins, by being more efficient, or an improved ability to sell more. They will start losing out.

There's definitely a race for them to actually do this, to embrace these tools quickly. For the small businesses, I think it will be slower to embrace, unless it's for very particular tasks that before, they could not do, because they could not hire the people to do it. But, now, they'll get the tool that already does it for a small fraction of the price it would be if they had to develop it themselves, and then they can run away with it.

I mean, even looking at just simple e-commerce sites, right? You're trying to sell something, and you want to have a recommendation engine, like Amazon has a recommendation engine on its website, which does improve how much you're selling. Today, a small website, or a small seller, cannot develop it themselves. It's too expensive. But with it becoming available as a service from companies, they can actually start using it for a fraction of the price, and get the benefit of it even for themselves. For recruiting tools, it will give them a benefit. They'll probably want to buy it rather than trying to develop it themselves.

China is betting big on AI – and here’s why it’s going to pay off – South China Morning Post

China will see the greatest economic gains from artificial intelligence (AI) by 2030 as the technology accelerates global GDP growth by increasing productivity and boosting consumption, says PwC in a new research report released Tuesday.

Dubbed the fourth industrial revolution, AI technologies are expected to boost global GDP by a further 14 per cent by 2030, the equivalent of an additional US$15.7 trillion, and China, as the world's second largest economy, will see an estimated 26 per cent boost to GDP by that time, the PwC report said.

Launched at the World Economic Forum's annual June meeting in northeast China's Dalian city, often known as the Summer Davos, the report said labour productivity improvements would account for over half of the US$15.7 trillion in economic gains from AI between 2016 and 2030, more than the current output of China and India combined, while increased consumer demand resulting from AI-enabled product enhancements will account for the rest.

"The analysis demonstrates how big a game changer AI is likely to be, transforming our lives as individuals, enterprises and as a society," said Anand Rao, global leader of artificial intelligence at PwC.

The future is here: China sounds a clarion call on AI funding, policies to surpass US

The technology behind an array of advanced applications, from facial recognition to self-driving vehicles, is the centre of attention for almost every tech company in China as they bet big on AI to gain a competitive edge before it begins to have a more profound impact on people's lives.

Since the start of this year, Chinese internet heavyweights Baidu, Tencent Holdings and Alibaba Group have been competing harder than ever to lure top AI talent from Silicon Valley in order to accelerate their own AI development. Alibaba owns the South China Morning Post.

PwC predicts that North America will experience productivity gains earlier than China due to its first-mover advantage in AI, but China is expected to pull ahead of the United States in terms of AI productivity gains within 10 years, after it catches up to the technology.

According to the PwC research, AI is projected to boost China's GDP by 26 per cent by 2030, while for North America the number is 14.5 per cent. For developing countries in Latin America and Africa, the expected GDP gain will only be about 6 per cent due to the much lower rates of AI technology adoption.

"China has already made great leaps in the development of AI and our research shows that [AI] has the potential to be a powerful remedy for slowing growth," said Chuan Neo Chong, chairwoman of Greater China operations for global consultancy Accenture.

Artificial intelligence could put as many as 50m Asian jobs at risk over next 15-20 years: UBS study

In separate research done by Accenture, AI is expected to accelerate China's annual growth rate from 6.3 per cent to 7.9 per cent by 2035. The Accenture research, published on Monday, shows that AI could boost China's gross value added (GVA) by US$7.11 trillion by 2035, and has the potential to boost China's labour productivity by 27 per cent by the same year.

"Minimising the economic imbalances brought about by AI will be an important challenge," said Lee Kai-fu, the former Greater China president of Google and founder of venture capital firm Sinovation Ventures.

Those developing countries which will experience rapid population growth in coming decades are expected to be hardest hit by AI in terms of job losses, he added.

"Most of the wealth created by AI will go into the US and China because of their big pool of talent and [high levels of data generation], as well as the size of their markets," said Lee, who is one of the most prominent advocates of AI in China.

Guavus Unwraps New AI-based Analytics and Automation Products for CSPs – GlobeNewswire

SAN JOSE, Calif., July 16, 2020 (GLOBE NEWSWIRE) -- Guavus, a pioneer in AI-based analytics for communications service providers (CSPs), today announced the launch of Guavus-IQ -- a comprehensive product portfolio that provides a unique multi-perspective analytics experience for CSPs.

Guavus-IQ delivers highly instrumented analytics insights to CSPs on how each subscriber is experiencing their network and services (bringing the outside perspective in) and how their network is impacting their subscribers (understanding how their internal operations are impacting their customers). This single, real-time outside-in/inside-out perspective helps operators identify subscriber behavioral patterns and better understand their operational environments. This enables them to increase revenue opportunities through data monetization and improved customer experience (CX), as well as reduce costs through automated, closed-loop actions.

In addition, Guavus-IQ has been designed to be operator-friendly for CSPs -- it doesn't require the operator to be a data science specialist or expert. It combines network and data science and leverages explainable AI to deliver easy-to-understand analytics insights to CSP users across the business at a significantly reduced cost.

The new Guavus-IQ products build on Guavus' ten-plus years of experience providing innovative analytics solutions focused exclusively on the needs of CSPs. The products are currently deployed in 8 of the top CSPs in Europe, Latin America, Asia-Pac and North America.

Big Data Doesn't Need to Come at a Big Cost

Guavus-IQ consists of two main product categories: Service-IQ and Ops-IQ.

Just because data is big doesn't mean it can't be resource-efficient. The Guavus-IQ products require approximately 50% of the compute/processing-related hardware used by traditional analytics solutions, thanks to their advanced big data collection capabilities and real-time, in-memory stream processing edge analytics. This results in more powerful data collection from over 200 sources at half the cost.

Ops-IQ provides additional operational efficiencies through a combination of anomaly detection, fault correlation, and root cause analysis -- which not only lower OPEX but elevate CX. Ops-IQ fault analytics suppress more than 99.5% of alarms not associated with network incidents, and predict incident-causing alarms with 93.9% accuracy. This significantly improves the Mean-Time-To-Response (MTTR) in a CSP Network Operations Center (NOC), currently saving more than $10 million a year in OPEX costs for one large service provider customer.

Service-IQ also plays a significant role in positively impacting CX and reducing costs. Service-IQ allows for flexible data reuse: when it ingests new data, it ingests that data once and then enables the reuse of the same data for additional use cases across both Service-IQ and Ops-IQ. This new level of efficiency saves operators time with ingest, a costly and complex part of the analytics process.

Because the data pipeline of previously ingested data can be automatically re-instantiated for use within Service-IQ or Ops-IQ, CSPs don't need to become big data experts in order to leverage the power and value of the data they've collected. Instead, the Guavus-IQ products apply proven data science methods inside the integrated solutions to do the heavy lifting for the operator. This also allows analytics projects to be streamlined and shortened by more than 40-50%, as many organizations struggle not only with managing and deploying the infrastructure but also with gaining value in the early stage of analytics and AI experimentation.

Supporting Quotes:

"In the world of 5G, IoT and now a global pandemic, we're seeing an even greater need for operators to take advantage of AI and analytics to deal with increased network complexity, operational costs and subscriber demands for improved experience. To address these challenges, operators need to better understand network and subscriber behavior and be able to do so in real time.

"These challenges can be tackled by utilizing big data collection, in-memory stream processing and AI-based analytics capabilities to ingest, correlate and analyze data (on premise and in the cloud) in real time from operators' multivendor infrastructure. Insights generated can then be used to better serve operators' needs across network, service, and marketing operations," said Adaora Okeleke, Principal Analyst, Service Provider Operations and IT, Omdia.

"We've seen a lot of excitement from the top CSPs worldwide in Guavus-IQ. Our customers plan to leverage the products for root cause analysis, subscriber behavior analysis, new personalized products, and IoT services, among other use cases. They like the fact that Guavus-IQ is easy to operate and that it's highly instrumented specifically for operators and their multivendor infrastructures, versus traditional general-purpose enterprise platforms or homogeneous network-equipment-oriented solutions," said Alexander Shevchenko, CEO of Guavus, a Thales company.

About Guavus (a Thales company): Guavus is at the forefront of AI-based big data analytics and machine learning innovation, driving digital transformation at 6 of the world's 7 largest telecommunications providers. Using the Guavus-IQ analytics solutions, customers are able to analyze big data in real time and take decisive actions to lower costs, increase efficiencies, and dramatically improve the end-to-end customer experience, all with the scale and security required by next-gen 5G and IoT networks.

Guavus enables service providers to leverage applications for advanced network planning and operations, mobile traffic analytics, marketing, customer care, security and IoT. Discover more at http://www.guavus.com and follow us on Twitter and LinkedIn.

Media Contact: Laura Stiff, Guavus PR & Analyst Relations, +1-408-827-1242, laura.stiff@external.thalesgroup.com

Microsoft made its AI work on a $10 Raspberry Pi – Engadget

The idea came about from Microsoft's lab teams in Redmond and Bangalore, India. Ofer Dekel, who manages an AI optimization group at the Redmond lab, was trying to figure out a way to stop squirrels from eating flower bulbs and seeds from his bird feeder. As one does, he trained a computer vision system to spot squirrels, and installed the code on a $35 Raspberry Pi 3. Now, it triggers the sprinkler system whenever the rodents pop up, chasing them away.

"Every hobbyist who owns a Raspberry Pi should be able to do that," Dekel said in Microsoft's blog. "Today, very few of them can." The problems is that it's too expensive and impractical to install high-powered chips or connected cloud-computing devices on things like squirrel sensors. However, it's feasible to equip sensors and other devices with a $10 Raspberry Zero or the pepper-flake-sized Cortex M0 chip pictured above.

To make it work on systems that often have just a few kilobytes of RAM, the team compressed neural network parameters down to just a few bits instead of the usual 32. Another technique is "sparsification" of algorithms, a way of pruning them down to remove redundancies. By doing that, they were able to make an image detection system run about 20 times faster on a Raspberry Pi 3 without any loss of accuracy.
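Microsoft hasn't published that code here, but both ideas can be sketched in a few lines of NumPy on a toy weight matrix; the level count and pruning fraction below are arbitrary choices for illustration, not Microsoft's actual settings:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=(256, 256)).astype(np.float32)  # 32-bit floats

# 1) Quantization: map each 32-bit weight to one of a few levels
#    (here 8 levels, i.e. 3 bits) spanning the observed range.
levels = np.linspace(weights.min(), weights.max(), 8)
quantized = levels[np.abs(weights[..., None] - levels).argmin(axis=-1)]

# 2) Sparsification: prune small-magnitude weights to exact zero so
#    they can be skipped entirely at run time.
threshold = np.quantile(np.abs(quantized), 0.9)   # keep the largest 10%
pruned = np.where(np.abs(quantized) >= threshold, quantized, 0.0)

print(f"nonzero weights kept: {np.count_nonzero(pruned) / pruned.size:.0%}")
```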

However, taking it to the next level won't be quite as easy. "There is just no way to take a deep neural network, have it stay as accurate as it is today, and consume 10,000 times less resources. You can't do it," said Dekel. For that, they'll need to invent new types of AI tech tailored for low-powered devices, and that's tricky, considering researchers still don't know exactly how deep learning tools work.

Microsoft's researchers are working on a few projects for folks with impairments, like a walking stick that can detect falls and issue a call for help, and "smart gloves" that can interpret sign language. To get some new ideas and help, they've made some of their early training tools and algorithms available to Raspberry Pi hobbyists and other researchers on Github. "Giving these powerful machine-learning tools to everyday people is the democratization of AI," says researcher Saleema Amershi.

AI will be a big part of the DoDs big data effort – Federal News Network

Best listening experience is on Chrome, Firefox or Safari. Subscribe to Federal Drive's daily audio interviews on Apple Podcasts or PodcastOne.

The Defense Department's data strategy, released just a few weeks ago, says improving data management will help it fight and win wars. It says artificial intelligence will become an important component of data-fueled digital modernization. For an assessment, Tara Murphy Dougherty, CEO of data analysis company Govini, joined Federal Drive with Tom Temin.

Tom Temin: Tara, good to have you back.

Tara Murphy Dougherty: Thanks, Tom. It's great to be here.

Tom Temin: So give us from your standpoint, as someone who analyzes DoD data itself, what does this data strategy really say? What are they actually trying to do here?

Tara Murphy Dougherty: Sure. So as a company that's purpose-built for national security problems related to data and leveraging the use of AI and machine learning to solve these problems, we do track defense data issues very closely. So I was thrilled to see the data strategy be released in early October. That alone is a significant achievement. But overall, it's really strong on content, too, which is not something you can take for granted with these government documents sometimes.

Tom Temin: Well, these types of government documents seem to fall into two categories, which I call 16 pages and under, and 200 pages and greater, and this one falls into the smaller category. So it's kind of a high-level view of what they plan to do. There are a few fresh acronyms. But is there more beyond that, that we're just not seeing in terms of detail of execution?

Tara Murphy Dougherty: There is, Tom, there's a lot packed into this relatively short document. And the most important takeaway from the DoD data strategy is that it establishes data as a strategic resource for the department. And that was exactly the right place to start. The second key takeaway is that for the first time, the department is looking holistically at the use of data across the operational battlefield applications of data and use of data in support of joint all-domain operations, as well as senior leader decision support and business analytics, laying that groundwork and taking that holistic approach. Those are really important elements of the first-ever data strategy. Like I said, it was the right place to start. And now the question will be can it execute the blueprint that it provides alongside those core elements, which is fairly ambitious.

Tom Temin: Yes, under Section 4 there are seven goals and enabling objectives: making data visible, making it accessible, understandable, linked, trustworthy, interoperable, and secure. They even have an acronym, VAULTIS, for all of that. And that's a big bite that they've taken here. And it seems like the only sensible way to try to carry it out is application by application; otherwise they're, to use the old term, boiling the ocean.

Tara Murphy Dougherty: Boiling the ocean is exactly what I thought as well. But as a framework, the comprehensive nature of it works really nicely. And I believe that was the goal. Also, it wouldn't be DoD if they didn't put an acronym on it. So there's that. The key to achieving these goals is not only going to be execution and how this strategy turns into plans, it's going to be the answer to the question of whether the department and the leadership is prepared to resource these activities. We may be facing budget cuts in upcoming years, we may not, that's still an open question. But as budgets start to constrict, if they head in that direction, this will be a tough area to argue for additional resources and growing investments for modernization. Hopefully, the strategy is effective in cementing the need for those resources and making the case for the importance of this for the department, which it very, very much is. And the key to that importance is really how the strategy talks about the use of data and the role of data in artificial intelligence, which we know is going to play a significant role in the future of warfare. And data is core to effective AI.

Tom Temin: We're speaking with Tara Murphy Dougherty; she is the CEO of Govini. And the DoD does have a dedicated channel of spending on artificial intelligence. There's a program office for this. And then you have each of the armed services that have their own artificial intelligence requirements they can see clearly, and perhaps program offices working there, plus all of the other components. So integration, governance, all of those pieces would seem to be also equally important here.

Tara Murphy Dougherty: Exactly. And one of the additional recent changes from DoD is the establishment of the first-ever chief data officer, which happened a few years ago. And there's a new chief data officer in place, David Spirk, coming from [U.S. Special Operations Command], a very accomplished defense professional. He has really taken the perspective that DoD needs to think about data, not just in a strategic way, but in a streamlined way, where he as the person accountable and responsible for data operations within DoD can have visibility into the full spectrum of not just data activities and data systems, but also personnel who are working on data, whether that's as data scientists or people supporting the elements of the data enterprise in DoD. That is an ambitious undertaking, but if he can pull it off, it will be really effective to having a sense of where those resources need to go, and where the department is getting the best return on its investment in data capabilities.

Tom Temin: Is it necessary for them to inventory the data that the department has? I think the reference is to the larger massive flows of data that are generated. Or is it better to approach it from the application standpoint, by saying, here's what we want to do, now let's go find the data that's out there that we need for this, and maybe back into some kind of a catalog?

Tara Murphy Dougherty: It's probably best to start with how data is being used. This is an area where the Joint Artificial Intelligence Center for DoD has done a really good job of highlighting the most important areas where DoD needs to be using its data and starting there in terms of setting standards and improving data hygiene. And the not always glamorous, but actually very important aspects of data work that make it effective. You know, you mentioned earlier the dedicated funding line for DoD, and there's a lot of funding in the department that goes into AI efforts. They're not always centralized. They're not always coordinated. This was part of the original premise of creating the JAIC, as it's known. And yet we see, on one hand, the department saying that artificial intelligence is a key priority for DoD, and an absolutely important aspect of being an effective competitor in the great power competition that we're facing primarily with China. And then on the other hand, you look at the budget numbers, and funding for the Joint Artificial Intelligence Center is flat over the next five years. So we are going to have to get a handle on not just what the strategy is, not just what the plans are to execute it, but ensure that what goes out the door from a funding perspective actually lines up with that strategy.

Tom Temin: Well, just to make an absurd analogy, which maybe shows that this needs to be very diffuse, you could say that, yes, artificial intelligence is important to competitive advantage. So is the ability to shoot straight and hit the target. But there's not one department of shooting straight. It's something that is distributed over every member that's on the tooth end of the military. So maybe they have to diffuse this artificial intelligence skill and data knowledge to the edges?

Tara Murphy Dougherty: That's exactly right. And that is another achievement of the DoD data strategy, the fact that it really took on, and took on purposefully, the cultural and workforce aspects of this. I was surprised, frankly, that there was so much attention given to the talent and workforce side of bringing data into DoD and making the department an increasingly data-driven organization. And yet it was exactly right to do so. So then the question will be how much does the department decide it needs to internalize and internally resource or create, both from a talent perspective in terms of developing technical skills for its workforce. And how much does it want to rely on the private sector? And that's an area where the department had a really solid model for private sector collaboration in the Cold War. And we certainly over the past 10 to 15 years have seen a significant growth in efforts from the Department of Defense to work with innovative new technology companies, and to reach what we're calling the national security innovation base, rather than the traditional defense industrial base. But there's still a long way to go there. And I'm not convinced that the department yet has really figured out what its new model is, particularly in the data and software and other technology sectors.

Tom Temin: Sounds like they need a couple of good wins they can point to, to kind of give everyone an example of what's possible.

Tara Murphy Dougherty: That would go a long way, particularly on the heels of this data strategy. As I mentioned, just getting it out the door is a big win for DoD. Now it will need to take steps to start to implement it, and that will cement these principles, these goals, these essential capabilities, and indeed the way ahead.

Tom Temin: Tara Murphy Dougherty is the CEO of Govini. Thanks so much.

Tara Murphy Dougherty: Thank you, Tom.

Tom Temin: We'll post this interview along with a link to the DoD data strategy at FederalNewsNetwork.com/FederalDrive. Subscribe to the Federal Drive at PodcastOne or wherever you get your podcasts.

AI to Ensure Fewer UFOs – IEEE Spectrum

Photo: Black Sage Technologies. Searching the Skies: Black Sage Technologies' artificial-intelligence system spots flying objects and determines whether they're a threat.

Is it a bird? A plane? Or is it a remotely operated quadrotor conducting surveillance or preparing to drop a deadly payload? Human observers won't have to guess, or keep their eyes glued to computer monitors, now that there's superhuman artificial intelligence capable of distinguishing drones from those other flying objects. Automated watchfulness, thanks to machine learning, has given police and other agencies tasked with maintaining security an important countermeasure to help them keep pace with swarms of new drones taking to the skies.

The security challenge has only grown over the past few years: Millions of people have bought consumer drones and sometimes flown them into off-limits areas where they pose a hazard to crowds on the ground or larger aircraft in the sky. Off-the-shelf drones have also become affordable and dangerous weapons for the Islamic State and other militant groups in war-torn regions such as Iraq and Syria.

The need to track and possibly take down these flying intruders has spawned an antidrone market projected to be worth close to US $2 billion by the mid-2020s. The lion's share of that haul will likely go to companies that can best leverage the power of machine-learning AI based on neural networks.

But much of the antidrone industry still lags behind the rest of the tech sector in making effective use of machine-learning AI, says David Romero, founder and managing partner of Black Sage Technologies, based in Boise, Idaho. "With machine learning, 90 percent of the work is figuring out how to make it so simple that the customer doesn't have to know how machine learning works," says Romero. "Many companies do that well, but not in the defense community."

He and Ross Lam, his Black Sage cofounder, are poised to take advantage of this opening for the upstarts looking to take on the defense industry's giants. They initially collaborated on a project that trained machine-learning algorithms to automatically detect deer on highways based on radar and infrared camera data. Eventually, they realized that the same approach could help spot drones and other unidentified flying objects.

Since the self-funded startup's launch in 2015, it has won multiple contracts from the United States government, including for U.S. military forces deployed in Iraq and Afghanistan, and from U.S. allies.

Romero says it's fairly straightforward to apply machine learning to the task of automatically detecting and classifying flying objects. But because the stakes are high (mistakenly shooting down a small passenger plane or failing to take out an explosives-laden drone intruder could be equally disastrous), Black Sage puts its system through a rigorous training phase when it's installed at a new site. The system's radar and infrared cameras capture information about each unidentified flying object's velocity, size, altitude, and so forth. Then a human operator helps train the machine-learning algorithms by positively identifying certain classes of drones (rotor or fixed-wing) as well as other objects such as birds or manned aircraft. For proof that it has learned its lessons well, the AI is tested against 20 percent of the positively identified data set, the part reserved specifically for cross validation.
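In outline, that holdout scheme looks like the Python sketch below; the track features, labels, and commented-out model calls are invented stand-ins, not Black Sage's actual pipeline:

```python
import random

random.seed(42)
# Positively identified tracks, each with sensor features and a label.
labeled_tracks = [
    {"velocity": random.random(),
     "label": random.choice(["rotor_drone", "fixed_wing_drone",
                             "bird", "manned_aircraft"])}
    for _ in range(1000)
]

# Reserve 20 percent of the labeled data purely for validation.
random.shuffle(labeled_tracks)
cut = int(0.8 * len(labeled_tracks))
train_set, holdout = labeled_tracks[:cut], labeled_tracks[cut:]

# model.fit(train_set)             # hypothetical classifier training
# accuracy = model.score(holdout)  # tested only on the unseen 20 percent
print(len(train_set), len(holdout))  # 800 200
```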

Another company, Dedrone (originally based in Kassel, Germany, but now headquartered in San Francisco), is taking a similar approach. When a Dedrone system is being installed at a new site, humans label unfamiliar objects as part of the training process, which also updates the company's proprietary DroneDNA library. Since its launch in 2014, Dedrone's machine-learning software has helped safeguard events and locations such as a Clinton-Trump presidential debate, the World Economic Forum, and Citi Field, home of the New York Mets baseball team.

"Each time we update DroneDNA, we process over 250 million different images of drones, aircraft, birds, and other objects," says Michael Dyballa, Dedrone's director of engineering. "In the past eight months, we've annotated 3 million drone images."

Though Black Sage's and Dedrone's automated detection systems are said to be capable of running without human assistance after their respective training phases, the companies' clients may choose to put humans in the loop for engaging active defenses, such as jammers or lasers, to take down flying intruders. Such caution is critical at sites like airports, where drone detection accuracy greater than 90 percent still means the occasional false alarm or case of mistaken identity. Even so, a human's interpretive ability can only supplement the ceaseless vigilance that AI systems will need to provide as the number of drones continues to rise.

Link:

AI to Ensure Fewer UFOs - IEEE Spectrum

AI is not optional for retail – VentureBeat

Most people don't realize that they're likely exposed to AI each and every time they shop online, whether it's on eBay, Nordstrom.com, Warby Parker, or any other retailer. When you are searching for an item and a merchandising strip appears saying something like "similar items," that's AI in its simplest terms. It's what gives retailers the ability to automatically make informed recommendations.
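
Under the hood, a "similar items" strip is typically a nearest-neighbor lookup over item vectors. The toy Python sketch below ranks catalog items by cosine similarity; the catalog and the embedding vectors are invented for illustration and stand in for whatever representations a retailer actually learns.

```python
# Toy "similar items" lookup: items live as vectors, and neighbors are
# ranked by cosine similarity. Catalog and vectors are made up.
import numpy as np

catalog = ["running shoe", "trail shoe", "dress shoe", "sunglasses"]
vectors = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.2, 0.9, 0.0],
    [0.0, 0.1, 0.9],
])

def similar_items(query_idx, k=2):
    q = vectors[query_idx]
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    ranked = np.argsort(-sims)                    # most similar first
    return [catalog[i] for i in ranked if i != query_idx][:k]

print(similar_items(0))  # ['trail shoe', 'dress shoe']
```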

AI has been around for many years, but recent advancements have moved AI out of the realm of science fiction and made it a business imperative. The game changers: powerful new GPUs, dedicated hardware, new algorithms, and platforms for deep learning. These enable massive data inputs to be calculated quickly and made actionable, as technology powers new algorithms that dramatically increase the speed and depth of learning. In mere seconds, deep learning can reach across billions of data points with thousands of signals and dozens of layers.

We all aspire to a grand vision of AIs role in commerce, and recent developments are creating a fertile environment for new forms of personalization to occur between brands and consumers. Make no mistake about it, the implications of AI will be profound. This is the new frontier of commerce.

As an industry, we are just beginning to scratch the surface of AI. In the next few years, we will see AI-powered shopping assistants embedded across a wide variety of devices and platforms. Shopping occasions will take advantage of camera, voice interfaces, and text.

We are already witnessing the early success of voice-activated assistants like Google Home, Siri, and Cortana. It won't be long before we see virtual and augmented reality platforms commercialized, as well. We see a future rich with voice-activated and social media assistants on platforms such as Messenger, WeChat, WhatsApp, and Instagram. Personal assistants will be everywhere and are already being woven into the fabric of everyday life. This means commerce will become present wherever and whenever the user is engaged on the social, messaging, camera, or voice-activated platforms of their choice.

AI by itself is simply a catalyst for achieving greater levels of personalization with shoppers. Customer data and human intelligence are the critical ingredients needed to run a personal AI engine. As we continue to launch more sophisticated applications, technologists should continue to focus on how to make greater use of our treasure trove of customer data. Looking ahead, the industry will evolve to combine customer data and human expertise into a deep knowledge graph. This will establish a knowledge base to create highly personal and contextual experiences for consumers. For the commerce industry, this will allow us to get a clearer understanding of shoppers' intent and to service them in a more personalized way.

Keyword search for shopping is not enough anymore. The ability to use text, voice, and photos is becoming the new norm because these avenues provide users with a much richer and more efficient way to express their initial shopping intent. We call this multimodal shopping. And these new types of consumer interactions yield a tremendous amount of user data that can be poured right back into AI algorithms to improve contextual understanding, predictive modeling, and deep learning.

Across the three modes of multimodal AI, we're starting to get much better at understanding our customers and the way they like to interact with us. A few good examples of this have to do with how our personal shopping assistant, eBay ShopBot on Facebook Messenger, remembers you. It can keep track of your shirt size or the brands you like, so it won't keep suggesting Nike when you prefer Adidas. The assistant also uses computer vision: it can find similar products it knows you like based on a similar image or an exact photo match.

Innovating on a canvas of AI provides many new opportunities to create highly contextual and personalized shopping experiences. From our perspective, every company should be investing heavily in AI, and it shouldn't just be about using cognitive services. Companies should actually be developing their own models that keep them on the cutting edge of technology. While there is still a lot of work to be done in this area, one thing is clear. The companies that chart the right course in this exciting endeavor will prosper. The ones that don't face extinction.

Japjit Tulsi is the VP of Engineering at eBay.

Read more from the original source:

AI is not optional for retail - VentureBeat

Google creates AI that can make its own plans and envisage consequences of its actions – The Independent

See the rest here:

Google creates AI that can make its own plans and envisage consequences of its actions - The Independent

Samsung Galaxy S8’s Bixby AI could beat Google Assistant on this front – CNET

My AI is smarter than your AI.

That's the taunt that Samsung Galaxy S8 owners may be able to lob at Google Pixel users if the S8's rumored Bixby Assistant launches with seven or eight languages, as reported by SamMobile.

Samsung's Bixby AI will go after Google Assistant, Apple's Siri and Amazon Alexa for phones

In the Google Pixel, Assistant currently supports two languages, according to Google's website: English and German. The Google Allo app, which also uses Google Assistant and works on more phones, supports five languages: English, German, Hindi, Japanese and Portuguese. (You can still use Google's voice search/Google Now with many more languages on the Pixel phones, but the Google Assistant launch gesture turns off when you switch your primary language to, say, Spanish.)

Launching its own smart AI assistant is an important move for Samsung and its future Galaxy and Note phones. The company, which strives to dominate the smartphone world against Apple's iPhone, stands to win fans if its Bixby assistant can outperform Google's Assistant, Apple's Siri and Amazon's Alexa, which will land on its first phone later this month.

This isn't the first time that Samsung has tried to out-Google Google either. The company hoped to supplant Google's voice search tool with Samsung's branded S Voice app, and introduced other software services of its own. The company has largely pulled back on preloaded apps and shuttered some of the services, so it'll be interesting to see how well Bixby AI will be able to compete with more established assistants, especially in these early days of AI on phones.

Bixby is rumored to:

The Samsung Galaxy S8 is expected to launch March 29 and sell in mid-April.

Samsung did not immediately respond to CNET's request for comment.

See the original post here:

Samsung Galaxy S8's Bixby AI could beat Google Assistant on this front - CNET

The Next Generation Of Artificial Intelligence (Part 2) – Forbes

Deep learning pioneer Yoshua Bengio has provocative ideas about the future of AI.

For the first part of this article series, see here.

The field of artificial intelligence moves fast. It has only been 8 years since the modern era of deep learning began at the 2012 ImageNet competition. Progress in the field since then has been breathtaking and relentless.

If anything, this breakneck pace is only accelerating. Five years from now, the field of AI will look very different than it does today. Methods that are currently considered cutting-edge will have become outdated; methods that today are nascent or on the fringes will be mainstream.

What will the next generation of artificial intelligence look like? Which novel AI approaches will unlock currently unimaginable possibilities in technology and business?

My previous column covered three emerging areas within AI that are poised to redefine the field, and society, in the years ahead. This article will cover three more.

AI is moving to the edge.

There are tremendous advantages to being able to run AI algorithms directly on devices at the edge (e.g., phones, smart speakers, cameras, vehicles) without sending data back and forth from the cloud.

Perhaps most importantly, edge AI enhances data privacy because data need not be moved from its source to a remote server. Edge AI is also lower latency since all processing happens locally; this makes a critical difference for time-sensitive applications like autonomous vehicles or voice assistants. It is more energy- and cost-efficient, an increasingly important consideration as the computational and economic costs of machine learning balloon. And it enables AI algorithms to run autonomously without the need for an Internet connection.

Nvidia CEO Jensen Huang, one of the titans of the AI business world, sees edge AI as the future of computing: AI is moving from the cloud to the edge, where smart sensors connected to AI computers can speed checkouts, direct forklifts, orchestrate traffic, save power. In time, there will be trillions of these small autonomous computers, powered by AI.

But in order for this lofty vision of ubiquitous intelligence at the edge to become a reality, a key technology breakthrough is required: AI models need to get smaller. A lot smaller. Developing and commercializing techniques to shrink neural networks without compromising their performance has thus become one of the most important pursuits in the field of AI.

The typical deep learning model today is massive, requiring significant computational and storage resources in order to run. OpenAI's new language model GPT-3, which made headlines this summer, has a whopping 175 billion model parameters, requiring more than 350 GB just to store the model. Even models that don't approach GPT-3 in size are still extremely computationally intensive: ResNet-50, a widely used computer vision model developed a few years ago, uses 3.8 billion floating-point operations to process a single image.
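
The storage figure follows directly from the parameter count. A back-of-the-envelope check in Python, assuming each parameter is stored as a 16-bit float (two bytes):

```python
# 175 billion parameters x 2 bytes each (fp16) comes to roughly 350 GB.
params = 175e9
bytes_per_param = 2
print(f"{params * bytes_per_param / 1e9:.0f} GB")  # -> 350 GB
```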

These models cannot run at the edge. The hardware processors in edge devices (think of the chips in your phone, your Fitbit, or your Roomba) are simply not powerful enough to support them.

Developing methods to make deep learning models more lightweight therefore represents a critical unlock: it will unleash a wave of product and business opportunities built around decentralized artificial intelligence.

How would such model compression work?

Researchers and entrepreneurs have made tremendous strides in this field in recent years, developing a series of techniques to miniaturize neural networks. These techniques can be grouped into five major categories: pruning, quantization, low-rank factorization, compact convolutional filters, and knowledge distillation.

Pruning entails identifying and eliminating the redundant or unimportant connections in a neural network in order to slim it down. Quantization compresses models by using fewer bits to represent values. In low-rank factorization, a model's tensors are decomposed in order to construct sparser versions that approximate the original tensors. Compact convolutional filters are specially designed filters that reduce the number of parameters required to carry out convolution. Finally, knowledge distillation involves using the full-sized version of a model to teach a smaller model to mimic its outputs.
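
To make the first two techniques concrete, here is a minimal NumPy sketch of magnitude pruning and 8-bit affine quantization applied to a toy weight matrix. It is a simplified illustration of the ideas, not any vendor's actual method; production toolchains add calibration, retraining, and hardware-aware details.

```python
# Magnitude pruning (zero out the smallest weights) and 8-bit affine
# quantization (store values in fewer bits) on a toy weight matrix.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)).astype(np.float32)   # a layer's weights

# --- Pruning: drop the 50% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(W), 0.5)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# --- Quantization: map float32 weights onto uint8 with an affine scale.
scale = (W.max() - W.min()) / 255.0
zero_point = W.min()
W_q = np.round((W - zero_point) / scale).astype(np.uint8)   # stored form
W_dq = W_q.astype(np.float32) * scale + zero_point          # dequantized

print("sparsity:", np.mean(W_pruned == 0))                 # ~0.5
print("max quantization error:", np.abs(W - W_dq).max())   # small
```

Pruning buys sparsity (zeros that can be skipped or compressed away); quantization cuts each stored weight from 32 bits to 8, a 4x size reduction before any pruning gains.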

These techniques are mostly independent from one another, meaning they can be deployed in tandem for improved results. Some of them (pruning, quantization) can be applied after the fact to models that already exist, while others (compact filters, knowledge distillation) require developing models from scratch.

A handful of startups has emerged to bring neural network compression technology from research to market. Among the more promising are Pilot AI, Latent AI, Edge Impulse and Deeplite. As one example, Deeplite claims that its technology can make neural networks 100x smaller, 10x faster, and 20x more power efficient without sacrificing performance.

"The number of devices in the world that have some computational capability has skyrocketed in the last decade," explained Pilot AI CEO Jon Su. "Pilot AI's core IP enables a significant reduction in the size of the AI models used for tasks like object detection and tracking, making it possible for AI/ML workloads to be run directly on edge IoT devices. This will enable device manufacturers to transform the billions of sensors sold every year, things like push-button doorbells, thermostats, or garage door openers, into rich tools that will power the next generation of IoT applications."

Large technology companies are actively acquiring startups in this category, underscoring the technology's long-term strategic importance. Earlier this year Apple acquired Seattle-based Xnor.ai for a reported $200 million; Xnor's technology will help Apple deploy edge AI capabilities on its iPhones and other devices. In 2019 Tesla snapped up DeepScale, one of the early pioneers in this field, to support inference on its vehicles.

And one of the most important technology deals in years, Nvidia's pending $40 billion acquisition of Arm, announced last month, was motivated in large part by the accelerating shift to efficient computing as AI moves to the edge.

Emphasizing this point, Nvidia CEO Jensen Huang said of the deal: "Energy efficiency is the single most important thing when it comes to computing going forward. ... Together, Nvidia and Arm are going to create the world's premier computing company for the age of AI."

In the years ahead, artificial intelligence will become untethered, decentralized and ambient, operating on trillions of devices at the edge. Model compression is an essential enabling technology that will help make this vision a reality.

Today's machine learning models mostly interpret and classify existing data: for instance, recognizing faces or identifying fraud. Generative AI is a fast-growing new field that focuses instead on building AI that can generate its own novel content. To put it simply, generative AI takes artificial intelligence beyond perceiving to creating.

Two key technologies are at the heart of generative AI: generative adversarial networks (GANs) and variational autoencoders (VAEs).

The more attention-grabbing of the two methods, GANs were invented by Ian Goodfellow in 2014 while he was pursuing his PhD at the University of Montreal under AI pioneer Yoshua Bengio.

Goodfellow's conceptual breakthrough was to architect GANs with two separate neural networks, and then pit them against one another.

Starting with a given dataset (say, a collection of photos of human faces), the first neural network (called the generator) begins generating new images that, in terms of pixels, are mathematically similar to the existing images. Meanwhile, the second neural network (the discriminator) is fed photos without being told whether they are from the original dataset or from the generator's output; its task is to identify which photos have been synthetically generated.

As the two networks iteratively work against one another (the generator trying to fool the discriminator, the discriminator trying to suss out the generator's creations), they hone one another's capabilities. Eventually the discriminator's classification success rate falls to 50%, no better than random guessing, meaning that the synthetically generated photos have become indistinguishable from the originals.
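
The adversarial loop is compact enough to sketch directly. The PyTorch toy below trains a generator and discriminator on 2-D points rather than photos (a hypothetical stand-in dataset), but the structure is the one Goodfellow described: alternating discriminator and generator updates until the discriminator is reduced to guessing.

```python
# Minimal GAN loop on toy 2-D data; real GANs swap in image tensors and
# convolutional networks, but the generator/discriminator dance is the same.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.3 + torch.tensor([2.0, 2.0])  # the "dataset"
    fake = G(torch.randn(64, 8))            # generator's synthetic samples

    # Discriminator: learn to label real samples 1 and generated samples 0.
    loss_d = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to make the discriminator label fakes as real.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

At equilibrium the discriminator's success rate drifts toward 50 percent, the chance-level figure cited above.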

In 2016, AI great Yann LeCun called GANs the most interesting idea in the last ten years in machine learning.

VAEs, introduced around the same time as GANs, are a conceptually similar technique that can be used as an alternative to GANs.

Like GANs, VAEs consist of two neural networks that work in tandem to produce an output. The first network (the encoder) takes a piece of input data and compresses it into a lower-dimensional representation. The second network (the decoder) takes this compressed representation and, based on a probability distribution of the original data's attributes and a randomness function, generates novel outputs that riff on the original input.
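
In code, the encode/sample/decode path is only a few lines. This PyTorch sketch uses single linear layers and invented shapes for brevity; the "randomness function" is the standard reparameterization trick, z = mu + sigma * epsilon.

```python
# Sketch of a VAE's encode -> sample -> decode path. Shapes are illustrative.
import torch
import torch.nn as nn

enc = nn.Linear(784, 2 * 16)   # encoder: input -> (mu, log_var), latent dim 16
dec = nn.Linear(16, 784)       # decoder: latent code -> reconstruction

x = torch.rand(32, 784)                    # a batch of flattened images
mu, log_var = enc(x).chunk(2, dim=1)       # compress to a distribution
eps = torch.randn_like(mu)                 # the randomness function
z = mu + torch.exp(0.5 * log_var) * eps    # sample a latent code
x_new = torch.sigmoid(dec(z))              # generate a novel output

# Training would minimize reconstruction error plus a KL term that keeps
# the latent distribution close to a standard normal.
```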

In general, GANs generate higher-quality output than do VAEs but are more difficult and more expensive to build.

Like artificial intelligence more broadly, generative AI has inspired both widely beneficial and frighteningly dangerous real-world applications. Only time will tell which will predominate.

On the positive side, one of the most promising use cases for generative AI is synthetic data. Synthetic data is a potentially game-changing technology that enables practitioners to digitally fabricate the exact datasets they need to train AI models.

Getting access to the right data is both the most important and the most challenging part of AI today. Generally, in order to train a deep learning model, researchers must collect thousands or millions of data points from the real world. They must then have labels attached to each data point before the model can learn from the data. This is at best an expensive and time-consuming process; at worst, the data one needs is simply impossible to get one's hands on.

Synthetic data upends this paradigm by enabling practitioners to artificially create high-fidelity datasets on demand, tailored to their precise needs. For instance, using synthetic data methods, autonomous vehicle companies can generate billions of different driving scenes for their vehicles to learn from without needing to actually encounter each of these scenes on real-world streets.
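
At its smallest scale, the idea is no more exotic than the snippet below: ask a library to fabricate a labeled dataset instead of collecting and annotating one. (Real synthetic-data products render photorealistic scenes rather than abstract feature vectors, but the labels likewise come for free.)

```python
# Fabricate a labeled dataset on demand rather than collecting and
# hand-labeling one; a trivially small stand-in for synthetic-data pipelines.
from sklearn.datasets import make_classification

X, y = make_classification(
    n_samples=10_000,   # as many "data points" as we care to generate
    n_features=20,
    n_informative=5,
    random_state=0,
)
print(X.shape, y.shape)   # (10000, 20) (10000,) -- labels come for free
```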

As synthetic data approaches real-world data in accuracy, it will democratize AI, undercutting the competitive advantage of proprietary data assets. In a world in which data can be inexpensively generated on demand, the competitive dynamics across industries will be upended.

A crop of promising startups has emerged to pursue this opportunity, including Applied Intuition, Parallel Domain, AI.Reverie, Synthesis AI and Unlearn.AI. Large technology companies (among them Nvidia, Google and Amazon) are also investing heavily in synthetic data. The first major commercial use case for synthetic data was autonomous vehicles, but the technology is quickly spreading across industries, from healthcare to retail and beyond.

Counterbalancing the enormous positive potential of synthetic data, a different generative AI application threatens to have a widely destructive impact on society: deepfakes.

We covered deepfakes in detail in this column earlier this year. In essence, deepfake technology enables anyone with a computer and an Internet connection to create realistic-looking photos and videos of people saying and doing things that they did not actually say or do.

The first use case to which deepfake technology has been widely applied is pornography. According to a July 2019 report from startup Sensity, 96% of deepfake videos online are pornographic. Deepfake pornography is almost always non-consensual, involving the artificial synthesis of explicit videos that feature famous celebrities or personal contacts.

From these dark corners of the Internet, the use of deepfakes has begun to spread to the political sphere, where the potential for harm is even greater. Recent deepfake-related political incidents in Gabon, Malaysia and Brazil may be early examples of what is to come.

In a recent report, The Brookings Institution grimly summed up the range of political and social dangers that deepfakes pose: distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.

The core technologies underlying synthetic data and deepfakes are the same. Yet the use cases and potential real-world impacts are diametrically opposed.

It is a great truth in technology that any given innovation can either confer tremendous benefits or inflict grave harm on society, depending on how humans choose to employ it. It is true of nuclear energy; it is true of the Internet. It is no less true of artificial intelligence. Generative AI is a powerful case in point.

In his landmark book Thinking, Fast and Slow, Nobel-winning psychologist Daniel Kahneman popularized the concepts of System 1 thinking and System 2 thinking.

System 1 thinking is intuitive, fast, effortless and automatic. Examples of System 1 activities include recognizing a friends face, reading the words on a passing billboard, or completing the phrase War And _______. System 1 requires little conscious processing.

System 2 thinking is slower, more analytical and more deliberative. Humans use System 2 thinking when effortful reasoning is required to solve abstract problems or handle novel situations. Examples of System 2 activities include solving a complex brain teaser or determining the appropriateness of a particular behavior in a social setting.

Though the System 1/System 2 framework was developed to analyze human cognition, it maps remarkably well to the world of artificial intelligence today. In short, today's cutting-edge AI systems excel at System 1 tasks but struggle mightily with System 2 tasks.

AI leader Andrew Ng summarized this well: If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.

Yoshua Bengio's 2019 keynote address at NeurIPS explored this exact theme. In his talk, Bengio called on the AI community to pursue new methods to enable AI systems to go beyond System 1 tasks to System 2 capabilities like planning, abstract reasoning, causal understanding, and open-ended generalization.

We want to have machines that understand the world, that build good world models, that understand cause and effect, and can act in the world to acquire knowledge, Bengio said.

There are many different ways to frame the AI discipline's agenda, trajectory and aspirations. But perhaps the most powerful and compact way is this: in order to progress, AI needs to get better at System 2 thinking.

No one yet knows with certainty the best way to move toward System 2 AI. The debate over how to do so has coursed through the field in recent years, often contentiously. It is a debate that evokes basic philosophical questions about the concept of intelligence.

Bengio is convinced that System 2 reasoning can be achieved within the current deep learning paradigm, albeit with further innovations to today's neural networks.

Some people think we need to invent something completely new to face these challenges, and maybe go back to classical AI to deal with things like high-level cognition, Bengio said in his NeurIPS keynote. [But] there is a path from where we are now, extending the abilities of deep learning, to approach these kinds of high-level questions of cognitive system 2.

Bengio pointed to attention mechanisms, continuous learning and meta-learning as existing techniques within deep learning that hold particular promise for the pursuit of System 2 AI.

Others, though, believe that the field of AI needs a more fundamental reset.

Professor and entrepreneur Gary Marcus has been a particularly vocal advocate of non-deep-learning approaches to System 2 intelligence. Marcus has called for a hybrid solution that combines neural networks with symbolic methods, which were popular in the earliest years of AI research but have fallen out of favor more recently.

"Deep learning is only part of the larger challenge of building intelligent machines," Marcus wrote in the New Yorker in 2012, at the dawn of the modern deep learning era. "Such techniques lack ways of representing causal relationships and are likely to face challenges in acquiring abstract ideas. ... They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used."

Marcus co-founded robotics startup Robust.AI to pursue this alternative path toward AI that can reason. Just yesterday, Robust announced its $15 million Series A fundraise.

Computer scientist Judea Pearl is another leading thinker who believes the road to System 2 reasoning lies beyond deep learning. Pearl has for years championed causal inference (the ability to understand cause and effect, not just statistical association) as the key to building truly intelligent machines. As Pearl put it recently: "All the impressive achievements of deep learning amount to just curve fitting."

Of the six AI areas explored in this article series, this final one is, purposely, the most open-ended and abstract. There are many potential paths to System 2 AI; the road ahead remains shrouded. It is likely to be a circuitous and perplexing journey. But within our lifetimes, it will transform the economy and the world.

See the article here:

The Next Generation Of Artificial Intelligence (Part 2) - Forbes

AI hiring tools aim to automate every step of recruiting – Quartz

The firms that sell AI tools to automate recruiting have started to work the pandemic into their pitches to prospective clients: As the economy tanks and the hiring process moves almost entirely online, AI recruiting tools offer a chance to save some money and make use of new troves of digital data on prospective candidates.

In fact, the field is expected to expand during the crisis and has been attracting new investment. It's not just automated resume-sifting: There are firms competing to automate every stage of the hiring process. And while the machines seldom make hiring decisions on their own, critics say their use can perpetuate discrimination and inequality.

AI firm Textio claims it can optimize every word of a job posting, using a machine learning model that correlates certain turns of phrase with better hiring outcomes. Companies hiring in California, for example, are advised to describe things as "awesome" to appeal to local job seekers, while New York employers are counseled to avoid the adjective.

Big name firms like LinkedIn and ZipRecruiter use matchmaking algorithms to comb through hundreds of millions of job postings to connect candidates with compatible companies. Smaller competitors, like GoArya, seek to differentiate themselves by scraping data from the internetincluding social media profilesto inform recruiting decisions.

Firms like Mya promise to automate the task of reaching out to candidates via email, text, WhatsApp, or Facebook Messenger, using natural language processing to have "open-ended, natural, and dynamic conversations." The company's chatbots even conduct basic screening interviews, filtering out early-stage applicants who don't meet the employer's qualifications. Other companies, like XOR and Paradox, sell chatbots designed to schedule interviews and field applicants' questions.

Some AI vendors (including Ideal, CVViZ, Skillate, and SniperAI) promise to cut the drudgery of hiring by automatically comparing applicants' resumes with those of current employees. Tools like these have faced criticism for recreating existing inequalities: Even if the algorithms are programmed to ignore traits like race or gender, they might learn from past hiring data to pick up on proxies for these traits, for example, prioritizing candidates who played lacrosse or are named Jared. Amazon developed its own screener and quickly scrapped it in 2018 after finding it was biased against women.

Recruiting firm HireVue, which boasts 700 corporate clients including Hilton and Goldman Sachs, sells an AI tool that analyzes interviewees' facial movements, word choice, and speaking voices to assign them an "employability" score. The platform is so ubiquitous in industries like finance and hospitality that some colleges have taken to coaching interviewees on how to speak and move to appeal to the platform's algorithms.

AI firm Humantic offers to "understand every individual without spending your time or theirs" by using AI to create psychological profiles of applicants based on the words they use in resumes, cover letters, LinkedIn profiles, and any other piece of text they submit.

Meanwhile, Pymetrics puts current and prospective employees through a series of 12 games to glean data about their personalities. Its algorithms use the data to find applicants that fit company culture. In a 2017 presentation, a Pymetrics representative demonstrated a game that required users to react when a red circle appears, but do nothing when they see a green circle. "That game was actually looking at your levels of impulsivity, it was looking at your attention span, and it was looking at how you learn from your mistakes," she told the crowd. Critics suggest the games might just measure which candidates are good at puzzles.

Read the original here:

AI hiring tools aim to automate every step of recruiting - Quartz

Gartner vision quest sees Microsoft, Google and IBM nipping at Amazon Web Services’ heels in cloud AI – The Register

Gartner analysts have exhaled a "Magic Quadrant" report on Cloud AI developer services, concluding that while AWS is fractionally ahead, rivals Microsoft and Google are close behind, and that IBM is the only other company deserving a place in the "Leaders" section of the chart.

Gartner's team of five mystics reckon that this is a significant topic. "By 2023, 40 per cent of development teams will be using automated machine learning services to build models that add AI capabilities to their applications, up from less than 2 per cent in 2019," they predicted. The analysts also said that 50 per cent of "data scientist activities" will be automated by 2025, alleviating the current shortage of skilled humans.

The companies studied were Aible, AWS, Google, H2O.ai, IBM, Microsoft, Prevision.io, Salesforce, SAP and Tencent. Alibaba and Baidu were excluded because of a requirement that products span "at least two major regions".

Gartner's Magic Quadrant for Cloud AI developer services

AWS was praised for its wide range of services, including SageMaker AutoPilot, announced late last year, which automatically generates machine-learning models. However, some shortcomings in SageMaker were addressed during the course of the research, said the analysts. It is a complex portfolio, though, and can be confusing. In addition: "When users move from development to production environments, the cost of execution may be higher than they anticipated." Gartner suggested developers attempt to model production costs early on, and even that they plan to move compute-intensive workloads on-premises as this may be more cost-effective.

Google was ranked just ahead of Microsoft on "completeness of vision" but fractionally behind on "ability to execute". Gartner's analysts were impressed with its strong language services, as well as its "what-if" tool, which lets you inspect ML models to assist explainability, the art of determining why an AI system delivers the results it does. Another plus was that Google's image recognition service can be deployed in a container on-premises. Snags? The report identified a lack of maturity in Google's cloud platform: "The organization is still undergoing substantial change, the full impact of which will not be apparent for some time."

Microsoft won plaudits for the deployment flexibility of its AI services, on Azure or on-premises, as well as its wide selection of supported languages and its high level of investment in AI. A weakness, said the analysts, was lack of NLG (Natural Language Generation) services, though these are on the roadmap. The report also noted: "Microsoft can be challenging to engage with, due to a confusing branding strategy that spans multiple business units and includes Azure cognitive services and Cortana services. This overlap often confuses customers and can frustrate them." In addition, "it can be difficult to know which part of Microsoft to contact."

IBM is placed a little behind the other three, but still identified as having a "robust set of AI ML services". Further, "according to its users, developing conversational agents on IBM's Watson Assistant platform is a relatively painless experience." That said, like Microsoft, IBM can be difficult to work with, having "different products, from different divisions, being handled by various development teams and having various pricing schemes," said the analysts.

All four contenders can maybe take some comfort from Gartner's report, which places the three leaders close together and IBM, with its smaller cloud product overall, not that far behind. Other considerations, such as existing business relationships, or points of detail in the AI services you want to use, could shift any one of them into the top spot for a specific project.

One of the points the researchers highlighted is that it can be cheaper to run compute-intensive workloads on-premises. Using standard tools gives the most flexibility, and in this respect Google's recent announcement of Kubeflow 1.0, which lets devs run ML workflows on Kubernetes (K8s), is of interest. A developer can use Kubeflow on any K8s cluster including OpenShift. Google said it will support running ML workloads on-premises using Anthos in an upcoming release.

Here is the original post:

Gartner vision quest sees Microsoft, Google and IBM nipping at Amazon Web Services' heels in cloud AI - The Register

Combating Covid-19 with the Help of AI, Analytics and Automation – Analytics Insight

In a global crisis, the use of technology to gain insights into socio-economic threats is indispensable. In the current situation, where the entire world faces the global pandemic of Covid-19, finding a cure and distributing it is a difficult task. Fortunately, today we have new and advanced technologies like AI, automation and analytics that can help. AI in particular can orchestrate troves of data to discover connections, helping determine what kinds of treatments could work and which experiments to follow next.

Across the world, governments and health authorities are now exploring distinct ways to contain the spread of Covid-19, as the virus has already dispersed across 196 countries in a short time. According to a professor of epidemiology and biostatistics at George Washington University and the SAS analytics manager for infectious disease epidemiology and biostatistics, data, analytics, AI and other technology can play a significant role in helping to identify, understand and predict disease spread and progression.

In its response to the virus, China, where the first case of coronavirus was reported in late December 2019, started utilizing its sturdy tech sector. The country has specifically deployed AI, data science, and automation technology to track, monitor and defeat the pandemic. Tech players in China such as Alibaba, Baidu and Huawei, among others, have also expedited their healthcare initiatives in their contribution to combating Covid-19.

In an effort to vanquish Covid-19, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), earlier this month, conducted a virtual COVID-19 and AI Conference, to discuss how best to approach the pandemic using technology, AI, and analytics.

Since late 2019, several groups have been monitoring the spread of the virus, as Harvard pediatrics professor John Brownstein noted. It takes "a small army of people," he says, highlighting efforts by universities and other organizations to use data-mining and other tools to track early signs of the outbreak online, such as through China's WeChat app, and to understand the effects of interventions.

In this time of crisis, AI is proving its capabilities by diagnosing risk, answering questions, delivering services and assisting in drug discovery to tackle the outbreak. AI-driven companies like Infervision have brought out an AI solution for coronavirus that helps front-line healthcare workers spot and monitor the disease efficiently. Meanwhile, CoRover, an AI start-up that earlier developed chatbots for a railway ticketing platform, has built a video-bot in collaboration with a doctor from Fortis Healthcare. Using the platform, a doctor can take questions from people about Covid-19.

Moreover, researchers in Australia have created and are testing a Covid-19 vaccine candidate to fight the SARS-CoV-2 coronavirus. Researchers from Flinders University, working with Oracle cloud technology and vaccine technology developed by Vaxine, assessed the Covid-19 virus and used this information to design the vaccine candidate. According to Professor Nikolai Petrovsky of Flinders University, who is also Research Director at Vaxine, the vaccine has progressed into animal testing in the US; only once it is confirmed safe and effective will it advance into human trials.

See the original post:

Combating Covid-19 with the Help of AI, Analytics and Automation - Analytics Insight

On Thinking Machines, Machine Learning, And How AI Took Over Statistics – Forbes

Sixty-five years ago, Arthur Samuel went on TV to show the world how the IBM 701 plays checkers. He was interviewed on a live morning news program, sitting remotely at the 701, with Will Rogers Jr. at the TV studio, together with a checkers expert who played with the computer for about an hour. Three years later, in 1959, Samuel published Some Studies in Machine Learning Using the Game of Checkers, in the IBM Journal of Research and Development, coining the term machine learning. He defined it as the programming of a digital computer to behave in a way which, if done by human beings or animals, would be described as involving the process of learning.

On February 24, 1956, Arthur Samuel's checkers program, which was developed for play on the IBM 701, was demonstrated to the public on television.

A few months after Samuel's TV appearance, ten computer scientists convened at Dartmouth College in New Hampshire for the first-ever workshop on artificial intelligence, defined a year earlier by John McCarthy in the proposal for the workshop as "making a machine behave in ways that would be called intelligent if a human were so behaving."

In some circles of the emerging discipline of computer science, there was no doubt about the human-like nature of the machines they were creating. Already in 1949, computer pioneer Edmund Berkeley wrote in Giant Brains, or Machines That Think: "Recently there have been a good deal of news about strange giant machines that can handle information with vast speed and skill... These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves... A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think."

Maurice Wilkes, a prominent developer of one of those giant brains, retorted in 1953: "Berkeley's definition of what is meant by a thinking machine appears to be so wide as to miss the essential point of interest in the question, Can machines think?" Wilkes attributed this not-very-good human thinking to a desire to believe that a machine can be something more than a machine. In the same issue of the Proceedings of the I.R.E. that included Wilkes's article, Samuel published "Computing Bit by Bit, or Digital Computers Made Easy." Reacting to what he called the "fuzzy sensationalism" of the popular press regarding the ability of existing digital computers to think, he wrote: "The digital computer can and does relieve man of much of the burdensome detail of numerical calculations and of related logical operations, but perhaps it is more a matter of definition than fact as to whether this constitutes thinking."

Samuel's polite but clear position led Marvin Minsky in 1961 to single him out, according to Eric Weiss, as one of the few leaders in the field of artificial intelligence who believed computers could not think and probably never would. Indeed, he pursued his life-long hobby of developing checkers-playing computer programs and his professional interest in machine learning not out of a desire to play God but because of the specific trajectory and coincidences of his career. After working for 18 years at Bell Telephone Laboratories and becoming an internationally recognized authority on microwave tubes, he decided at age 45 to move on, as he was certain, says Weiss in his review of Samuel's life and work, that vacuum tubes soon would be replaced by something else.

The University of Illinois came calling, asking him to revitalize their EE graduate research program. In 1948, the project to build the University's first computer was running out of money. Samuel thought (as he recalled in an unpublished autobiography cited by Weiss) that it ought to be "dead easy" to program a computer to play checkers, and that if their program could beat a checkers world champion, the attention it would generate would also generate the required funds.

The next year, Samuel started his 17-year tenure with IBM, working as a senior engineer on the team developing the IBM 701, IBM's first mass-produced scientific computer. The chief architect of the entire IBM 700 series was Nathaniel Rochester, later one of the participants in the Dartmouth AI workshop. Rochester was trying to decide the word length and order structure of the IBM 701, and Samuel decided to rewrite his checkers-playing program using the order structure that Rochester was proposing. In his autobiography, Samuel recalled: "I was a bit fearful that everyone in IBM would consider a checker-playing program too trivial a matter, so I decided that I would concentrate on the learning aspects of the program. Thus, more or less by accident, I became one of the first people to do any serious programing for the IBM 701 and certainly one of the very first to work in the general field later to become known as artificial intelligence. In fact, I became so intrigued with this general problem of writing a program that would appear to exhibit intelligence that it was to occupy my thoughts almost every free moment during the entire duration of my employment by IBM and indeed for some years beyond."

But in the early days of computing, IBM did not want to fan the popular fears that man was losing out to machines, so the company did not talk about artificial intelligence publicly, observed Samuel later. Salesmen were not supposed to scare customers with speculation about future computer accomplishments. So IBM, among other activities aimed at dispelling the notion that computers were smarter than humans, sponsored the movie Desk Set, featuring a methods engineer (Spencer Tracy) who installs the fictional and ominous-looking electronic brain EMERAC, and a corporate librarian (Katharine Hepburn) telling her anxious colleagues in the research department: "They can't build a machine to do our job; there are too many cross-references in this place." By the end of the movie, she wins both a match with the computer and the engineer's heart.

In his 1959 paper, Samuel described his approach to machine learning as particularly suited for very specific tasks, in distinction to the Neural-Net approach, which he thought could lead to the development of general-purpose learning machines. Samuel's program searched the computer's memory to find examples of checkerboard positions and selected the moves that were previously successful. "The computer plays by looking ahead a few moves and by evaluating the resulting board positions much as a human player might do," wrote Samuel.
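
In modern terms, that look-ahead-and-evaluate scheme is minimax search. The toy Python version below applies the idea to a deliberately simple game (players alternately take one or two stones from a pile; whoever takes the last stone wins); Samuel's checkers player did the analogous thing over board positions, with a depth limit and a learned scoring function in place of this exact win/loss test.

```python
# Minimax look-ahead on a toy take-1-or-2 stones game.
def minimax(pile, maximizing):
    if pile == 0:
        # The previous player took the last stone and won, so this
        # state is a loss for the side now to move.
        return -1 if maximizing else 1
    scores = [minimax(pile - m, not maximizing) for m in (1, 2) if m <= pile]
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    # Choose the move whose resulting position scores best for us.
    return max((m for m in (1, 2) if m <= pile),
               key=lambda m: minimax(pile - m, False))

print(best_move(4))  # -> 1: leaving a pile of 3 loses for the opponent
```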

His approach to machine learning "still would work pretty well as a description of what's known as reinforcement learning, one of the basket of machine-learning techniques that has revitalized the field of artificial intelligence in recent years," wrote Alexis Madrigal in a 2017 survey of checkers-playing computer programs. One of the men who wrote the book Reinforcement Learning, Rich Sutton, called Samuel's research "the earliest work that's now viewed as directly relevant to the current AI enterprise."

The current AI enterprise is skewed more in favor of artificial neural networks (or deep learning) than reinforcement learning, although Google's DeepMind famously combined the two approaches in its Go-playing program, which successfully beat Go master Lee Sedol in a five-game match in 2016.

Already popular among computer scientists in Samuel's time (in 1951, Marvin Minsky and Dean Edmunds built SNARC, the Stochastic Neural Analog Reinforcement Calculator, the first artificial neural network, using 3,000 vacuum tubes to simulate a network of 40 neurons), the neural networks approach was inspired by a 1943 paper by Warren S. McCulloch and Walter Pitts in which they described networks of idealized and simplified artificial neurons and how they might perform simple logical functions, leading to the popular (and very misleading) description of today's artificial neural network-based AI as "mimicking the brain."
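
Those 1943 units really are that simple: a weighted sum passed through a hard threshold. A few lines of Python are enough to show one computing elementary logic gates (the weights and thresholds here are the standard textbook choices, not notation from the paper itself).

```python
# A McCulloch-Pitts unit: fire (output 1) if the weighted sum of the
# inputs reaches a threshold, otherwise stay silent (output 0).
def mp_neuron(inputs, weights, threshold):
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```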

Over the years, the popularity of neural networks has gone up and down a number of hype cycles, starting with the Perceptron, a 2-layer artificial neural network that was considered by the U.S. Navy, according to a 1958 New York Times report, to be "the embryo of an electronic computer that ... will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." In addition to failing to meet these lofty expectations, neural networks suffered from fierce competition from a growing cohort of computer scientists (including Minsky) who preferred the manipulation of symbols rather than computational statistics as the better path to creating a human-like machine.

Inflated expectations meeting the trough of disillusionment, no matter what approach was taken, resulted in at least two periods of gloomy AI Winter. But with the invention and successful application of backpropagation as a way to overcome the limitations of simple neural networks, sophisticated statistical analysis was again on the ascendance, now cleverly labeled as deep learning. In 1988, R. Colin Johnson and Chappell Brown published Cognizers: Neural Networks and Machines That Think, proclaiming that neural networks "can actually learn to recognize objects and understand speech just like the human brain and, best of all, they won't need the rules, programming, or high-priced knowledge-engineering services that conventional artificial intelligence systems require. ... Cognizers could very well revolutionize our society and will inevitably lead to a new understanding of our own cognition."

Johnson and Brown predicted that, as early as within the next two years, neural networks would be the tool of choice for analyzing the contents of a large database. This prediction (and no doubt similar ones in the popular press and professional journals) must have sounded the alarm among those who did this type of analysis for a living in academia and in large corporations, having no clue what the computer scientists were talking about.

In "Neural Networks and Statistical Models," Warren Sarle explained in 1994 to his worried and confused fellow statisticians that the ominous-sounding artificial neural networks "are nothing more than nonlinear regression and discriminant models that can be implemented with standard statistical software." Like many statistical methods, artificial neural networks "are capable of processing vast amounts of data and making predictions that are sometimes surprisingly accurate; this does not make them intelligent in the usual sense of the word. Artificial neural networks learn in much the same way that many statistical algorithms do estimation, but usually much more slowly than statistical algorithms. If artificial neural networks are intelligent, then many statistical methods must also be considered intelligent."
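
Sarle's equivalence can be stated in a line of code: a single sigmoid "neuron" computes exactly the logistic-regression model statisticians had used for decades; only the vocabulary differs. A minimal NumPy illustration, with weights chosen arbitrarily:

```python
import numpy as np

def neuron(x, w, b):
    # sigmoid(w . x + b): a one-unit "neural network" ...
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# ... which is precisely the statistician's logistic regression,
# P(y=1 | x) = logistic(b0 + b1*x1 + ... + bk*xk).
x = np.array([2.0, -1.0])
print(neuron(x, w=np.array([0.5, 1.2]), b=-0.1))
```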

Sarle provided his colleagues with a handy dictionary translating the terms used by neural engineers into the language of statisticians (e.g., "features" are "variables"). In anticipation of today's data science (a more recent assault led by computer programmers) and predictions of algorithms replacing statisticians (and even scientists), Sarle reassured his fellow statisticians that no black box can substitute for human intelligence: "Neural engineers want their networks to be black boxes requiring no human intervention: data in, predictions out. The marketing hype claims that neural networks can be used with no experience and automatically learn whatever is required; this, of course, is nonsense. Doing a simple linear regression requires a nontrivial amount of statistical expertise."

In a footnote to his mention of neural networks in his 1959 paper, Samuel cited Warren S. McCulloch, who had compared the digital computer to the nervous system of a flatworm, and declared: "To extend this comparison to the situation under discussion would be unfair to the worm since its nervous system is actually quite highly organized as compared to [the most advanced artificial neural networks of the day]." In 2019, Facebook's top AI researcher and Turing Award winner Yann LeCun declared that "our best AI systems have less common sense than a house cat." In the sixty years since Samuel first published his seminal machine learning work, artificial intelligence has advanced from being not as smart as a flatworm to having less common sense than a house cat.

See the original post here:

On Thinking Machines, Machine Learning, And How AI Took Over Statistics - Forbes

AI in the field of transportation- a review – AI Daily

The applications for AI in urban mobility are extensive. The opportunity is thanks to a mixture of factors: urbanization, a focus on environmental sustainability, and growing motorization in developing countries, which results in congestion. The rising predominance of the sharing economy is another contributor. Ride-hailing or ride-sharing services enable drivers to reach riders through a digital platform that also facilitates mobile money payments. Some examples in developing countries include Swvl, an Egyptian start-up that enables riders heading in the same direction to share fixed-route bus trips, and Didi, the Chinese ride-hailing service. These services can help optimize the utilization of assets where they are limited in emerging markets, and raise the quality of available transportation services.

By 2020, it is estimated that there will be 10 million self-driving vehicles and more than 250 million smart cars on the road. Tesla, BMW, and Mercedes have already launched their autonomous cars, and they have proven to be very successful. We can gain tremendous productivity improvements in several industrial areas. As the transport industry becomes more data-driven, the talent profile will also shift, as new skills will be needed in the workforce to keep up with ongoing changes. AI is already helping to make transport safer, more reliable and efficient, and cleaner. Some applications include drones for quick, life-saving medical deliveries in Sub-Saharan Africa, smart traffic systems that reduce congestion and emissions in India, and driverless vehicles that shuttle cargo between those who make it and those who buy it in China. With great potential to increase efficiency and sustainability, among other benefits, come many socio-economic, institutional, and political challenges that must be addressed to ensure that countries and their citizens can all harness the power of AI for economic growth and shared prosperity.

Reference: "How Artificial Intelligence is Making Transport Safer, Cleaner, More Reliable and Efficient in Emerging Markets," by Maria Lopez Conde and Ian Twinn, International Finance Corporation (IFC), World Bank Group.

Thumbnail credit: Forbes.com

See the rest here:

AI in the field of transportation- a review - AI Daily

How AI Can Live Up To Its Hype In The Healthcare Industry – Forbes


"What's the problem you're trying to solve?" Clayton Christensen, the late Harvard business professor, was famous for posing this aphoristic question to aspiring entrepreneurs.

By asking it, he was teaching those in earshot an important lesson: Innovation alone isn't the end goal. To succeed, ideas and products must address fundamental human problems.

This is especially true in healthcare, where artificial intelligence is fueling the hopes of an industry desperate for better solutions.

But here's the problem: Tech companies too often set out to create AI innovations they can sell, rather than trying to understand the problems doctors and patients need solved. At many traditional med-tech conferences and trade shows, for example, talks and sales pitches focus squarely on the technology while routinely overlooking the human fears and frustrations that AI can address.

Because of this failure to prioritize human needs above business interests, medicine's most-hyped AI applications have repeatedly failed to move the needle on public health, patient safety or healthcare costs.

Fortunately, humanistic problem-solving will take center stage at the upcoming South by Southwest (SXSW) conference and festival in Austin, Texas, from March 13-22. At this alternative cultural event, where hip musical acts overlap with indie film premieres, some 70,000+ conference attendees can find dozens of AI panels and presentations designed to put people first.

"Addressing the challenges and opportunities around how technology affects our community is hugely important," said Hugh Forrest, Chief Programming Officer at SXSW. "From privacy to blockchain to AI to MedTech, using this lens to filter how we look at a lot of issues facing modern society allows us to connect the dots in a deeper way. Especially in the case of an area like AI, where there's quite a bit of uncertainty and fear, we also want to showcase how these innovations can be ethical and improve lives."

Here's a small sample of the human-focused AI presentations coming to SXSW.

On Making Med-Tech More Humane

In response to our nation's mental health crisis, the SXSW presentation titled "Can Language Technology Rescue Mental Healthcare?" will bring together a technologist and a psychologist to spotlight possible solutions. The duo will talk about tech that predicts suicide attempts 10 times more accurately than a doctor's evaluation, and algorithms that raise a red flag for the onset of psychosis.

The panel "Humanitarian AI: Disasters, Displacement & Disease" will focus on the untamed global threat of natural disasters. From wars to disease outbreaks to flooding, humanity is still struggling to contain these millennia-old problems. Can technology, and AI in particular, help humanitarian agencies get ahead of the next disaster and help first responders save more lives?

On Med-Tech Making Our Lives Easier, Healthier, Better

Turning to the role of technology in our daily lives, the presentation "How Tech is Transforming Healthcare, in Your Home" examines the convergence of connected health and smart-home products. According to the speakers, technologies like Alexa could soon help enhance independent living, improve health outcomes and reduce medical costs for families.

Similarly, in hospitals, AI and robotic technologies can unburden nurses of menial duties (like making repeat trips to the supply room), freeing up time to address more patient-facing problems. "Robots and Automation: Happier Healthcare Workplaces" will focus on opportunities to improve the workflow of our nation's overworked nurses.

On Making Sure Med-Tech Is Ethical, Secure

Several presentations at SXSW will address humanity's growing concern over AI doing more harm than good. In healthcare settings, for example, patients and doctors are expressing valid fears that dirty data will result in unintended medical errors and accidents. "The AI Did What?! When AI Isn't Very Smart" aims to help designers avoid such failures.

In healthcare, AI adoption has slowed in recent years due, in part, to apparent bias in data and algorithms, leading to inequitable care for minority populations. Looking beyond the walls of American medicine, the talk "Hidden Figures: Exposing Bias in AI" will focus on the impact, detection and mitigation of biased data in government, society and our daily lives.

The European Union has moved ahead of the United States in regulating technology to ensure greater privacy and equity. A high-profile panel of speakers at SXSW will discuss "Shared Values for Ethical AI." This talk follows last week's much-anticipated announcement about regulating artificial intelligence in the EU. The new proposal establishes technical and ethical standards that would influence the development and use of AI in healthcare and other industries.

In "Next Gen AI: The Human Centered Design Challenge," leaders from Google, Microsoft, McKinsey and Ideo will examine how AI can earn the public's trust by learning to be smart, fair and transparent.

Finally, speakers from Carnegie Mellon and Deloitte will present "The Accidental Ethicist: Making AI Human-Centered," looking at the same question of ethics, not through the lens of public policy but through the eyes of those who create and code AI applications. Together, they'll show designers how human-centered approaches can build their ethical toolkit.

On Making Med-Tech More Creative

At its core, the art of medicine is a creative venture wherein humans aspire to help other humans. But all too often medical technologies make the overall healthcare experience feel mechanistic and impersonal. Some of the most interesting talks at SXSW will focus on AI in the arts. These sessions may offer valuable insights into resolving the dichotomy between the art and science of medicine.

Attendees can check out "3 Ways AI is Transforming the Music Industry" for insights into how big data analysis, paired with human abstract reasoning, will change the future of music. Elsewhere, "AI and Creativity: In Search of Genius" examines how recent advances in AI have put within reach a world where art can be created and performed entirely by algorithms. Similarly, leaders from the Metropolitan Museum of Art will take on "Art, AI, and Big Data," explaining how they made the institution's art collection more accessible, discoverable and useful.

On Making Med-Tech A Viable Healthcare Solution

On March 14, I'll contribute to the AI discussion at SXSW with a talk titled "MedTech: Separating Reality From Hype." The goal is to help people understand why artificial intelligence has made precious few contributions to medical practice so far.

One explanation can be found in the two symbols currently associated with the medical profession.

If you're not familiar, the first symbol, called the Caduceus, features two snakes coiled around a short winged staff. It's an ancient emblem that dates back to 1400 B.C.


The other is the Rod of Asclepius, a wingless staff wrapped by a single snake.


So, first logical question: Why snakes? Because they are reptiles that shed their skin annually, reminding us that it's possible to regenerate and start anew, a laudatory medical goal if there ever was one. And why a staff? Fair warning: the answer is a bit more stomach-churning. It involves an ancient medical treatment for patients infected by the Guinea worm (Dracunculus medinensis). The parasite enters the human body through the consumption of contaminated water but doesn't surface until about a year later. That's when the snake-like creature protrudes through the skin, creating a large blister that causes intense pain. Healers of the past would rupture the blister, wind the worm's head around a stick (the noble staff) and slowly pull the animal out.

Although these symbols are nearly identical, they have very different origins and meanings. The Caduceus is associated with Hermes and is recognized as the symbol of trade and commerce. By contrast, Asclepius was the Greek God of healing.

These two symbols represent a major clash in medicine today. Healthcare is, at once, a healing profession and a highly lucrative trade. Medical technologies, including AI, are caught in the middle. Those who focus on creating business solutions often fail to address the most urgent problems that doctors and patients experience.

Hopefully, the creative and immersive environment of #SXSW2020 will inspire technology companies to put the needs of people at the center of future healthcare solutions.

Continue reading here:

How AI Can Live Up To Its Hype In The Healthcare Industry - Forbes