
Category Archives: Ai

Stephen Hawking calls for creation of world government to meet AI challenges – ExtremeTech

Posted: March 19, 2017 at 4:28 pm

In a book that's become the darling of many a Silicon Valley billionaire, Sapiens: A Brief History of Humankind, the historian Yuval Harari paints a picture of humanity's inexorable march towards ever greater forms of collectivization. From the tribal clans of pre-history, people gathered to create city-states, then nations, and finally empires. While certain recent political trends, namely Brexit and the nativism of Donald Trump, would seem to cut against that trajectory, another luminary of academia has now added his voice to the chorus calling for stronger forms of world government. Far from citing ancient historical trends, though, Stephen Hawking points to artificial intelligence as a defining reason for needing stronger forms of globally enforced cooperation.

It's facile to dismiss Stephen Hawking as another scientist poking his nose into problems more germane to politics than physics, or even to suggest he is being alarmist, as many AI experts have already done. It's worth taking his point seriously, though, and weighing the evidence to see if there's any merit to the cautionary note he sounds.

Let's first take the case made by the naysayers who claim we are a long time away from AI posing any real threat to humanity. These are often the same people who suggest Isaac Asimov's three laws of robotics are sufficient to ensure ethical behavior from machines, never mind that the whole thrust of Asimov's stories is to demonstrate how things can go terribly wrong despite the three laws. Leaving that aside, it's exceedingly difficult to keep up with the breakneck pace of research in AI and robotics. One may be an expert in a small domain of AI or robotics, say pneumatic actuators, and have no clue what is going on in reinforcement learning. This tends to be the rule rather than the exception among experts, since their very expertise tends to confine them to a narrow field of endeavor.

As a tech journalist covering AI and robotics on a more or less full-time basis, I can cite many recent developments that justify Mr. Hawking's concern, namely the advent of autonomous weapons, DARPA-sponsored hacking algorithms, and a poker-playing AI that amounts to a strategic superpower, to highlight just a few. Adding to this, it's increasingly clear there's already something of an AI arms race underway, with China and the United States pouring increasingly large sums into supercomputers that can support the ever-hungry algorithms underpinning today's cutting-edge AI.

And this is just the tip of the iceberg, thanks to the larger and more nebulous threat posed by superintelligence, that is, an algorithm or collection of them that achieves a singleton in any of the three domains of intelligence outlined by Nick Bostrom in Superintelligence: Paths, Dangers, Strategies, those being speed, quality/strategic planning, and collective intelligence.

The dangers posed to humanity by AI, being somewhat more difficult to conceptualize than atomic weapons since they don't involve dramatic mushroom clouds or panicked basement drills, are all the more pernicious. Even the so-called utopian scenario, in which AI merely replaces large segments of the workforce, would bring with it a concomitant set of challenges that could best be met by stronger and more global government entities. In this light, it seems that, if anything, Dr. Hawking has understated the case for taking action at a global level to ensure the transition into an AI-first world is a smooth rather than apocalyptic one.

Read more:

Stephen Hawking calls for creation of world government to meet AI challenges - ExtremeTech

Posted in Ai | Comments Off on Stephen Hawking calls for creation of world government to meet AI challenges – ExtremeTech

AI Can Likely Already Do Your Job Better Than You Can – Futurism

Posted: at 4:28 pm

Automation on the Up

Andrew Ng (founding lead of the Google Brain team, former director of the Stanford Artificial Intelligence Laboratory, and now overall lead of Baidu's AI team) points out in an article at the Harvard Business Review that if executives had a better understanding of what machine learning is already capable of, millions of people would be out of a job today. "Many executives ask me what artificial intelligence can do. They want to know how it will disrupt their industry and how they can use it to reinvent their own companies... The biggest harm that AI is likely to do to individuals in the short term is job displacement, as the amount of work we can automate with AI is vastly bigger than before."

How much bigger? A report from McKinsey says that 51% of economic activity could be automated by existing technology.

This is not a new phenomenon; new technologies have always displaced the need for human labor. In the past the change was gradual, and people had time to learn the new skills that the economy needed. But the pace at which it is happening this time may be too rapid for people to adapt.

This is not some issue that you will have to figure out how to deal with in the distant future; it is already happening, relatively quietly, all around us. Somewhere, in a giant tech company or tiny startup, there is someone trying to figure out how to get a computer to do your job better than you ever could.

Below is a handy graph from McKinsey of a variety of skills that can be replaced by AI and the industries that will be most affected. For a more detailed visualization click here and here.

Its impact is already being felt, from manufacturing jobs in China to insurance claim workers in Japan to top hedge fund managers in America. And this is just the beginning: as AI develops and its array of skills grows, more and more people whose jobs revolve around those skills will be replaced.

Of course, just because the technical ability is there doesn't mean it can be implemented right away. Still, this is an issue that should be getting a lot more attention than it does, because it will impact you.

This excellent talk by Robert Reich, Secretary of Labor under Bill Clinton, was delivered at Google back in February. He highlights what will be a pressing need of our times: for people to be able to find fulfillment outside of their jobs.

Link:

AI Can Likely Already Do Your Job Better Than You Can - Futurism

Posted in Ai | Comments Off on AI Can Likely Already Do Your Job Better Than You Can – Futurism

Hell freezes over: We wrote an El Reg chatbot using Microsoft’s AI – The Register

Posted: March 17, 2017 at 7:20 am

Hands on: Microsoft has invested big in its Cognitive Services for programmable artificial intelligence, along with a Bot Framework for using them via a conversational user interface. How easy is it to get started?

Cognitive Services, the AI piece, was announced at the company's Build developer conference in April 2015. The initial release had just four services: face recognition, speech recognition, visual content recognition, and language understanding. That has now been extended to over 20 APIs.

Note that Cognitive Services, which are pre-baked specialist APIs, are distinct from Azure Machine Learning, which lets you do generalized predictive analytics based on your own data.

A year or so later, at the March 2016 Build event, Microsoft announced the Bot Framework, for building a conversational user interface (still in preview). This links naturally to Cognitive Services, since a bot needs some sort of language parsing service. Both services are included (along with a bunch of other stuff) in Microsoft's overall Machine Learning and AI offering, called Cortana Intelligence Suite.

At the recent QCon software development conference in London, Microsoft's exhibition stand was focused entirely on Cognitive Services, and it gave a couple of presentations (albeit in the sponsored track) on the subject, though not without glitches. "I don't know why it hasn't picked up Seattle as a place," said the presenter. Note that both the Bot Framework and the Language Understanding Intelligent Service (LUIS) are still in preview.

The main use cases for bots are for sales and customer service. Actions like booking travel or appointments, searching for hotels, and reporting faults are suitable. In most cases interacting with a human is preferable, but also more expensive. Another argument is that the popularity of messaging services means that it pays to have an integrated presence there.

How hard is it to build a bot on Microsoft's platform? I sat down to build a Reg bot. In this case the main service is to offer content, so to keep things simple I decided the bot should simply search the site for material in response to a query.

There are several moving parts:

LUIS: Your bot has to send text to LUIS for interpretation, which means you have to create and publish a LUIS app.

Bot Framework: Microsofts cloud service provides the channels your bot uses to communicate. There are currently 11 channels, including Skype, Facebook Messenger, web page widgets, Direct Line (a REST API direct to your bot), Slack, Microsoft Teams, and SMS via Twilio.

Bing Search API: The bot has to know how to search the Register site; using a search API is the quickest way.

Hosting: A bot is itself a web service, and you have to host your bot somewhere. The tools lead you towards Microsoft Azure, but anywhere that can host an ASP.NET Web API application should do. Your LUIS app also has to be hosted, only on Azure. RegBot uses a free Cognitive Services account and the lowest paid-for web app hosting service.
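
Wherever it ends up hosted, the bot itself boils down to a single ASP.NET Web API controller that the Bot Framework service posts messages to. The sketch below shows roughly what the Visual Studio Bot Application template generates (Bot Builder SDK v3 era); the RegBotDialog name is my own placeholder for the root dialog discussed below, not the actual RegBot source.

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

[BotAuthentication]
public class MessagesController : ApiController
{
    // Every message from every channel arrives here as an Activity posted by the Bot Framework service.
    public async Task<HttpResponseMessage> Post([FromBody] Activity activity)
    {
        if (activity.Type == ActivityTypes.Message)
        {
            // Hand the conversation to the bot's root dialog (RegBotDialog is a placeholder name).
            await Conversation.SendAsync(activity, () => new RegBotDialog());
        }
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}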

It makes sense to start with LUIS rather than running up Visual Studio immediately. LUIS is a service that accepts a string of text and parses it into an Intent, along with one or more Entities. You can think of an Intent as a verb and an Entity as a noun. RegBot currently has two Intents, DoSearch and Help, and one Entity, TechSubject.

You set up your LUIS app by typing example text strings that match your Intent and tagging them with their Entities. So "Tell me about malware" becomes Intent: DoSearch, TechSubject: Malware. You can test your LUIS app on the page.
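
On the code side, the Bot Builder SDK mirrors this structure: you derive a dialog from LuisDialog and mark one handler method per Intent, and LUIS hands back any tagged Entities in its result. Here is a minimal sketch along those lines, assuming the v3 LUIS dialog classes; the app ID, key, and reply wording are placeholders rather than the real RegBot code.

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

// The IDs identify the published LUIS app; the values here are placeholders.
[LuisModel("YOUR-LUIS-APP-ID", "YOUR-LUIS-SUBSCRIPTION-KEY")]
[Serializable]
public class RegBotDialog : LuisDialog<object>
{
    [LuisIntent("Help")]
    public async Task Help(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("Ask me about a tech subject, e.g. 'Tell me about malware'.");
        context.Wait(MessageReceived);
    }

    [LuisIntent("DoSearch")]
    public async Task DoSearch(IDialogContext context, LuisResult result)
    {
        // Pull out the TechSubject entity that LUIS tagged in the utterance, if any.
        EntityRecommendation subject;
        if (result.TryFindEntity("TechSubject", out subject))
        {
            await context.PostAsync($"Searching The Register for {subject.Entity}...");
        }
        else
        {
            await context.PostAsync("Sorry, I didn't catch a subject to search for.");
        }
        context.Wait(MessageReceived);
    }
}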

Training the RegBot language understanding service

Once you have done the LUIS bit you can get going with the code. I found and installed a Visual Studio Bot Application template, started a new project, restored NuGet packages (which download libraries from Microsoft's repository), and got an error: "The name GlobalConfiguration does not exist". A quick search told me to add the WebHost package. That is how development is today; you mix up various pre-built pieces and hope they get along.

Unfortunately, the Bot Application template is not pre-configured for LUIS. My approach was to find another bit of sample code which includes LUIS support and borrow a few pieces from it. I also downloaded the Bot Framework Emulator, which lets you test your bot locally.

I messed around with various App IDs and secret keys to hook up my bot to the LUIS app. A key feature of the Bot Framework is that it keeps track of your conversation by means of a context object, so that your app is able to interact with the user. RegBot does not need much interaction, but to test this I wrote code that asks the user how many results they want to see. It does this with one line of code:

PromptDialog.Number(context, AfterDoSearch, "How many results would you like?");

AfterDoSearch: This is the name of the method which gets called when the user responds. Each type of interaction therefore needs a separate method. This wrapping means state management is taken care of for you, a substantial benefit.
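
As a rough illustration of what such a resume handler looks like inside the dialog class, here is a minimal sketch assuming the Bot Builder v3 signatures: PromptDialog.Number delivers the user's answer as an IAwaitable<long>, while pendingQuery and SearchTheRegister are hypothetical stand-ins for wherever the bot stored the subject and however it calls the search API.

// Called by the framework when the user answers the PromptDialog.Number question above.
private async Task AfterDoSearch(IDialogContext context, IAwaitable<long> result)
{
    long count = await result;  // the number the user typed
    // pendingQuery and SearchTheRegister are placeholders for the stored subject and the search call.
    string[] articles = await SearchTheRegister(this.pendingQuery, (int)count);
    foreach (string article in articles)
    {
        await context.PostAsync(article);
    }
    context.Wait(MessageReceived);  // hand control back and wait for the next message
}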

Getting the Bing Search API working took more time than expected. It turns out that a News Search works better than a Web Search, since it has a useful Description field. I also spent time working out how the JSON response was structured; either I missed it, or Microsoft could do with some more basic samples.
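
For reference, a Bing News Search call at the time looked roughly like the sketch below, with results arriving in a JSON "value" array whose items carry name, url, and description fields. The v5 endpoint, the site: query trick, and the helper name are my assumptions, so check the current Cognitive Services documentation rather than treating this as the definitive API.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static class RegisterSearch
{
    // Bing News Search v5 endpoint as documented at the time; it may have changed since.
    private const string Endpoint = "https://api.cognitive.microsoft.com/bing/v5.0/news/search";

    public static async Task<string[]> SearchTheRegister(string subject, int count, string apiKey)
    {
        using (var client = new HttpClient())
        {
            // Cognitive Services APIs authenticate with this subscription-key header.
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);

            // Restrict results to The Register via the site: operator in the query string.
            string url = Endpoint + "?q=" + Uri.EscapeDataString("site:theregister.co.uk " + subject)
                         + "&count=" + count;
            JObject json = JObject.Parse(await client.GetStringAsync(url));

            var results = new List<string>();
            foreach (JToken item in json["value"])
            {
                // News results include a description field, which plain web search results lack.
                results.Add(item["name"] + ": " + item["description"] + " " + item["url"]);
            }
            return results.ToArray();
        }
    }
}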

The Bot Framework emulator talking to RegBot

I got it working, connected RegBot to Skype, and successfully tested my bot. Publishing it to the wide world involves a few more steps, so not yet. A few thoughts, though.

Talking to RegBot via Skype

This can work well for simple, well-defined use cases. LUIS is a bit of a black box, but has the advantage that you can see when it goes wrong and try to fix it. Once your app is up and running, it is easy to modify. The ability to code your own bot with a few hours of work is impressive.

That said, none of it is very sophisticated. Throw anything more than a short, simple sentence at LUIS, and it will quickly get confused or give up.

It would not be difficult to add speech recognition and text-to-speech via yet more Cognitive Services, though in the case of RegBot it's not much use unless it also reads web content back to you.

I came into this project as a bot sceptic. It is clever, but not clever enough to be useful other than in a few niche cases, or to hand over to a human after collecting some basic information.

You can imagine eyes lighting up at the thought of replacing call center staff with a few lines of code. Now it is simple to run up a prototype showing why, most of the time, this is probably not a good idea.

More positively, the bot concept is a newish way to interact with users: one that is amenable to voice and therefore handy for in-car use or other scenarios where typing is difficult. It does not feel ready yet, particularly in the case of the difficult AI piece, but give it time.

Read more:

Hell freezes over: We wrote an El Reg chatbot using Microsoft's AI - The Register

Posted in Ai | Comments Off on Hell freezes over: We wrote an El Reg chatbot using Microsoft’s AI – The Register

Advances in AI and ML are reshaping healthcare – TechCrunch

Posted: at 7:20 am


The most significant application of AI and ML in genetics is understanding how DNA impacts life. Although the last several years saw the complete sequencing of the human genome and a mastery of the ability to read and edit it, we still don't know what ...

View post:

Advances in AI and ML are reshaping healthcare - TechCrunch

Posted in Ai | Comments Off on Advances in AI and ML are reshaping healthcare – TechCrunch

All the ways AI will slash Wall Street jobs – American Banker

Posted: at 7:20 am

Anyone who's visited the New York Stock Exchange lately knows technology has already taken a toll on Wall Street jobs.

And the decimation is only going to continue as the artificial intelligence industry booms.

By 2025, AI technologies will reduce the number of employees in the capital markets worldwide by 230,000, according to a report from Opimas that came out last week. Financial institutions may see a 28% improvement in their cost-to-income ratios.

Additionally, financial firms will spend more than $1.5 billion this year on AI-related technologies and $2.8 billion annually by 2021, not including their investments in AI startups, the Opimas report estimated.

It's clear that AI will change Wall Street, but it is probably too simple to merely attribute the change to the promises of AI. Instead, it is several technologies that fall under the broader umbrella of artificial intelligence that are attacking the industry from all sides.

Process-oriented jobs are being killed by robotic process automation, a lower-IQ form of AI in which small pieces of software are programmed to do simple tasks, like looking up a document or a piece of information.

More analytical jobs are being replaced with things like machine learning, deep learning and the like that can digest large volumes of real-time data quickly and learn to find telling patterns with a speed the human brain can't match.

This has implications for jobs all over capital markets, from the front office to risk, fraud and even HR. And it will create some new jobs, though not entry-level ones.

"AI is going to touch every aspect of jobs in the capital markets," said Ed Donner, co-founder and CEO of untapt, a startup that presented at the last Accenture fintech demo day and that uses deep learning to match tech job candidates to the right positions. "It's not just support functions, it's not just operations; it's also the front office. And retail banking and private wealth management are affected as well."

Some of this is buzz, for sure, but AI's boom is due to the convergence of several trends: costs associated with the advanced computing and data-storage hardware behind AI have come down. It's now possible to feed AI engines the massive amounts of data they need to learn to do the job of people. And major vendors have all come out with products that can work with existing technology, rather than requiring systems to be ripped out.

Quieter front office

Front-office sales and trading jobs have already dropped, partly because so many firms use algorithmic trading. "There's been a 20% to 30% headcount reduction overall in the front office over the past few years," said David Weiss, senior analyst at Aite Group.

"Trade floors are a shadow of their former selves," and "market making and high-touch trading now employ fewer people," Weiss said.

This trend will accelerate as AI advances.

"Natural language processing is helping to gain a greater understanding of conversations people are having, the intent of conversations in the front office," said Terry Roche, head of fintech research at the TABB Group. "And mining data to gain greater insight to what's happening so it can give sales traders alerts about investment opportunities for their clients."

Analysts and researchers in the front office are being replaced by AI that can monitor vast arrays of data sources in real time, detect signals and put together analyst reports, Donner noted. The startup Kensho, in which Goldman Sachs has invested, is one example. A smaller startup called Agolo is another.

Emptying middle and back offices

Many jobs have been lost in middle and back offices, because those areas tend to be loaded with processes that are connected by human manual intervention, said Axel Pierron, managing director at Opimas.

"What AI brings there is the ability to do handwriting recognition and image recognition, as well as robotic process automation," he said. "We're already seeing the huge efficiency gains that AI can provide to the industry."

AI is also making a dent in compliance staff.

"Compliance went on a huge hiring spree over the past several years in response to global regulatory mandates and regional enforcement actions," Weiss said. "Artificial intelligence tech is being deployed to get a better holistic view as well as finally reduce false-positives in ways that more bodies failed to. So that is the next area for attrition and reduced staffing needs."

IBM has fed the contents of the Dodd-Frank Act into its "Jeopardy!"-winning AI machine, Watson. It bought regulatory compliance firm Promontory and wants Watson to imbibe the firm's experts' knowledge. The startup Quarule is also working on AI for financial compliance.

Asset management firms first

Employees at asset management firms are especially vulnerable, Pierron said.

"You have a combined effect of the trend toward exchange-traded funds and self-investment, so it becomes harder to justify the management fee. There is already that tendency toward automating and robo-trading," he said. "AI will complement that."

The success of robo-advisers like Wealthfront and Betterment, of course, has been one driver of this change.

But the change is also coming from within, Pierron said.

"If you look at hedge fund ROI over the past 10 years, most of the industry has been below the S&P 500, which means there is already a high level of questioning around the added value provided by a human doing a trade themselves, using their gut feeling," he said. "Here we already have the right mindset to implement AI and it's already being implemented."

The hedge-fund manager Steve Cohen, whose SAC Capital pleaded guilty to insider trading in December, is said to be hoping to replace people with AI machines at his new firm, Point72 Asset Management.

According to Bloomberg, the firm, which manages Cohen's personal fortune of $11 billion, is parsing data from its portfolio managers and testing models that mimic their trades.

AI is also being used to watch human traders for signs of rogue behavior. Nasdaq has been doing this for some time on its exchanges. Firms are starting to deploy the technology to monitor market data, trader activity and trader communications simultaneously.

"It's collecting data about the traders within your organization to create a profile of them, and if the trader breaks the profile of typical activity, that's identified and flagged for greater analysis," Roche said.

Where will the displaced find new jobs?

Where will all the laid-off (or unhired) Wall Streeters go?

"That's a really good question," Roche said. "My son is going to be 18 in a month and he's going off to college in autumn. I'm really thankful he's going to study for a double major in IT management and supply-chain management."

The shift away from entry-level jobs is happening everywhere, including in malls and fast-food restaurants, Roche noted.

"CaliBurger just deployed Flippy the robot, which cooks burgers," he said. "Where are those jobs going to go? It's a much broader societal question."

Pierron pointed out that there will be a transition period; these jobs will not evaporate in one day.

"That's a much broader discussion around the impact of AI in the industry, not just in capital markets but in our economy. We will have to think about what will be that next job creation," he said. "With any natural evolution, you have the competencies that are becoming less relevant and you have core competencies and complementing competencies around AI."

Where new jobs will be generated

Vendors that offer robotic process automation and machine and deep learning software will be able to add AI-related jobs, as will the value-added resellers and consulting firms that implement and maintain the technology.

Within Wall Street firms, three categories of new skill sets will become more important, according to Donner: software engineering, data science, and a hybrid of business and digital skills.

"This last category is someone who's reporting into the business and knows about the business but also wears a digital hat and knows what it takes to make their business increasingly digital," Donner said. "That kind of skill set is going to be more and more predominant in the coming year."

He also sees jobs moving out of primary financial hubs like New York and into financial tech centers near tech schools.

"Goldman has a big tech center in Salt Lake City, JPMorgan has one in Houston and other places, Deutsche is in North Carolina," he noted. "This shift is happening now and it will continue to happen over the next 10 years."

From the Hathaway Effect to a flash crash

One danger of AI's takeover of Wall Street is the chance that machine learning engines could misinterpret signals and make disastrous trades off them.

Such lapses already occur. There's the so-called Hathaway Effect, in which the price of Berkshire Hathaway stock tends to bump up by 2% or so around the same time that Anne Hathaway is in the news, for instance when it was announced she was going to co-host the Academy Awards.

"It's believed to be because of the number of AI programs finding signals, seeing the word Hathaway and making incorrect deductions and trading decisions," Donner said. "Hopefully they'll get smarter and the Hathaway Effect will go away."

Worries about what AI will do under different conditions have held some firms back from using it, Roche noted. So validating the machine thinking by running test and validation scenarios and putting restrictions and stops in place will be critical.

A broader danger is that misinterpreted signals could bring a whole market down. Firms' reliance on artificial intelligence systems without human oversight could lead to multiple simultaneous identical reactions to adverse geopolitical or market conditions and greatly amplify them; think deeper, tech-induced flash crashes.

The Street will have to give careful thought to where it keeps some humans to watch the robots.

Editor at Large Penny Crosman welcomes feedback at penny.crosman@sourcemedia.com.

Penny Crosman is Editor at Large at American Banker.

Read more:

All the ways AI will slash Wall Street jobs - American Banker

Posted in Ai | Comments Off on All the ways AI will slash Wall Street jobs – American Banker

Two out of three consumers don’t realize they’re using AI – ZDNet

Posted: at 7:20 am


Businesses are scrambling to use artificial intelligence and trying to work out how best to put digital assistants to work. Consumers are wondering what impact these new technologies will have on their everyday lives.

AI spans the breadth of our activities. From business to leisure, we interact with AI when we use an app to turn on a light bulb, talk to Google Home or Amazon Alexa, or write better emails.

Google, IBM, Yahoo, Apple, Salesforce, and Intel have been acquiring start-ups. Even Twitter, eBay and Microsoft are racing to acquire the top players in AI.

Inbound marketing company HubSpot has been looking at consumer sentiment towards AI and chatbots. It has released its Global AI survey for Q4 2016.

In its survey of more than 1,400 people from Ireland, Germany, Mexico, Colombia, UK, and the US, HubSpot found that many respondents did not know that they were already using AI.

37 percent of respondents said they have used an AI tool. However, of the respondents who said they have not used AI, 63 percent were actually using it. They just were not aware that they were.

74 percent of respondents have used voice search in the past month. They have noticed that responses from Siri, Cortana, Home, and Alexa have improved. Daily use of voice search is up by 27 percent compared to last year.

People are very comfortable asking questions out loud to voice assistants. 84 percent of respondents said they were comfortable using it at home.

However, only 17 percent were comfortable using it in public. High-earning men are more likely to use voice commands in public.

47 percent of respondents said that they are open to buying items through a chatbot. Chatbots sit natively on messaging applications, such as Slack, WhatsApp, Line, and Facebook Messenger.

Chatbots can perform a variety of shopping-related tasks, such as suggesting the top-rated items on the site. As machine learning programs get smarter, e-commerce bots will probably deal with more complicated questions from potential buyers.

As long as they can get help quickly and easily, 40 percent of respondents do not care whether a chatbot or a person answers their customer service questions. 57 percent of respondents still prefer to get help from a person.

AI is already impacting many parts of our lives, and although you may wonder how we will cope with the AI chatbot takeover, it does promise to streamline many parts of our online personal and business lives. We just have to get used to it.

See the rest here:

Two out of three consumers don't realize they're using AI - ZDNet

Posted in Ai | Comments Off on Two out of three consumers don’t realize they’re using AI – ZDNet

The Mobile Internet Is Over. Baidu Goes All In on AI – Bloomberg

Posted: at 7:20 am

On Dec. 6, 2016, thousands of translators filed into office buildings across mainland China to pore over brochures, letters, and technical manuals, all in foreign languages, painstakingly rendering their texts in Chinese characters. This marathon carried on for 15 hours a day for an entire month. Clients that supplied the material received professional-grade Chinese versions of the originals at a bargain price. But Baidu Inc., the Beijing-based company that organized the mass translation, got something potentially more valuable: millions of English-Mandarin word pairs with which to train its online translation engine.

China is infamous for its knockoffs, whether luxury handbags or web startups. But the country's leadership seems to understand that when it comes to artificial intelligence, cheap imitations just won't do, not when its rivals include Alphabet, Facebook, IBM, and Microsoft. In February the National Development and Reform Commission appointed Baidu, often described as the Google of China, to lead a new AI lab, signaling that Beijing believes the company has the makings of a national champion in this sphere.

Of the more than 20 billion yuan ($2.9 billion) Baidu has spent on research and development over the past two and a half years, most has been on AI, according to comments co-founder and Chief Executive Officer Robin Li made at the lab's launch last month. But China's national interest isn't his main motivation: Baidu's revenue growth fell to about 6 percent last year, from an average of more than 30 percent over the prior three years. The search ad business, which contributed the lion's share of its 70.5 billion yuan in sales in the fiscal year ended on Dec. 31, is under siege from local rivals. A September report from EMarketer Inc. noted that Alibaba Group Holding Ltd. had overtaken Baidu to become the leader in China's digital ad market. Baidu hopes AI can help it reclaim share in search, as well as ensure success in newer ventures.

Baidu showed off Little Fish, a voice-activated robot, at CES.

Source: Baidu

That's key, as the 17-year-old company's attempts at diversification have produced mixed results. The number of daily visitors to its group buying site, Nuomi, dropped 59 percent in the 12 months through February 2017; its Waimai food delivery service lags in third place, according to Natalie Wu, an analyst with China International Capital Corp. The Netflix-like streaming video service iQiyi.com is hugely popular, but it will take 12 billion yuan to keep it stocked with content this year, estimates analyst Ella Ji with China Renaissance Securities (Hong Kong) Ltd.

Those faltering efforts mean Baidu's push into AI is taking on greater importance. "The era of mobile internet has ended," said Li in a March 10 interview. "We're going to aggressively invest in AI, and I think it's going to benefit a lot of people and transform industry after industry."

In January the company named former Microsoft Corp. executive Qi Lu as its chief operating officer, with a mandate to reshape the company around such technologies as deep learning, augmented reality, and image recognition. He joins Chief Scientist Andrew Ng, a Stanford academic who worked on Alphabet Inc.'s deep learning group before decamping to Baidu in 2014. Under Ng's watch, the company's AI team, which is scattered across research labs in Beijing, Shenzhen, Shanghai, and Sunnyvale, Calif., has grown to 1,300 and is expected to increase by several hundred more hires this year. "A ton of stuff is invented in China, and a ton of stuff is invented in the U.S.," says Ng, who's based in Silicon Valley. "By having people in both countries, we see the latest trends."

On the day in May 2014 that the Sunnyvale research center opened, Ng and his top lieutenant, Adam Coates, sat down in front of a blank whiteboard to identify their first project. After drawing up a list of possibilities (and challenges), they settled on speech recognition as a foundation on which they could build a series of other offerings.

By mid-2015, the 50-person team had a product called Deep Speech that could decipher much of what was said in English. Rather than picking apart phrases word by word, the software parsed through vast reams of language data and then extrapolated patterns, a process known as deep learning. The system could transcribe speech more accurately than traditional engines that rely on vocabulary lists and phonetic dictionaries, Ng says, because it took into account a word's context to determine its meaning.

One thing that consistently tripped it up, though, was words and names that had over time crept into the English lexicon from other languages. "If you want to say 'Play music by Tchaikovsky,' the software would return answers like 'Play music and try cough ski,'" says Coates, whom Ng recruited from Stanford. "We literally dubbed it the Tchaikovsky Problem."

Instead of simply adding Tchaikovsky to the system's vocabulary list, Baidu's programmers had to help Deep Speech teach itself to understand the word. That involved pumping in even more data to help the system put things in context.

Shiqi Zhao, the Beijing-based associate director of Baidu's natural language processing department, recalls that as a computer science student at China's Harbin Institute of Technology he had only 2 million word pairs of English-to-Chinese terms to play with while working on computer-based translation; Baidu has about 100 million. However, that's still far fewer than Alphabet's 500 million, according to a 2016 article in Science magazine that featured one of the U.S. company's research scientists, Quoc V. Le.

To help close the gap, Baidu has resorted to an age-old tactic: throw lots of people at the problem. The company now facilitates manual translations year-round and stages marathon events such as the one in December at regular intervals, in which it offers clients prizes such as smartphones and water purifiers. The data collected help enhance the performance of its Baidu Translate engine as well as further the development of Deep Speech.

The software created by the Sunnyvale team had its commercial debut in July 2016 with the release of TalkType, a keyboard app with a talk-to-text feature. The technology has since been incorporated into other products, including a Siri-like personal assistant named DuMi in China and DuEr everywhere else. (DuMi is a fusion of "du" from Baidu and "mi," which means secretary in Mandarin; DuEr sounds like "doer.") The machine learning Baidu has inculcated into Deep Speech is helping it animate other products with intelligence. For instance, it's the secret sauce in Xiaoyu Zaijia (Little Fish), a voice-controlled robot, à la Amazon Echo, that Baidu showed off at the CES show in Las Vegas in January.

Baidu's portfolio of web properties gives it access to one of the largest and most detailed sets of consumer data ever produced in China, which, in theory at least, should give it an edge in building AI-infused products and services for the mainland. Thanks to Nuomi and Waimai, the company knows what Chinese households buy and eat, while Ctrip.com, the world's second-largest online travel agent, reveals where they want to holiday. Every month 665 million smartphone users surf its mobile portal and apps, while 341 million use Baidu Maps to reach their destination. "It's a mistake to think of AI as a product; it underpins and enables product," says HSBC Holdings Plc analyst Chi Tsang. "Think of all the use cases."

The new AI products aren't contributing much to Baidu's bottom line yet. But the company's nascent expertise in this area could help it achieve dominance in segments where it's already present and propel it into new ones, such as cloud computing and self-driving cars. "In the next three to five years all those areas have the potential to become another Baidu," company President Zhang Ya-Qin says, referring to Baidu's $60.2 billion market capitalization. "Right now it's time to make some bets." With assistance from David Ramli and Alex Webb.

The bottom line: Baidu's 1,300-person AI team is writing software to improve everything from translation to food ordering.

More:

The Mobile Internet Is Over. Baidu Goes All In on AI - Bloomberg

Posted in Ai | Comments Off on The Mobile Internet Is Over. Baidu Goes All In on AI – Bloomberg

Pentagon sees more AI involvement in cybersecurity – Defense Systems

Posted: at 7:20 am

Cyber Defense

As the Pentagon's Joint Regional Security Stacks program moves forward with efforts to reduce the server footprint, integrate regional data networks and facilitate improved interoperability between previously stove-piped data systems, IT developers see cybersecurity efforts moving quickly toward increased artificial intelligence (AI) technology.

"I think within the next 18 months, AI will become a key factor in helping human analysts make decisions about what to do," former DOD Chief Information Officer Terry Halvorsen said.

As technology and advanced algorithms progress, new autonomous programs able to perform a wider range of functions by themselves are expected to assist human programmers and security experts defending DOD networks from intrusions and malicious actors.

"Given the volume and where I see the threat moving, it will be impossible for humans by themselves to keep pace," Halvorsen added.

Much of the conceptual development surrounding this AI phenomenon hinges upon the recognition that computers are often faster and more efficient at performing various procedural functions; at the same time, many experts maintain that human cognition is important when it comes to solving problems or responding to fast-changing, dynamic situations.

However, in some cases, industry is already integrating automated computer programs designed to be deceptive, giving potential intruders the impression that what they are probing is human activity.

For example, executives from the cybersecurity firm Galois are working on a more sophisticated version of a honey pot tactic, which seeks to create an attractive location for attackers, only to glean information about them.

"Honey pots are an early version of cyber deception. We are expanding on that concept and broadening it greatly," said Adam Wick, research head at Galois.

A key element of these techniques uses computer automation to replicate human behavior to confuse a malicious actor, hoping to monitor or gather information from traffic going across a network.

"Its goal is to generate traffic that misleads the attacker, so that the attacker cannot figure out what is real and what is not real," he added.

"The method generates very human-looking web sessions," Wick explained. An element of this strategy is to generate automated or fake traffic to mask web searches and servers so that attackers do not know what is real.

"Fake computers look astonishingly real," he said. "We have not to date been successful in always keeping people off of our computers. How can we make the attacker's job harder once they get to the site, so they are not able to distinguish useful data from junk?"

Using watermarks to identify cyber behavior of malicious actors is another aspect of this more offensive strategy to identify and thwart intruders.

"We can't predict every attack. Are we ever going to get where everything is completely invulnerable? No, but with AI, we can change the configuration of a network faster than humans can," Halvorsen added.

The concept behind the AI approach is to isolate a problem, reroute around it, and then destroy the malware.

About the Author

Kris Osborn is editor-in-chief of Defense Systems. He can be reached at kosborn@1105media.com.

See the original post here:

Pentagon sees more AI involvement in cybersecurity - Defense Systems

Posted in Ai | Comments Off on Pentagon sees more AI involvement in cybersecurity – Defense Systems

Never Mind Alexa: Why AI Obsession Echoes Past Hype Cycles – MediaPost Communications

Posted: at 7:20 am

For a moment in early 2013, it looked like Google Glass was going to change everything.

At the time, the product had been seeded to top influencers, so it wasn't unusual to see, for instance, Tumblr CEO David Karp wearing a pair in public. A Guardian column claimed that Google Glass would change the world and compared the invention to Gutenberg's printing press.

We all know how that turned out.

I cite the hype around Google Glass to illustrate how susceptible we all are to the belief that we can identify a paradigm-changing technology and predict its influence.

2013 wasn't that long ago, but we've already seen similar hype around the Apple Watch and virtual reality. Now there is similar hype around AI, and Amazon's Alexa in particular.

While AI probably will have a major effect on our lives, its impact on life in 2017 will be minimal. Marketers would be better off focusing on what's available now rather than what might be here in the 2020s.

The limitations of 2017 AI

Amazon's cylindrical Echo device was, per the Economist, in more than 4 million U.S. households before Christmas. It was the retailer's top seller during this year's holiday season, so there's a good chance you know someone with an Echo.

This install base is greater than Google Glass's, so a change may be upon us. The reality, however, is that Echo is merely a novelty at this point.

It's worth noting that everything you can do with an Echo you could do with your phone two years ago. In fact, if you attach your phone to a set of speakers it is indistinguishable from Echo, except that you'll be talking to Siri or Google's voice assistant.

Like Siri and Google's assistant, when you talk to Alexa, you realize you're talking to something that's artificial, but not all that intelligent.

As David Gerwitz at ZDNet has noted, Alexa doesn't handle natural language all that adeptly and doesn't make the logical jumps you would expect from a sophisticated system. For example, when Gerwitz asked Alexa to play Preservation Hall Jazz Band, he'd get the same frustrating message that it was not in his music library, but if he added "on Pandora," then Alexa would get the gist.

The other issue with voice control is that it offers limited use cases. Do it on the subway and you'll annoy anyone within 12 feet. Similarly, voice-control systems in cars have proven to be distracting for drivers. Voice control is also plagued with problems ranging from background noise to an inability to understand voice commands.

As for AI itself, while the technology is making huge strides, at this point it is merely pattern recognition. As Mark Zuckerberg has said: "People who work in AI aren't scared of The Singularity because no one actually understands how the human brain works and how real thinking works."

Focusing on 2017

None of this is to say that AI won't become a transformational technology over time. But such a transformation is still years away. In 2017, the smartphone is the key device. IoT devices are still niche. That will probably change over time. However, it's too early to prep a zero-user-interface strategy when the vast majority of activity is happening on phones.

Speaking of phones, let's recall how a truly transformational product, the iPhone, was actually received when it launched in 2007. Though some, like Walt Mossberg, accurately predicted the phone would herald a new era, others groused about its lack of a QWERTY keyboard and sniffed that it wasn't as cool as a BlackBerry.

Recall too, that few thought the iPod would be a big deal, either.

As German philosopher Arthur Schopenhauer once said, all truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident.

Sounds like Google Glass is on the right path after all.

Read more:

Never Mind Alexa: Why AI Obsession Echoes Past Hype Cycles - MediaPost Communications

Posted in Ai | Comments Off on Never Mind Alexa: Why AI Obsession Echoes Past Hype Cycles – MediaPost Communications

Why parents might not be ready for AI in the classroom – VentureBeat

Posted: March 12, 2017 at 8:13 pm

We previously looked at the ways artificial intelligence may disrupt the traditional classroom. From blended learning to AI tutors, algorithms are poised to reshape the way teachers engage with their students. But AI may do more than influence classroom experiences. It has the potential to replace classrooms entirely. No one can reliably predict the degree of impact AI may have in education, but one thing seems clear: parents should expect to deal with more complexity and greater responsibility in overseeing their children's education.

Parents are responsible for nearly every aspect of their children's development. Health care, cognition, socialization, behavioral modeling: parents do it all. The one area in which they exercise less control is in formal education. They make decisions about whether to send their children to private or public schools or to home school, oversee homework sessions, and volunteer for the PTA. But they leave the actual teaching to the teachers.

History shows that new technologies upend existing paradigms, usually in incremental ways. But artificial intelligence is unlike any technology we've encountered. AI could radically alter learning environments, the schools themselves. What will it mean for parents if their children can learn just as well, if not better, from the comfort of their homes instead of in traditional classrooms?

Before we can answer that, we have to address something more fundamental: What is it that we expect of education? And, in particular, what is it that parents expect? Consider these three statements about education, which capture the range of expectations:

"Education does not mean teaching people to know what they do not know. It means teaching them to behave as they do not behave." (John Ruskin)

"British parents are very ready to call for a system of education which offers equal opportunity to all children except their own." (Lord Eccles)

"The value of an education is not the learning of many facts but the training of the mind to think, something that cannot be learned from textbooks." (Albert Einstein)

Depending on how it is structured, education is expected to provide a child with a craft, career, or trade; a foundation of knowledge; the development of culture; the capacity to learn; a hunger for knowledge and wisdom; or good behavior. That is a pretty long list of expectations. So long, in fact, that there is no school that can actually deliver on everything that might be expected of it.

Rather than try to define what education should be, lets simply acknowledge the most common elements of peoples expectations. In general, we expect schools to achieve or facilitate:

1) Preparation of children for a productive life and career

2) The transfer of an agreed-upon base of knowledge

3) The development of a child's understanding of their own culture

4) Socialization of a child around behavioral norms

5) Creation of habits supportive of lifelong learning

The American education system is built on standardization. Unless students attend Montessori or other philosophically driven schools, most learn from generalized lessons delivered in generalized classrooms. When they're old enough, they begin taking standardized tests to determine how well they've kept up.

Of course, many students fall behind as they struggle to grasp concepts that are presented in ways they don't understand. They may be ill-suited to the standardized school environment, or their cognitive development may take place at a different rate than that of their peers, either faster or slower.

Artificial intelligence offers an alternative for these children in the form of personalized learning systems that adjust lessons, reviews, and activities based on individual skill levels and strengths. The technology's adaptive customization around individual capabilities also offers the opportunity for students to advance at the pace most appropriate for them.

Given evidence that AI-powered intelligent tutoring systems outperform traditional classrooms, AI could have a democratizing effect on education, not to mention reducing the need for large centralized physical schools. With the capacity to constantly adapt to an individual child's capabilities and circumstances, AI learning systems allow what in manufacturing is called mass customization.

But if children are learning at their own pace, in their own way, what happens to our existing one-size-fits-all approach, where children are collected in one large place and put through a standardized curriculum? No one knows the answer yet.

But taken to its logical extreme, if there is less reason to send children to large, centralized, physical schools, parents may begin serving as the educational gatekeepers. They'll also have to facilitate behavioral and social learning opportunities. And, of course, they'll have to grapple with questions of how to prepare their children for a rapidly changing workforce. AI is likely to give us choices, societally and as individuals, which we have not had before and for which we have not considered the full ramifications.

With AI in the mix, it seems likely that our educational choices will broaden, and the context of education is likely to change quickly, as well. A World Economic Forum report on the future of jobs predicts that 65 percent of students starting elementary school today will eventually work in jobs that don't exist yet. If a core aim of education is to groom students for career success, how do we do that when we don't know what careers will be relevant when they come of age?

We don't know how the impact of AI will play out. It is worth recalling the excitement and exuberance in the early and mid-1980s, when personal computers were first introduced into school systems. There was great anticipation that computers would have significant positive impacts on students' educational outcomes. But while computers in schools changed education practices and experiences, data shows that they did not make a meaningful difference in educational outcomes, at least in the aggregate. National scores on the National Assessment of Educational Progress tests for graduating seniors have barely budged in nearly fifty years.

All of which is to say that it is premature to make firm forecasts of how AI might change educational outcomes. We can, however, think through the logical consequences of reasonable assumptions. AI-enabled education might give parents much more control over their child's education than does our current one-size-fits-all approach. But with AI's potential comes more complexity, consequentiality, and personal accountability. Parents may find themselves facing entirely new and complicated decisions related to their children's education.

If we indeed move to a system of education that optimizes individual learning experiences and outcomes, then we might expect better outcomes overall but also potentially greater variance in outcomes. Moving away from a factory-style, standardized educational model might also drive higher levels of knowledge acquisition. Right now, education is still strongly a community activity. What happens if the administrative focus changes from large regions to local neighborhoods and becomes centered around self-organizing groups of parents with shared goals? Greater local control, but also, perhaps, less normalization across larger groups.

Following through with this logic, here are eight possible implications of the adoption of AI in education that parents and society at large may have to address:

AI-driven learning is a transformative solution with the power to change the way kids view the world and how they interact with the people around them. A child who learns via AI technologies could gain untold benefits and skills intellectually, socially, and emotionally. But this method is likely to demand increased parental oversight, including time-consuming direct supervision of kids' AI learning activities. Parents may have to make tough decisions about their careers to oversee their children's educations, or about where the family will live to access the best resources and support for this new type of learning.

AI has the potential to change the quality, delivery, and scalability of education. But it may also change forever the role parents play in their children's education.

Additional article contributors: Charles Bayless, Mehdi Ghafourifar, and Brian Walker.

This article appeared originally at Entefy.

Alston Ghafourifar is the CEO and cofounder of Entefy, an AI-communication technology company, introducing the first universal communicator.

Read more from the original source:

Why parents might not be ready for AI in the classroom - VentureBeat

Posted in Ai | Comments Off on Why parents might not be ready for AI in the classroom – VentureBeat
