
Category Archives: Ai

Google Is Already Late to China’s AI Revolution – WIRED

Posted: June 3, 2017 at 12:29 pm


Go here to see the original:

Google Is Already Late to China's AI Revolution - WIRED

Posted in Ai | Comments Off on Google Is Already Late to China’s AI Revolution – WIRED

How AI Is Changing the World of Advertising Forever – TNW

Posted: at 12:29 pm

Given how much Hollywood loves remakes, I'm curious to see what a futuristic Mad Men is going to look like. Don't get me wrong; I'm not expecting to see a robotic Don Draper who writes poignant lines of copy aggregated from data points all over the world (that'd be cheesy and boring). Rather, I'd be more excited to see how technology is going to change the world of advertising for good.

A lot of people might think that under the reign of Artificial Intelligence every job will suddenly be replaced by a robot. However, the core component of advertising is storytelling, which is something that requires a human touch. Even more, AI isn't going to replace storytellers, but rather empower them. Yes, the world of artificial intelligence is about to make advertising more human. Here's why:

It's no secret that the advertising world goes giddy over any innovation in the tech realm. After all, a big portion of how firms gain an edge in their industry is by being up on the latest and greatest, as well as demonstrating a capacity to look at how new practices can be applied to client campaigns. And when it comes to AI, a lot of major agencies have already situated themselves ahead of the curve.

The interesting thing to note here isn't necessarily that these agencies are using AI in general, but rather how they're using it. For example, the link above notes how a few firms have teamed up with AI companies to work on targeting and audience discovery. While these practices existed long before, Artificial Intelligence has been accelerating the process. However, even with major players teaming up with the likes of IBM Watson, smaller agencies and startups have been on this trend as well.

An excellent example of this is the company Frank, an AI-based advertising firm for startups. Frank's goal is to use AI for the same kind of targeting mentioned above, only offering it to those businesses that could really use the savings. The platform allows you to set the goals of your campaign, and it homes in on targeting and bidding efficiently. This saves the time and money often devoted to outsourcing digital advertising efforts, and it gives an accurate depiction of how ads are performing in real time. Expect players like Frank to make a significant change in how small businesses and startups approach AI in their marketing.

One of the biggest news stories to hit about AI and advertising was Goldman Sachs' $30 million investment in Persado. If you haven't heard of it yet, Persado essentially aggregates and compiles cognitive content, which is copy backed by data. It breaks down everything from sentence structure, word choice, emotion, and time of day, and can even produce a more accurate call-to-action. For those who hire digital marketers and advertisers, this sounds like a dream come true for saving time and money. However, when it comes to writing, AI can only go so far.

While some content creators and digital copywriters might be a little nervous that AI will eventually take their jobs, that's simply not the case. Writing involves a certain sense of emotional intelligence and response that no computer can feel. Moreover, the type of content that AI can create is limited to short-form messages. I'm not sure about you, but I'll safely bet that no major marketing director is willing to put their Super Bowl ad in the hands of a computer. Overall, while Wall Street recognizes Artificial Intelligence's potential impact in the creative world, it's safe to say that when it comes to telling a story, that human touch will never go away.

Perhaps one of the most underrated things about AI is its potential to eliminate practices altogether. While certain jobs in the creative field will never go away, as mentioned above, there's a possibility that certain processes in the marketing channel might change drastically.

For example, companies like Leadcrunch are using AI to build up B2B sales leads. While B2B sales used to rely on either targeted ads or sales teams to bring clients in, software like Leadcrunch's is eliminating those processes altogether. Granted, this isn't exactly a bad thing, as a lot of B2B communication relies heavily on educating consumers, something a banner ad can't do as well as a person. Overall, companies like this are going to drastically change how our pipelines work, potentially changing how the relationship between advertising and AI works hand-in-hand for a long time.

Read next: How Automation is Making The Sales Process Easier

See more here:

How AI Is Changing the World of Advertising Forever - TNW

Posted in Ai | Comments Off on How AI Is Changing the World of Advertising Forever – TNW

How to Integrate AI Into Your Digital Marketing Strategy – Search Engine Journal

Posted: at 12:29 pm

Artificial intelligence (AI) used to be pure science fiction. But now AI is science fact.

This once futuristic technology is now implemented in almost every aspect of our lives. We must adjust.

Luckily, we're used to adjusting, pivoting, and expecting change of any kind, and quickly. The use of AI in our digital marketing strategies is no different.

We have to think of AI as we thought of mobile years ago. If we don't learn about it and apply it, then we are destined to be out of a job.

AI encompasses a large scope of technologies. The basic concept of AI is that it is a machine that learns to mimic human behavior.

Google has built AI into pretty much every product it has, from paid and organic search, to Gmail, to YouTube. Facebook powers all its experiences using AI, whether it's the posts that appear in the news feed or the advertising you see.

AI can be used for many things, but today we're only focusing on what it can do for your marketing strategy and how to implement it.

Here are four ways that businesses of any size can start using AI.

Content creation is expensive and time-consuming. But now we have access to AI tools that allow users to input data and output content.

Companies like Wordsmith allow us to connect data and write short outlines, then they'll generate a story in seconds. This allows companies everywhere to scale their content production while improving the quality of the content they put out.

AI-generated content is built using a natural language generation (NLG) engine. These tools make it easy to format and translate content quickly.

Not ready to produce content at scale with robots yet? No problem. You can still integrate AI into your talent sourcing for writers.

Companies like Scripted connect you to freelance writers by analyzing content to find the best writer for your job. This saves you hours of work reading through content.

Basically, AI-generated content creation and content-creator sourcing are going to improve the quality of content. It is important to jump on board and get your content process in order before your competition moves ahead of you.

Chatbots mimic human behavior just like any other sort of AI, but they have a specialty: interpreting consumer questions and queries. They even help customers complete orders.

There are many companies out there creating chatbots and virtual assistants to help businesses keep up with the times. For instance, Apple's Siri, Google Assistant, Amazon Echo, and others are all forms of chatbots.

Chatbots are also being created specifically to help marketers with customer service.

Facebook is particularly interested in helping brands create chatbots to improve customer service. You can access the tools it has created, called the Wit.ai Bot Engine.

If you find this too developer-centered and would prefer that another company build these for you, you can use tools like ChattyPeople.

Whichever tools you choose, it's important for any company that provides customer service to start implementing chatbots now.

Anyone who has ever had to do an image audit on a large or commercial website will truly understand the value of a tool that can auto-recognize images.

When issues of image and video licensing, poor image and video tagging, and UX come into play, AI can solve our problems.

Next time you're hit with an image audit or a need to categorize your images and video for improved UX, you should look to AI tools like Dextro or Clarifai.

You can also use tools for smart tagging. Instead of bringing in tools to review current assets, you can use Adobe Experience Manager to maintain appropriate tagging with its smart-tagging features.

All of these tools will save you so much time, energy, and (in some cases) money.

Voice search tools, such as Amazon Echo and Google Home, will change the face of marketing forever.

We must start looking at the ways voice search will change our content, websites, and customer service options.

We have to think about how a person would ask for something instead of how they would search for it using text.

Studies suggest that queries in voice search are much longer than in text search. So focus on long-tail keywords instead of short ones, as the short sketch below illustrates.
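
As a trivial illustration of that advice, the sketch below separates a handful of invented queries into long-tail (voice-style) and head terms using an arbitrary four-word cutoff; both the query list and the cutoff are assumptions for the example, not data from the article.

```python
# Hypothetical query list and word-count cutoff, used only to illustrate the
# long-tail vs. head-term distinction discussed above.
queries = [
    "running shoes",
    "best waterproof running shoes for flat feet",
    "pizza near me",
    "where can I buy gluten free pizza dough tonight",
]

LONG_TAIL_MIN_WORDS = 4   # arbitrary cutoff for this example

long_tail = [q for q in queries if len(q.split()) >= LONG_TAIL_MIN_WORDS]
short_head = [q for q in queries if len(q.split()) < LONG_TAIL_MIN_WORDS]

print("long-tail (voice-style):", long_tail)
print("head terms (typed-style):", short_head)
```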

We also know that language is a much greater signifier of intent. This means that as we start to use more voice search, the conversions (in theory) should be higher and the quality of leads should be better.

Voice search isn't on its way. It's here. Make sure your brand stays ahead of the game.

AI once sounded scary and futuristic. But it is neither.

If you study, test, and start implementing what you know about AI into your strategies now, then you'll be able to keep up.

Don't let yourself get left behind. It's only a matter of time before AI becomes the new normal and the next big thing hits.

See more here:

How to Integrate AI Into Your Digital Marketing Strategy - Search Engine Journal

Posted in Ai | Comments Off on How to Integrate AI Into Your Digital Marketing Strategy – Search Engine Journal

Researchers Have Created an AI That Could Read and React to Emotions – Futurism

Posted: at 12:29 pm

In Brief: University of Cambridge researchers have developed an AI algorithm that can assess how much pain a sheep is in by reading its facial expressions. This system can facilitate the early detection of painful conditions in livestock, and eventually, it could be used as the basis for AIs that read emotions on human faces.

Reading Sheep

One of today's more popular artificially intelligent (AI) androids comes from the TV series MARVEL's Agents of S.H.I.E.L.D. Those of you who followed the latest season's story (no spoilers here!) probably love or hate AIDA by now. One of the most interesting things about this fictional AI character is that it can read people's emotions. Thanks to researchers from the University of Cambridge, this AI ability might soon make the jump from sci-fi to reality.

The first step in creating such a system is training an algorithm on simpler facial expressions and just one specific emotion or feeling. To that end, the Cambridge team focused on using a machine learning algorithm to figure out if a sheep is in pain, and this week, they presented their research at the IEEE International Conference on Automatic Face and Gesture Recognition in Washington, D.C.

The system they developed, the Sheep Pain Facial Expression Scale (SPFES), was trained on a dataset of 500 sheep photographs to learn how to identify five distinct features of a sheep's face when the animal is in pain. The algorithm then ranks the features on a scale of 1 to 10 to determine the severity of the pain. Early tests showed that the SPFES could estimate pain levels with 80 percent accuracy.
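
As a rough sketch of the scoring step described above (not the Cambridge team's actual code), the snippet below averages per-feature scores on the 1-to-10 scale into an overall pain estimate. The feature names and the alert threshold are assumptions made for the example.

```python
# A minimal, assumed aggregation step for a SPFES-style scorer: each of five
# facial features gets a 1-10 score from some upstream image model, and the
# overall pain estimate is simply their mean. Feature names and the flagging
# threshold are illustrative, not taken from the published scale.

FEATURES = [
    "orbital_tightening",
    "cheek_tightening",
    "ear_position",
    "lip_and_jaw_profile",
    "nostril_and_philtrum_shape",
]

def pain_score(feature_scores: dict) -> float:
    """Average the per-feature scores (each on a 1-10 scale)."""
    missing = [f for f in FEATURES if f not in feature_scores]
    if missing:
        raise ValueError(f"missing feature scores: {missing}")
    return sum(feature_scores[f] for f in FEATURES) / len(FEATURES)

# Hypothetical per-feature output for one sheep photo.
example = {
    "orbital_tightening": 7,
    "cheek_tightening": 6,
    "ear_position": 8,
    "lip_and_jaw_profile": 5,
    "nostril_and_philtrum_shape": 6,
}

score = pain_score(example)
print(f"estimated pain severity: {score:.1f}/10")
if score >= 6.0:   # assumed threshold for flagging an animal
    print("flag for veterinary inspection")
```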

SPFES was a departure for Peter Robinson, the Cambridge professor leading the research, as he typically focuses on systems designed to read human facial expressions. "There's been much more study over the years with people," Robinson explained in a press release. "But a lot of the earlier work on the faces of animals was actually done by Darwin, who argued that all humans and many animals show emotion through remarkably similar behaviors, so we thought there would likely be crossover between animals and our work in human faces."

As co-author Marwa Mahmoud explained, "The interesting part is that you can see a clear analogy between these actions in the sheep's faces and similar facial actions in humans when they are in pain; there is a similarity in terms of the muscles in their faces and in our faces."

Next, the team hopes to teach SPFES how to read sheep facial expressions from moving images, as well as train the system to work when a sheep isn't looking directly at a camera. Even as is, though, the algorithm could improve the quality of life of livestock like sheep by facilitating the early detection of painful conditions that require quick treatment, adding it to the growing list of practical and humane applications for AI.

Additional developments could lead to systems that are able to accurately recognize and react to human emotions, further blurring the line between natural and artificial intelligences.

Go here to read the rest:

Researchers Have Created an AI That Could Read and React to Emotions - Futurism

Posted in Ai | Comments Off on Researchers Have Created an AI That Could Read and React to Emotions – Futurism

When Will AI Exceed Human Performance? Researchers Just Gave A Timeline – Fossbytes

Posted: at 12:29 pm

Short Bytes: Various leaders in the technology world have predicted that AI and computers will overpower us in the future. These assumptions have become clearer with new research published by Oxford and Yale universities: there is a 50% chance that within the next 45 years AI could be used to automate almost all human tasks.

According to a survey conducted by the researchers, which gathered 353 responses (out of 1,634 requests) from AI experts who published at the NIPS and ICML conferences in 2015, there is a 50% chance of AI achieving efficiency that would put it on par with humans within the next 45 years.

The participants were asked to estimate the timing of specific AI capabilities like language translation and folding laundry, superiority at specific occupations like surgeon and truck driver, superiority over humans at all tasks, and how these advancements would impact society.

The researchers calculated the median figure from the data collected from the participants and created an estimated timeline showing the approximate number of years from 2016 for AI to excel at various activities.

The intervals in the figure represent the date range from 25% to 75% probability of the event occurring, with the black dot representing 50% probability.
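
A toy version of that aggregation is sketched below: for each task, sort the experts' predicted years and report the 25th and 75th percentiles alongside the median (the "black dot"). The tasks and years are invented for illustration, not the survey's data.

```python
# Invented expert responses (years by which each capability is reached); the
# real survey aggregated hundreds of responses in the same general way.
import statistics

predicted_years = {
    "translate languages": [2022, 2024, 2025, 2027, 2030, 2035],
    "drive a truck":       [2025, 2027, 2028, 2030, 2040, 2045],
    "work as a surgeon":   [2045, 2050, 2053, 2060, 2080, 2100],
}

for task, years in predicted_years.items():
    q1, median, q3 = statistics.quantiles(sorted(years), n=4)
    print(f"{task:20s} 25%={q1:.0f}  median={median:.0f}  75%={q3:.0f}")
```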

So, what do the numbers say? Would we have AIs playing Angry Birds better than us in the next seven years? Would AIs be able to replace those following-you-forever salespersons in the supermarkets, maybe in the next 15 to 30 years? In fact, you will be surprised to know that KFC has even launched an AI-powered store in China where the bots know which item you would prefer.

Similarly, you could expect AI surgeons operating on you by 2060 and AI researchers creating more advanced AI by 2100. The research predicts that by 2140, AI would be able to do almost everything that humans can do.

Clearly, these numbers induce a sense of insecurity. Still, it appears that the experts may have underrated the pace of development currently under way in this field.

The report suggested a time span of around 12 years for AI to defeat humans at the game of Go. But the recent news of Google's AlphaGo beating Chinese world champion Ke Jie tells a different story about the future.

One important thing to consider here is the speed of AI development. Experts based in Asia might have witnessed a faster growth rate than those in the United States. The researchers further noted that the age and expertise of the respondents didn't affect the predictions, but their locations did.

North American researchers estimated around 74 years for AI to outperform humans while the number was only 30 in the case of Asian researchers.

Also, the 45-year prediction for AI to outperform humans should be taken with a pinch of salt. It's a long timespan, often longer than a person's entire professional life, and the predicted changes are unlikely to happen with the technology currently accessible to us. It is a number to be treated with caution.

The research, which is yet to be peer-reviewed, has been published on arxiv.org.

Got something to add? Drop your thoughts and feedback.

Read the original here:

When Will AI Exceed Human Performance? Researchers Just Gave A Timeline - Fossbytes

Posted in Ai | Comments Off on When Will AI Exceed Human Performance? Researchers Just Gave A Timeline – Fossbytes

AlphaGo AI stuns go community – The Japan Times

Posted: at 12:29 pm

Google's artificial intelligence program AlphaGo stunned players and fans of the ancient Chinese board game of go last year by defeating South Korean grandmaster Lee Sedol 4-1. Last month, an upgraded version of the program achieved a more astonishing feat by trouncing Ke Jie from China, the world's top player, 3-0 in a five-game contest. In the world of go, AI appears to have surpassed humans, ushering in an age in which human players will need to learn from AI. What happened in the game of go also poses a larger question of how humans can coexist with AI in other fields.

In a go match, two players alternately lay black and white stones on the 361 points of intersection of a board with a 19-by-19 grid of lines, trying to seal off a larger territory than the opponent. It is said that the number of possible moves amounts to 10 to the power of 360. This huge variety of options compels even top-class players to differ on the question of which moves are best. Such freedom to maneuver led experts to believe it would take a while before AI caught up with humans in the world of go. Against this background, AlphaGo's sweeping victory over the world's No. 1 player is a significant event that not only symbolizes the rapid development of computer science but is also encouraging for the application of AI in various fields.

In part of its contest with Lee in Seoul in March 2016, AlphaGo made irrational moves, cornering itself into a disadvantageous position. But in its contest with Ke in the eastern Chinese city of Wuzhen in late May, it made convincing moves throughout, subjecting its human opponent to a horrible experience. Ke called AlphaGo "a go player like a god."

AlphaGo was built by DeepMind, a Google subsidiary. It takes advantage of technology known as deep learning, which uses neural networks similar to those of human brains to learn from vast amounts of data and improve its judgment. This is analogous to a baby learning a language by being exposed to a huge volume of speech over a period of time. The program not only learns effective patterns of moves by studying enormous volumes of records of previous games but also hones its skills by playing millions of games against itself. In this manner, it has achieved a remarkable evolution over the past year. Unlike humans, it is free of fatigue and emotional fluctuations. Because it grows stronger by playing games against itself, there is no knowing how good it will become in the future.
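
To make the "playing millions of games against itself" idea concrete, here is a toy self-play loop for a far simpler game (Nim with 10 stones, take 1 to 3 per turn): one policy plays both sides and nudges a tabular value estimate toward each game's outcome. It is only a schematic of self-play learning, nothing like AlphaGo's deep networks or tree search.

```python
# Toy self-play value learning on Nim: values[s] estimates the win chance for
# the player to move with s stones left. Both "players" share the same policy
# and the same table, so the program improves by playing against itself.
import random

values = {}
ALPHA = 0.1          # learning rate

def choose_move(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < 0.1:                      # small amount of exploration
        return random.choice(moves)
    # otherwise leave the opponent in the worst-valued position
    return min(moves, key=lambda m: values.get(stones - m, 0.5))

def play_one_game():
    stones, player = 10, 0
    history = {0: [], 1: []}
    while stones > 0:
        history[player].append(stones)
        stones -= choose_move(stones)
        player ^= 1
    winner = player ^ 1                            # taker of the last stone wins
    for p in (0, 1):
        outcome = 1.0 if p == winner else 0.0
        for s in history[p]:
            values[s] = values.get(s, 0.5) + ALPHA * (outcome - values[s])

for _ in range(5000):
    play_one_game()

# Positions that are multiples of 4 should now look bad for the player to move.
print({s: round(values.get(s, 0.5), 2) for s in range(1, 11)})
```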

Feeling intimidated by AI programs should not be the only reaction of human go players. They can draw inspiration from AlphaGo, since it shows a superior grasp of the whole situation of a contest instead of being obsessed with localized moves, and it often lays stones in and around the center of the board, whereas human players usually first try to seal off territory around the corners. Its playing records also prove that even some moves traditionally considered bad have advantages. By learning from AlphaGo, go players can acquire new skills and make their contests more interesting.

AlphaGo does have a weak point: it cannot explain the thinking behind the particular moves it makes. When watching ordinary go contests, fans can enjoy listening to analyses by professional players. Ordinary go contests are also interesting because psychology plays such an important part in the game, especially at critical points. This shows there are some elements of go that AI cannot take over.

DeepMind is thinking about how it can apply the know-how it has accumulated through the AlphaGo program to other areas, such as developing drugs and diagnosing patients through data analysis. But the fact that the program made irrational moves during its match with South Korea's Lee shows that the technology is not error-free, a problem that must be resolved before AI can be applied to such fields as medical services and self-driving vehicles. Many problems may have to be overcome to make AI safe enough for application in areas where human lives are at stake.

A report issued by Nomura Research Institute says that in 10 to 20 years, AI may be capable of taking over jobs now done by 49 percent of Japan's workforce. At the same time, it says AI cannot intrude into fields where cooperation or harmony between people is needed or where people create abstract concepts, as in art, historical studies, philosophy and theology. It will be all the more important for both the public and private sectors to make serious efforts to cultivate people's ability to think and create while working out what proper roles AI should play in society.

Follow this link:

AlphaGo AI stuns go community - The Japan Times

Posted in Ai | Comments Off on AlphaGo AI stuns go community – The Japan Times

The next big leap in AI could come from warehouse robots – The Verge

Posted: June 1, 2017 at 10:39 pm

Ask Geordie Rose and Suzanne Gildert, co-founders of the startup Kindred, about their company's philosophy, and they'll describe a bold vision of the future: machines with human-level intelligence. Rose says these will be perhaps the most transformative inventions in history, and they aren't far away. More intriguing than this prediction is Kindred's proposed path for achieving it. Unlike some of the most cash-flush corporations in Silicon Valley, Kindred is focusing not on chatbots or game-playing programs, but on automating physical robots.

Gildert, a physicist who conceived Kindred in 2013 while working with Rose at the quantum computing company D-Wave, thinks giving AI a physical body is the only way to make real progress toward a true thinking machine. "If you want to build intelligence that conceptually thinks in the same way a human does, it needs to have a similar sensory motor as humans do," Gildert says. The trick to achieving this, she thinks, is to train robots by having them collaborate with humans in the physical world. Rose, who co-founded D-Wave in 1999, stepped back from his role as chief technology officer to work on Kindred with Gildert.

Kindred wants to train robots by having them collaborate with humans in the physical world

The first step toward their new shared goal is an industrial warehouse robot called the Orb. It's a robotic arm that sits inside a hexagonal glass encasement, equipped with a bevy of sensors to help it see, feel, and even hear its surroundings. The arm is operated using a mix of human control and automated software. Because so many warehouse workers today spend a significant amount of time sorting products and scanning barcodes, Kindred developed a robotic arm that can do some of those steps automatically. Meanwhile, humans step in when needed to manually operate the robot to perform tasks that are difficult for machines, like gripping a single product from a cluster of different items.

Workers can even operate the arm remotely using an off-the-shelf HTC Vive headset and virtual reality motion controllers. It turns out that VR is great for gathering data on depth and other information humans intuitively use to grasp objects.

Kindred is now focused on getting its finished Orb into warehouses, where it can begin learning at an accelerated pace by sorting vastly different products and observing human operators. Because the company gathers data every time a human uses the Orb, engineers are able to improve its software over time using techniques such as reinforcement learning, which improves software through repetition. Down the line, the Orb should slowly take over more responsibility and, ideally, learn to perform new tasks.
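
A minimal sketch of that kind of repetition-driven improvement (not Kindred's actual system) is an epsilon-greedy bandit over a few hypothetical grip strategies: every attempted pick updates the running success estimate for the strategy that was tried, so the better grips get chosen more often over time.

```python
# Epsilon-greedy bandit over assumed grip strategies. The strategy names and
# the "true" success rates used to simulate outcomes are invented for the
# example; a real system would observe outcomes from actual pick attempts.
import random

strategies = ["top_grasp", "side_grasp", "pinch_grasp"]
estimates = {s: 0.5 for s in strategies}     # running estimate of success rate
counts = {s: 0 for s in strategies}
true_success = {"top_grasp": 0.55, "side_grasp": 0.75, "pinch_grasp": 0.40}

def pick_strategy(epsilon=0.1):
    if random.random() < epsilon:            # occasionally explore
        return random.choice(strategies)
    return max(strategies, key=estimates.get)

for episode in range(2000):
    s = pick_strategy()
    outcome = 1.0 if random.random() < true_success[s] else 0.0
    counts[s] += 1
    estimates[s] += (outcome - estimates[s]) / counts[s]   # running mean

print({s: round(v, 2) for s, v in estimates.items()})
```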

But Kindred's ultimate goal is much more ambitious. It may sound counterintuitive, but Rose and Gildert think warehouses are the perfect place to start on the path toward human-level artificial intelligence. Because the US shipping marketplace is already rife with single-purpose robots, thanks in part to Amazon, there are plenty of opportunities for humans to train AI. Finding, handling, and sorting products while maneuvering in a fast-moving environment is a data gold mine for building robots that can operate in the real world.

Rose and Gildert believe the next generation of AI won't be in the form of a disembodied voice living in our phones. Rather, they believe the greatest strides will come from programs running inside a physical robot that can gain knowledge about the world and itself from the ground up, like a human infant does from birth.

Kindred is working toward what's known as artificial general intelligence, or software capable of performing any task a human being can do. Artificial general intelligence, or AGI, is sometimes referred to as strong or full AI because it exists in contrast to AI programs, like DeepMind's AlphaGo system, with very specific applications. Other, more conventional forms of weak or narrow AI include the underlying software behind Netflix and Amazon recommendations, Snapchat camera effects that rely on facial recognition, and Google's fast and accurate language translations.

These algorithms are developed by applying deep learning techniques to large-scale neural networks until they can, say, differentiate between an image of a dog and a cat. They perform one task, or perhaps many in some cases, far better than humans can. But they are extremely limited and don't learn or adapt the way humans do. The software that recognizes a sunset can't predict whether you'll like a Netflix movie or translate a sentence into Japanese. Right now, you can't ask AlphaGo to face off in chess; it doesn't know the rules and wouldn't know how to begin learning them.

Kindred thinks our physical body is intrinsic to the secrets of human cognition

This is the fundamental challenge of AGI: how to create an intelligent system, the kind we know only from science fiction, that can truly learn on its own without needing to be fed thousands of examples and trained over the course of weeks or months.

The biggest names in AI research, like DeepMind, are focused on game-playing because it seems to be the most viable path forward. After all, if you can teach software to play Pong, perhaps it can take the lessons learned and apply them to Breakout? This applied knowledge approach, which mimics the way a human player can quickly intuit the rules of a new game, has proven promising.

For instance, AlphaGo Master, DeepMind's latest Go system that just bested world champion Ke Jie, now effectively teaches itself how to play better. "One of the things we're most excited about is not just that it can play Go better, but we hope that this'll actually lead to technologies that are more generally applicable to other challenging domains," DeepMind co-founder and CEO Demis Hassabis said at the event last week.

Yet for Kindred's founders, the quest to crack the secret of human cognition can't be separated from our physical bodies. "Our founding belief was that in order to make real progress toward the original objectives of AI, you needed to start by grounding your ideas in the physical world," Rose says. "And that means robots, and robots with sensors that can look around, touch, hear the world that surrounds them."

This body-first approach to AI is based on a theory called embodied cognition, which suggests that the interplay between our brain, body, and the physical world is what produces elements of consciousness and the ability to reason. (A fun exercise here is thinking about how many common metaphors have physical underpinnings, like thinking of affection as warmth or something inconceivable as being over your head.) Without understanding how the brain developed to control the body and guide functions like locomotion and visual processing, the theory goes, we may never be able to reproduce it artificially.

The body-first approach to AI is based on a theory called embodied cognition

Other than Kindred, work on AI and embodied cognition mostly happens in the research divisions of large tech companies and academia. For example, Pieter Abbeel, who leads development on the Berkeley Robot for the Elimination of Tedious Tasks (BRETT), aims to create robots that can learn much like young children do.

By giving its robot sensory abilities and motor functions and then using AI training techniques, the BRETT team devised a way for it to acquire knowledge and physical skills much faster than with standard programming, and with the flexibility to keep learning. Much like how babies constantly adjust their behavior when attempting something new, BRETT approaches unique problems, fails at first, and then adjusts over repeated attempts and under new constraints. Abbeel's team even uses children's toys to test BRETT's aptitude for problem solving.

OpenAI, the nonprofit funded by SpaceX and Tesla CEO Elon Musk, is working on both general-purpose game-playing algorithms and robotics, under the notion that both avenues are complementary. Helping the team is Abbeel, who is on leave from Berkeley to help OpenAI make progress fusing AI learnings with modern robotics. "The interesting thing about robotics is that it forces us to deal with the actual data we would want an intelligent agent to deal with," says Josh Tobin, a graduate student at Berkeley who works on robotics at OpenAI.

Applying AI to real-world tasks like picking up objects and stacking blocks involves tackling a whole suite of new problems, Tobin says, like managing unfamiliar textures and replicating minute motor movements. Solving them is necessary if we're ever to deploy intelligent robots beyond factory floors.

Wojciech Zaremba, who leads OpenAI's robotics work, says that a holy grail of sorts would be a general-purpose robot powered by AI that can learn a new task (scrambling eggs, for instance) by watching someone do it just once. This is why OpenAI is working on teaching robots new skills that are first demonstrated by a human in a simulated VR environment, much like a video game, where it's much easier and less costly to produce and collect data.

"You could imagine that, as a final outcome, if it's doable, you have files online of recordings of various tasks," Zaremba says. "And then if you want the robot to replicate this behavior, you just download the file."
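
One very rough sketch of that "download the file and replay it" idea: store a recorded trace of (observation, action) pairs, then have the robot look up the closest recorded observation to its current one and reuse that action. The observation format and the demonstration data below are invented for illustration.

```python
# Nearest-neighbour replay of a recorded demonstration. The trace pretends to
# describe a gripper moving toward an object; all numbers are made up.
import math

demonstration = [
    # (gripper_x, gripper_y, object_x, object_y) -> (dx, dy, close_gripper)
    ((0.0, 0.0, 0.5, 0.5), (0.1, 0.1, 0)),
    ((0.1, 0.1, 0.5, 0.5), (0.1, 0.1, 0)),
    ((0.4, 0.4, 0.5, 0.5), (0.1, 0.1, 0)),
    ((0.5, 0.5, 0.5, 0.5), (0.0, 0.0, 1)),   # at the object: close the gripper
]

def replay_policy(observation):
    """Return the action taken at the most similar recorded observation."""
    _, action = min(demonstration, key=lambda pair: math.dist(pair[0], observation))
    return action

print(replay_policy((0.42, 0.38, 0.5, 0.5)))   # -> (0.1, 0.1, 0)
```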

When I first operated the Orb, on an April afternoon in Kindred's San Francisco warehouse space, a group of six or so engineers were scattered about, testing the robotic arms with various pink-colored bins of products: vitamin bottles, soft plastic cylinders of Lysol cleaning wipes, rolls of paper towels.

The Orb is designed to help sort these objects in a large heap inside its glass container, while the arm sits affixed to the roof of the container. First, an operator wearing a VR headset moves the arm to a desired object, lowers the gripper, and adjusts the two clamps until a firm grip is established. Then the human can simply let go. Kindred has already automated the process of lifting the object in the air, scanning the barcode, and sorting it into the necessary bin.

Using the Orb resembles operating a video game version of a toy claw machine

"In any gigantic warehouse, people have to walk around and pick up things," says George Babu, Kindred's chief product officer. "The most efficient way to do that is to pick up a whole bunch of different things at the same time. Those go to someplace where you have them separated. Our robot does that job in the middle." The idea is that warehouse workers can dump a bunch of products into the Orb, while a remote operator works with the robot to sort them.

Amazon is working on something similar, and the company now holds an annual picking challenge to spur development in industrial robots that are capable of handling and sorting physical items. Kindred is quick to recognize Amazon's prowess in this department. "In the fulfillment world, Amazon uses a different set of approaches than all of the other fulfillment provisioners. They have the scale, the scope, and the know-how to implement end-to-end systems that are very effective at what they do," Rose says. But he thinks Amazon is likely to keep this technology to itself. "The advancements that Amazon makes toward doing this job well don't benefit all of their competitors."

Kindred's system, on the other hand, is designed to integrate into existing warehouse tools. Last month, Kindred finished its first deployable devices, and it created "more demand than we anticipated," according to Jim Liefer, Kindred's chief operating officer, though he won't disclose any initial customers.

I was surprised, when using the Orb with a Vive headset, by just how much it resembles a video game. Think of a toy claw machine, where the second the clamp touches down on an object, the automated process takes over and the arm springs to life with an uncanny jerkiness. It makes sense, considering Kindred built its depth-sensing system using the game engine Unity.

Kindred imagines future versions of the Orb being affixed to sliding rails or bipedal roaming robots

Max Bennett, Kindred's robotics product manager, says that the process is designed so that human warehouse workers can operate multiple Orbs simultaneously, gripping objects and letting the software take the reins before cycling to the next setup. Kindred imagines future versions of the robotic arm being affixed to sliding overhead rails or maybe even to bipedal robots that roam the floor. There is also a point at which the Vive is no longer necessary. "Nobody's going to want to use a VR headset all day," Bennett tells me, suggesting that an Xbox controller or even just a computer mouse will do in the future.

As for how the Orb might impact jobs, Babu says there will be a need for human labor for quite some time. He's partly right: Amazon hired 100,000 workers in the last year alone, and plans to hire 100,000 more this year, mostly in warehouse and other fulfillment roles. But systems like the Orb raise the possibility that fewer jobs will be needed as the work becomes more a matter of assisting and operating robots.

"My view is that the humans will all move on to different work in the stream," Babu says.

Still, Forrester Research predicts that automation will result in 25 million jobs lost over the next decade, with only 15 million new jobs created. The end goals of automation have always been to reduce costs and improve efficiency, and that will inevitably mean the disappearance of certain types of labor.

Kindred is unique in the AI field not just for its robotics focus, but also because it's diving head first into the industrial world with a commercial product. Many of the big tech companies working on AI are doing so with huge research organizations, like Facebook AI Research and Google Brain. These teams are filled with academics and engineers who work on abstract problems that then help inform real software features deployed to millions of consumers.

Kindred, as a startup, can't afford this approach. "Day one we said: We're going to find a big market. We're going to build a wildly successful product for that initial market, and build a business by executing along that path first with one vertical and then maybe others," Rose explains. He adds that his experience with D-Wave, which raised more than $150 million over the course of more than a decade just to release its first product, inspired him to seek out a different approach to tackling big-picture problems.

Gildert and Rose don't want to rely solely on venture capital funding to build Kindred

"You have this quandary that doing it right is going to take a long time, on the order of decades. How do you sustain that organization for that length of time without all the negative side effects of raising a lot of rounds of VC?" Rose says. "The answer is that you have to create a real business that is cash-flow positive very early." Kindred has raised $15 million in funding thus far from Eclipse, GV, Data Collective, and a number of other investors. But Rose stresses that the company's focus is to become profitable with the Orb, and that this will help it in its main objective.

That objective, since the beginning, has been human-level AI with a focus on what Gildert calls in-body cognition, or the type of thought processes that only arise from giving AI a physical shell. "Intelligence absent a body is not what we think it means," she says. "Intelligence with a body brings to it a number of constraints that are not there when you think about intelligence in a virtual environment. We certainly don't believe you can build a chatbot without a human-like body and expect it to pass [for a human]."

"Brains evolved to control bodies," Rose adds. "And all these things that we think about as being the beautiful stuff that comes from cognition, they're all side effects of this."

See the rest here:

The next big leap in AI could come from warehouse robots - The Verge

Posted in Ai | Comments Off on The next big leap in AI could come from warehouse robots – The Verge

Will China own the future of AI? – The Week Magazine

Posted: at 10:39 pm

In the 1982 film Firefox, Clint Eastwood plays an Air Force pilot and Vietnam vet on a secret mission to steal an advanced Soviet fighter jet. The airplane is super fast, radar invisible, and can be controlled by thought (as long as those thoughts are in Russian). "Yeah, I can fly it," Eastwood says. "I'm the best there is."

Two years later, Tom Clancy published The Hunt for Red October, later made into a film starring Alec Baldwin and Sean Connery. In this thriller, the revolutionary piece of Soviet technology is a super quiet nuclear submarine, almost undetectable by sonar.

Both pieces are fascinating Cold War artifacts playing off fears that the Soviet Union would manage a military version of Sputnik, leapfrogging U.S. tech and giving Moscow the decisive upper hand against the West. In reality, of course, the opposite was happening.

What if these two films were, as Hollywood puts it, "reimagined" for today's audiences? The tech MacGuffin would likely be Chinese artificial intelligence. Imagine Jason Bourne sneaking into China to download super intelligent software that would make that country's military and economy dominant. Or maybe he would kidnap a key Chinese computer scientist and bring him back stateside for interrogation.

Such a film would have an obvious "ripped from the headlines" feel about it. Specifically, headlines like this one from last weekend's New York Times: "Is China outsmarting America in AI?" Reporters Paul Mozur and John Markoff declare that the "balance of power in technology is shifting," with China perhaps "only a step behind the United States" in artificial intelligence.

And as Beijing readies new multibillion-dollar research initiatives, what is America doing? "China is spending more just as the United States cuts back," the Times journalists write. Indeed, the new Trump administration budget proposal would sharply reduce funding for U.S. government agencies responsible for federal AI research. For instance, the piece notes, budget cuts could potentially reduce the National Science Foundation's spending on "intelligent systems" by 10 percent, to about $175 million.

It is unlikely that Congress would ever pass a budget with such draconian cuts, especially since wonks and policymakers on the left and right see basic science research as a proper and necessary role for government. Then again, Washington hasn't really been acting like science is an important national priority. As a share of the federal budget, basic science research has declined by two-thirds since the 1960s.

President Trump's proposed cuts are particularly striking since the just-departed Obama administration saw AI as critical technology with "incredible potential to help America stay on the cutting edge of innovation." Striking, but not surprising given that candidate Trump didn't even have a technology policy agenda. And what passed for an industrial strategy focused on reviving American steel manufacturing and coal mining. Perhaps America First doesn't really apply to science.

Of course, some conservative budgeteers advising the Trump White House argue that inefficient and speculative public investment "crowds out" private investment that is more likely to pay off in practical advances. But no one has apparently informed Eric Schmidt, executive chairman of Alphabet, the parent company of Google. The $700 billion tech giant, noted for its "moonshot" projects, is often held up as an example of how companies are where the really important research is done.

But in a recent Washington Post op-ed, Schmidt wrote that the "miracle machine" of American postwar innovation comes from the twin "interlocking engines" of the public and private sector. Without more public research investment, "we may wake up to find the next generation of technologies, industries, medicines, and armaments being pioneered elsewhere."

China is obviously far more capable of both invention and commercial application than the old Soviet Union. Its companies are already leaders in mobile tech. It's not hard to imagine why it would be better for U.S. workers to have America be the nation where the next generation of innovation is turned into amazing products and services. Plus, it would be odd for the world's leading military power not to also be the nation pushing the tech frontier. Certainly better us than an authoritarian nation that plans on using its advanced AI to enhance its ability to control its citizens, as well as enhance military capabilities.

America must spend more, maybe a lot more, on research. It should also do a better job of attracting and keeping the world's best and brightest. Let's make sure this story has a happy ending.

Read the rest here:

Will China own the future of AI? - The Week Magazine

Posted in Ai | Comments Off on Will China own the future of AI? – The Week Magazine

AI will be better than human workers at all tasks in 45 years, says Oxford University report – The Independent

Posted: at 10:39 pm

Experts believe artificial intelligence will be better than humans at all tasks within 45 years, according to a new report.

However, some think that could happen much sooner.

Researchers from the University of Oxford and Yale University have revealed the results of a study surveying a larger and more representative sample of AI experts than any study to date, and they will concern people working in a wide range of industries.

Their aim was to find out how long it would be before machines became better than humans at all tasks, with the researchers using the definition: "High-level machine intelligence (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers."

According to their findings, AI will outperform humans in many activities in the near future, including translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053).

"Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans," reads the report.

Asian respondents expect HLMI in 30 years, whereas North Americans expect it in 74 years.

Ten per cent of the experts believe HLMI will arrive within nine years.

The results of the study echo comments made by Stephen Hawking and Elon Musk.

"The real risk with AI isn't malice but competence," said Professor Hawking.

"A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."

Mr Musk, meanwhile, has suggested that people could merge with machines in the future, in order to remain relevant.

Ray Kurzweil, a futurist and Google's director of engineering, has gone even further and predicted that the so-called singularity (the moment when artificial intelligence exceeds man's intellectual capacity and creates a runaway effect, which many believe will lead to the demise of the human race) is little over a decade away.

"Forty-eight percent of respondents think that research on minimizing the risks of AI should be prioritized by society more than the status quo," the AI report adds.

The full text is available here.

Read the rest here:

AI will be better than human workers at all tasks in 45 years, says Oxford University report - The Independent

Posted in Ai | Comments Off on AI will be better than human workers at all tasks in 45 years, says Oxford University report – The Independent

Is AI the end of jobs or a new beginning? – Washington Post

Posted: at 10:39 pm

Artificial Intelligence (AI) is advancing so rapidly that even its developers are being caught off guard. Google co-founder Sergey Brin said in Davos, Switzerland, in January that it "touches every single one of our main projects, ranging from search to photos to ads ... everything we do ... It definitely surprised me, even though I was sitting right there."

The long-promised AI, the stuff we've seen in science fiction, is coming, and we need to be prepared. Today, AI is powering voice assistants such as Google Home, Amazon Alexa and Apple Siri, allowing them to have increasingly natural conversations with us and manage our lights, order food and schedule meetings. Businesses are infusing AI into their products to analyze vast amounts of data and improve decision-making. In a decade or two, we will have robotic assistants that remind us of Rosie from The Jetsons and R2-D2 of Star Wars.

This has profound implications for how we live and work, for better and worse. AI is going to become our guide and companion and take millions of jobs away from people. We can deny this is happening, be angry, or simply ignore it. But if we do, we will be the losers. As I discussed in my new book, The Driver in the Driverless Car, technology is now advancing on an exponential curve and making science fiction a reality. We can't stop it. All we can do is understand it and use it to better ourselves and humanity.

Rosie and R2-D2 may be on their way, but AI is still very limited in its capability, and will be for a long time. The voice assistants are examples of what technologists call narrow AI: systems that are useful, can interact with humans, and bear some of the hallmarks of intelligence, but would never be mistaken for a human. They can, however, do a better job on a very specific range of tasks than humans can. I couldn't, for example, recall the winning and losing pitcher in every major-league baseball game from the previous night.

Narrow-AI systems are much better than humans at accessing information stored in complex databases, but their capabilities exclude creative thought. If you asked Siri to find the perfect gift for your mother for Valentine's Day, she might make a snarky comment but couldn't venture an educated guess. And if you asked her to write your term paper on the Napoleonic Wars, she couldn't help. That is where the human element comes in and where the opportunities are for us to benefit from AI and stay employed.

In his book Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, chess grandmaster Garry Kasparov tells of his shock and anger at being defeated by IBM's Deep Blue supercomputer in 1997. He acknowledges that he is a sore loser but was clearly traumatized by having a machine outsmart him. He was aware of the evolution of the technology but never believed it would beat him at his own game. After coming to grips with his defeat, 20 years later, he says fail-safes are required, but so is courage.

Kasparov wrote: "When I sat across from Deep Blue twenty years ago I sensed something new, something unsettling. Perhaps you will experience a similar feeling the first time you ride in a driverless car, or the first time your new computer boss issues an order at work. We must face these fears in order to get the most out of our technology and to get the most out of ourselves. Intelligent machines will continue that process, taking over the more menial aspects of cognition and elevating our mental lives toward creativity, curiosity, beauty, and joy. These are what truly make us human, not any particular activity or skill like swinging a hammer or even playing chess."

In other words, we better get used to it and ride the wave.

Human superiority over animals is based on our ability to create and use tools. The mental capacity to make things that improved our chances of survival led to a natural selection of better toolmakers and tool users. Nearly everything a human does involves technology. For adding numbers, we used abacuses and mechanical calculators and now spreadsheets. To improve our memory, we wrote on stones, parchment and paper, and now have disk drives and cloud storage.

AI is the next step in improving our cognitive functions and decision-making.

Think about it: When was the last time you tried memorizing your calendar or Rolodex, or used a printed map? Just as we instinctively do everything on our smartphones, we will rely on AI. We may have forfeited skills such as the ability to add up the price of our groceries, but we are smarter and more productive. With the help of Google and Wikipedia, we can be experts on any topic, and these don't make us any dumber than encyclopedias, phone books and librarians did.

A valid concern is that dependence on AI may cause us to forfeit human creativity. As Kasparov observes, the chess programs on our smartphones are many times more powerful than the supercomputer that defeated him, yet this didn't cause human chess players to become less capable; the opposite happened. There are now stronger chess players all over the world, and the game is played in a better way.

As Kasparov explains: "It used to be that young players might acquire the style of their early coaches. If you worked with a coach who preferred sharp openings and speculative attacking play himself, it would influence his pupils to play similarly. What happens when the early influential coach is a computer? The machine doesn't care about style or patterns or hundreds of years of established theory. It counts up the values of the chess pieces, analyzes a few billion moves, and counts them up again. It is entirely free of prejudice and doctrine. The heavy use of computers for practice and analysis has contributed to the development of a generation of players who are almost as free of dogma as the machines with which they train."

Perhaps this is the greatest benefit that AI will bring: humanity can be freed of dogma and historical bias and can make more intelligent decisions. And instead of doing repetitive data analysis and number crunching, human workers can focus on enhancing their knowledge and being more creative.

See more here:

Is AI the end of jobs or a new beginning? - Washington Post

Posted in Ai | Comments Off on Is AI the end of jobs or a new beginning? – Washington Post
