
Category Archives: Ai

Meet Penny, an AI That Predicts a Neighborhood’s Wealth From … – WIRED

Posted: June 29, 2017 at 11:16 am

[The syndicated article body was corrupted in transmission and is unrecoverable; see the source link below.]

Visit link:

Meet Penny, an AI That Predicts a Neighborhood's Wealth From ... - WIRED

Posted in Ai | Comments Off on Meet Penny, an AI That Predicts a Neighborhood’s Wealth From … – WIRED

Instagram now uses AI to block offensive comments – The Verge

Posted: at 11:16 am

Instagram is introducing an enhanced comment filter today meant to wipe out nasty remarks using AI. The app first began offering a comment filter last September, but it was a very simple approach: Instagram would only remove comments that contained words and phrases it had specifically identified as offensive. (Users could also add their own custom banned phrases.)

Now, the system is getting a lot smarter. It uses machine learning to identify comments that seem offensive, giving the system some ability to take into account the reply's context, potentially catching more bad comments and cutting down on false positives at the same time. Wired has a big story on how the system was made, and it mentions that when a comment gets flagged, it'll be blocked for everyone except the person who wrote it, so they won't know their remark didn't get through.

An AI spam filter has secretly been in place since October

One other notable change here: Instagram is turning the offensive comment filter on by default, whereas the earlier filter had to be enabled. You'll still be given the option to turn it off from inside the app's settings, and Instagram still includes the ability to block custom words and phrases.

The filter only works in English at launch, but Instagram says it's working to expand it to other languages over time.

Instagram is also announcing an AI spam-filtering system today. The spam filter has secretly been in place since last October, but it's only being revealed today. Given that no one has noticed it in the past nine months, the filter probably isn't blocking too many comments that it shouldn't. That filter is active in nine languages, including English. (As a side note: Instagram really needs a better system for blocking spam accounts, as well. I set my profile to private recently in order to cut down on spam followers, but now I'm just getting follow requests from spam accounts instead.)

View original post here:

Instagram now uses AI to block offensive comments - The Verge

Posted in Ai | Comments Off on Instagram now uses AI to block offensive comments – The Verge

This is why AI shouldn’t design inspirational posters – CNET – CNET

Posted: at 11:16 am

Inspirational posters have their place. But if you're not the kind of person to take workplace spark from a beautiful photograph of a random person canoeing at twilight or an eagle soaring, you might want to turn the poster-making over to an artificial intelligence.

An AI dubbed InspiroBot, brought to our attention by IFL Science, puts together some of the most bizarre (and thus delightful) inspirational posters around.

This one's probably not a good idea for either a stranger or a friend.

The dog's cute, but this isn't great advice either.

Hard to argue with this one, which is kinda Yoda-esque.

Hey! Who you callin' "desperate"?

This bot obviously doesn't know many LARPers, or hang around at Renaissance Faires.

The bot's posters fall in between Commander Data trying to offer advice and a mistranslated book of quaint sayings. And they're mostly fun. Except sometimes, when the AI gets really dark and it's time to leave the site entirely and Google kittens fighting themselves in the mirror.

View original post here:

This is why AI shouldn't design inspirational posters - CNET - CNET

Posted in Ai | Comments Off on This is why AI shouldn’t design inspirational posters – CNET – CNET

AI enters the hospital room – CNET

Posted: at 11:16 am

Imagine you're stuck in a hospital bed after having surgery. You can't even close the window blinds without a nurse's help. And you can forget about requesting a blanket to take off the chill or getting details on visiting hours when everyone's busy handling more-pressing matters.

You feel powerless.

But what if you got what you needed just by saying it? You could instantly open the blinds, find out more about your doctor's expertise or turn up the room temperature. Sounds great, right? All you'd need is one of today's digital voice assistants that constantly listen for a request, send your query to the internet and either answer your question or complete a task.

Unfortunately, you can't do that right now with the current crop of smart assistants like Apple's Siri, Amazon's Alexa and Google's Assistant because they can't satisfy hospitals' privacy and security requirements. Yet according to Bret Greenstein, vice president of IBM's Watson Internet of Things platform, some medical staff can spend nearly 10 percent of their time with patients answering questions about lunch, physician credentials and visiting hours. If a smart speaker can answer those questions, doctors and nurses could spend more time on patient care.

Harman's JBL clock radio packs smarts from IBM's AI technology to help patients get information and control their hospital room's lighting and temperature.

It's why Thomas Jefferson University Hospitals in Philadelphia decided to work with audio giant Harman and IBM's Watson artificial intelligence technology. Together, they developed smart speakers that will respond to about a dozen commands. When a patient says "Watson," the speakers can, for instance, play calming sounds and adjust the room's lighting, thermostat and blinds.

"This is a way for patients to get some simple comfort measures addressed just by speaking," says Dr. Andrew Miller, associate chief medical officer at the Philadelphia hospital group. "How great is that?"

For the hospital, it's just the beginning.

Like Amazon's popular Echo speaker, Harman's JBL clock radio packs smarts that respond to command words it hears spoken.

Jefferson Hospital experimented with Amazon's popular Echo speaker, but found the hospital couldn't simultaneously control multiple speakers from one management system. What's more, the Echo couldn't access the hospital's secure Wi-Fi network, and it didn't have the right "skills," or capabilities, for a medical environment.

Dr. Andrew Miller

"It would have done simple things people are used to doing in the home, but not the things we wanted to do," says Neil Gomes, the hospital's chief digital officer.

So late last year, Jefferson Hospital started testing five prototype speakers that Harman made using the external casing of a regular JBL cylindrical speaker and components specially designed for artificial intelligence.

The initial trial tested two models. One required patients to press a button to wake up the device, getting around privacy concerns of an ever-listening microphone. The other woke when someone said "Watson," the name of IBM's AI technology that won the $1 million first-place prize on "Jeopardy" in 2011.

"The button gives a sense of privacy, but it proved to be very frustrating to users because they had to keep pushing it," says Greenstein.

Harman's JBL smart speakers have gone through a range of shapes and sizes.

The newest speakers, now built into Harman's round JBL clock radios, rely solely on voice commands. The hospital is testing about 40 of the new speakers, with IBM and Harman tweaking the smarts as they go. The speakers also tie into the hospital's automated facilities management system, which lets administrators control things like heating, air conditioning and lighting online. That's a convenience for everyone.

"When my father-in-law was in the hospital, we had to talk to the nurse about adjusting the thermostat," says Kevin Hague, vice president of technology strategy at Harman. "It was absurd that we had to have an RN come in and figure out on the computer how to adjust the temperature."

As of this writing, the hospital hadn't decided if it would stick with "Watson" or go with some other wake-up word, like "Jefferson."

It's fair to say we'd rather voice assistants do our bidding in a hotel room instead of in a hospital.

Some hotels are exploring that option and finding that off-the-shelf digital assistants work just fine.

Marriott, for instance, has been testing Apple's Siri and Amazon's Alexa at an Aloft Hotel in Boston. The hotel installed iPad tablets and Echo speakers in 10 rooms, letting guests speak commands to control the TV and adjust the lighting. That sounds awfully tempting considering how tough it can be sometimes to figure out which switch does what.


"The room would become an extension to your personal tech," says Toni Stoeckl, Marriott global brand leader and vice president. "I don't think we're there quite yet."

In the meantime, Jefferson Hospital, Harman and IBM are working on ways to teach their smart speaker to branch out beyond simple tasks. The possibilities are intriguing. Maybe Watson could follow you home to make sure you're taking your medication correctly. Or it could prompt you to take a walk so you could heal faster, easily change pharmacies or arrange follow-up appointments.

Right now, the speakers don't need regulatory approval, although that could change if they provide information about your diagnosis or explain your medications.

No matter how the hospital ends up using them, one thing is certain. It sucks being in a hospital. Having a little control over your environment could make it suck a little less.

This story appears in the summer 2017 edition of CNET Magazine.


Excerpt from:

AI enters the hospital room - CNET

Posted in Ai | Comments Off on AI enters the hospital room – CNET

A Brutal Intelligence: AI, Chess, and the Human Mind – lareviewofbooks

Posted: at 11:16 am

JUNE 29, 2017

CHESS IS THE GAME not just of kings but of geniuses. For hundreds of years, it has served as standard and symbol for the pinnacles of human intelligence. Staring at the pieces, lost to the world, the chess master seems a figure of pure thought: brain without body. It's hardly a surprise, then, that when computer scientists began to contemplate the creation of an artificial intelligence in the middle years of the last century, they adopted the chessboard as their proving ground. To build a machine able to beat a skilled human player would be to fabricate a mind. It was a compelling theory, and to this day it shapes public perceptions of artificial intelligence. But, as the former world chess champion Garry Kasparov argues in his illuminating new memoir Deep Thinking, the theory was flawed from the start. It reflected a series of misperceptions about chess, about computers, and about the mind.

At the dawn of the computer age, in 1950, the influential Bell Labs engineer Claude Shannon published a paper in Philosophical Magazine called "Programming a Computer for Playing Chess." The creation of a tolerably good computerized chess player, he argued, was not only possible but would also have metaphysical consequences. It would force the human race "either to admit the possibility of a mechanized thinking or to further restrict [its] concept of thinking." He went on to offer an insight that would prove essential both to the development of chess software and to the pursuit of artificial intelligence in general. A chess program, he wrote, would need to incorporate a search function able to identify possible moves and rank them according to how they influenced the course of the game. He laid out two very different approaches to programming the function. Type A would rely on brute force, calculating the relative value of all possible moves as far ahead in the game as the speed of the computer allowed. Type B would use intelligence rather than raw power, imbuing the computer with an understanding of the game that would allow it to focus on a small number of attractive moves while ignoring the rest. In essence, a Type B computer would demonstrate the intuition of an experienced human player.

When Shannon wrote his paper, he and everyone else assumed that the Type A method was a dead end. It seemed obvious that, under the time restrictions of a competitive chess game, a computer would never be fast enough to extend its analysis more than a few turns ahead. As Kasparov points out, there are over 300 billion possible ways to play just the first four moves in a game of chess, and even if 95 percent of these variations are terrible, a Type A program would still have to check them all. In 1950, and for many years afterward, no one could imagine a computer able to execute a successful brute-force strategy against a good player. "Unfortunately," Shannon concluded, "a machine operating according to the Type A strategy would be both slow and a weak player."
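The Type A idea, exhaustively scoring every line of play, can be sketched in a few lines. The sketch below uses an invented toy game rather than chess (5 stones, take 1 or 2 per turn, whoever takes the last stone wins), since real engines need pruning and evaluation heuristics that this illustration deliberately omits:

```python
# A minimal sketch of Shannon's "Type A" brute-force search: recursively
# evaluate every possible continuation of a toy game to its end.
# The game and all names here are illustrative, not from any chess engine.

def minimax(stones, maximizing):
    """Return +1 if the maximizing player can force a win, else -1."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2) if m <= stones]
    scores = [minimax(stones - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the move whose resulting position scores best for the mover."""
    moves = [m for m in (1, 2) if m <= stones]
    return max(moves, key=lambda m: minimax(stones - m, False))

print(best_move(5))  # taking 2 leaves 3 stones, a lost position for the opponent
```

Even in this tiny game the search visits every branch; scaling the same idea to chess is exactly the speed problem Shannon thought fatal.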

Type B, the intelligence strategy, seemed far more feasible, not least because it fit the scientific zeitgeist. Fascination with digital computers intensified during the 1950s, and the so-called thinking machines began to influence theories about the human mind. Many scientists and philosophers came to assume that the brain must work something like a digital computer, using its billions of networked neurons to calculate thoughts and perceptions. Through a curious kind of circular logic, this analogy in turn guided the early pursuit of artificial intelligence: if you could figure out the codes that the brain uses in carrying out cognitive tasks, you'd be able to program similar codes into a computer. Not only would the machine play chess like a master, but it would also be able to do pretty much anything else that a human brain can do. In a 1958 paper, the prominent AI researchers Herbert Simon and Allen Newell declared that computers are "machines that think" and that, in the near future, "the range of problems they can handle will be coextensive with the range to which the human mind has been applied." With the right programming, a computer would turn sapient.

It took only a few decades after Shannon wrote his paper for engineers to build a computer that could play chess brilliantly. Its most famous victim: Garry Kasparov.

One of the greatest and most intimidating players in the history of the game, Kasparov was defeated in a six-game bout by the IBM supercomputer Deep Blue in 1997. Even though it was the first time a machine had beaten a world champion in a formal match, to computer scientists and chess masters alike the outcome wasn't much of a surprise. Chess-playing computers had been making strong and steady gains for years, advancing inexorably up the ranks of the best human players. Kasparov just happened to be in the right place at the wrong time.

But the story of the computer's victory comes with a twist. Shannon and his contemporaries, it turns out, had been wrong. It was the Type B approach, the intelligence strategy, that ended up being the dead end. Despite their early optimism, AI researchers utterly failed in getting computers to think as people do. Deep Blue beat Kasparov not by matching his insight and intuition but by overwhelming him with blind calculation. Thanks to years of exponential gains in processing speed, combined with steady improvements in the efficiency of search algorithms, the computer was able to comb through enough possible moves in a short enough time to outduel the champion. Brute force triumphed. "It turned out that making a great chess-playing computer was not the same as making a thinking machine on par with the human mind," Kasparov reflects. "Deep Blue was intelligent the way your programmable alarm clock is intelligent."

The history of computer chess is the history of artificial intelligence. After their disappointments in trying to reverse-engineer the brain, computer scientists narrowed their sights. Abandoning their pursuit of human-like intelligence, they began to concentrate on accomplishing sophisticated, but limited, analytical tasks by capitalizing on the inhuman speed of the modern computer's calculations. This less ambitious but more pragmatic approach has paid off in areas ranging from medical diagnosis to self-driving cars. Computers are replicating the results of human thought without replicating thought itself. If in the 1950s and 1960s the emphasis in the phrase "artificial intelligence" fell heavily on the word "intelligence," today it falls with even greater weight on the word "artificial."

Particularly fruitful has been the deployment of search algorithms similar to those that powered Deep Blue. If a machine can search billions of options in a matter of milliseconds, ranking each according to how well it fulfills some specified goal, then it can outperform experts in a lot of problem-solving tasks without having to match their experience or insight. More recently, AI programmers have added another brute-force technique to their repertoire: machine learning. In simple terms, machine learning is a statistical method for discovering correlations in past events that can then be used to make predictions about future events. Rather than giving a computer a set of instructions to follow, a programmer feeds the computer many examples of a phenomenon and from those examples the machine deciphers relationships among variables. Whereas most software programs apply rules to data, machine-learning algorithms do the reverse: they distill rules from data, and then apply those rules to make judgments about new situations.
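That reversal, rules distilled from labeled examples rather than written by hand, can be sketched in miniature. The messages, labels, and simple word-count scoring below are invented for illustration; they stand in for the vastly larger training sets and real statistical models the essay describes:

```python
# A toy sketch of "distilling rules from data": instead of hand-coding
# which words mark a message as spam, tally how often each word appears
# in labeled examples and let those counts act as the learned rule.
from collections import Counter

examples = [
    ("win a free prize now", "spam"),
    ("free money click now", "spam"),
    ("meeting moved to noon", "ham"),
    ("lunch at noon today", "ham"),
]

# "Training": count word occurrences separately for each label.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in examples:
    counts[label].update(text.split())

def classify(text):
    """Score a new message by how many of its words each label has seen."""
    scores = {label: sum(c[w] for w in text.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

print(classify("free prize now"))  # the learned counts favor "spam"
print(classify("noon meeting"))    # the learned counts favor "ham"
```

No rule about "free" or "noon" was ever written down; the classifier's behavior comes entirely from the example data, which is the essence of the machine-learning approach described above.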

In modern translation software, for example, a computer scans many millions of translated texts to learn associations between phrases in different languages. Using these correspondences, it can then piece together translations of new strings of text. The computer doesn't require any understanding of grammar or meaning; it just regurgitates words in whatever combination it calculates has the highest odds of being accurate. The result lacks the style and nuance of a skilled translator's work but has considerable utility nonetheless. Although machine-learning algorithms have been around a long time, they require a vast number of examples to work reliably, which only became possible with the explosion of online data. Kasparov quotes an engineer from Google's popular translation program: "When you go from 10,000 training examples to 10 billion training examples, it all starts to work. Data trumps everything."
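A toy version of that phrase-matching idea makes the "no understanding required" point concrete. The three-entry phrase table below is invented; a real system would derive millions of such entries, each weighted by probability, from aligned bilingual text:

```python
# A toy illustration of phrase-based translation: look up each chunk of
# the input in a table of phrase correspondences. The table here is
# hand-written for illustration; real systems learn it statistically.
phrase_table = {
    "good morning": "buenos días",
    "my friend": "mi amigo",
    "thank you": "gracias",
}

def translate(sentence):
    """Greedily match the longest known phrase at each position."""
    words = sentence.split()
    out, i = [], 0
    while i < len(words):
        # Try the longest candidate phrase first, then shorter ones.
        for n in range(len(words) - i, 0, -1):
            chunk = " ".join(words[i:i + n])
            if chunk in phrase_table:
                out.append(phrase_table[chunk])
                i += n
                break
        else:
            out.append(words[i])  # unknown words pass through unchanged
            i += 1
    return " ".join(out)

print(translate("good morning my friend"))  # buenos días mi amigo
```

The function never parses grammar or consults meaning; it only stitches together the statistically likeliest fragments, which is exactly the strength and the limitation described above.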

The pragmatic turn in AI research is producing many such breakthroughs, but this shift also highlights the limitations of artificial intelligence. Through brute-force data processing, computers can churn out answers to well-defined questions and forecast how complex events may play out, but they lack the understanding, imagination, and common sense to do what human minds do naturally: turn information into knowledge, think conceptually and metaphorically, and negotiate the world's flux and uncertainty without a script. Machines remain machines.

That fact hasn't blunted the public's enthusiasm for AI fantasies. Along with TV shows and movies featuring scheming computers and bloody-minded robots, we've seen a slew of earnest nonfiction books with titles like Superintelligence, Smarter Than Us, and Our Final Invention, all suggesting that machines will soon be brainier than we are. The predictions echo those made in the 1950s and 1960s, and, as before, they're founded on speculation, not fact. Despite monumental advances in hardware and software, computers give no sign of being any nearer to self-awareness, volition, or emotion. Their strength, what Kasparov describes as an amnesiac's objectivity, is also their weakness.

In addition to questioning the common wisdom about artificial intelligence, Kasparov challenges our preconceptions about chess. The game, particularly when played at its highest levels, is far more than a cerebral exercise in logic and calculation, and the expert player is anything but a stereotypical egghead. The connection between chess skill and the kind of intelligence measured by IQ scores, Kasparov observes, is weak at best. "There is no more truth to the thought that all chess players are geniuses than in saying that all geniuses play chess," he writes. "[O]ne of the things that makes chess so interesting is that it's still unclear exactly what separates good chess players from great ones."

Chess is a grueling sport. It demands stamina, resilience, and an aptitude for psychological warfare. It also requires acute sensory perception. "Move generation seems to involve more visuospatial brain activity than the sort of calculation that goes into solving math problems," writes Kasparov, referring to recent neurological experiments. To the chess master, the board's 64 squares define not just an abstract geometry but an actual terrain. Like figures on a landscape, the pieces form patterns that the master, drawing on years of experience, reads intuitively, often at a glance. Methodical analysis is important, too, but it is carried out as part of a multifaceted and still mysterious thought process involving the body and its senses as well as the brain's neurons and synapses.

The contingency of human intelligence, the way it shifts with health, mood, and circumstance, is at the center of Kasparov's account of his historic duel with Deep Blue. Having beaten the machine in a celebrated match a year earlier, the champion enters the 1997 competition confident that he will again come out the victor. His confidence swells when he wins the first game decisively. But in the fateful second game, Deep Blue makes a series of strong moves, putting Kasparov on the defensive. Rattled, he makes a calamitous mental error. He resigns the game in frustration after the computer launches an aggressive and seemingly lethal attack on his queen. Only later does he realize that his position had not been hopeless; he could have forced the machine into a draw. The loss leaves Kasparov confused and in agony, unable to regain his emotional bearings. Though the next three games end in draws, Deep Blue crushes him in the sixth and final game to win the match.

One of Kasparov's strengths as a champion had always been his ability to read the minds of his adversaries and hence anticipate their strategies. But with Deep Blue, there was no mind to read. The machine's lack of personality, its implacable blankness, turned out to be one of its greatest advantages. It disoriented Kasparov, breeding doubts in his mind and eating away at his self-confidence. "I didn't know my opponent at all," he recalls. "This intense confusion left my mind to wander to darker places." The irony is that the machine's victory was as much a matter of psychology as of skill.[1]

If Kasparov hadn't become flustered, he might have won the 1997 match. But that would have just postponed the inevitable. By the turn of the century, the era of computer dominance in chess was well established. Today, not even the grandest of grandmasters would bother challenging a computer to a match. They know they wouldn't stand a chance.

But if computers have become unbeatable at the board, they remain incapable of exhibiting what Kasparov calls "the ineffable nature of human chess." To Kasparov, this is cause for optimism about the future of humanity. Unlike the eight-by-eight chessboard, the world is an unbounded place, and making sense of it will always require more than mathematical or statistical calculations. The inherent rigidity of computer intelligence leaves plenty of room for humans to exercise their flexible and intuitive intelligence. If we remain vigilant in turning the power of our computers to our own purposes, concludes Kasparov, our machines will not replace us but instead propel us to ever-greater achievements.

One hopes he's right. Still, as computers become more powerful and more adept at fulfilling our needs, there is a danger. The benefits of computer processing are easy to measure (in speed, in output, in dollars), while the benefits of human thought are often impossible to express in hard numbers. Given contemporary society's worship of the measurable and suspicion of the ineffable, our own intelligence would seem to be at a disadvantage as we rush to computerize more and more aspects of our jobs and lives. The question isn't whether the subtleties of human thought will continue to lie beyond the reach of computers. They almost certainly will. The question is whether we'll continue to appreciate the value of those subtleties as we become more dependent on the mindless but brutally efficient calculations of our machines. In the face of the implacable, the contingent can seem inferior, its strengths appearing as weaknesses.

Near the end of his book, Kasparov notes, with some regret, that humans today are starting to play chess more like computers. Once again, the ancient game may be offering us an omen.

Nicholas Carr is the author of several books about computers and culture, including The Shallows, The Glass Cage, and, most recently, Utopia Is Creepy.

[1] A bit of all-too-human deviousness was also involved in Deep Blue's win. IBM's coders, it was later revealed, programmed the computer to display erratic behavior, delaying certain moves, for instance, and rushing others, in an attempt to unsettle Kasparov. Computers may be innocents, but that doesn't mean their programmers are.

Read more:

A Brutal Intelligence: AI, Chess, and the Human Mind - lareviewofbooks

Posted in Ai | Comments Off on A Brutal Intelligence: AI, Chess, and the Human Mind – lareviewofbooks

IBM is telling Congress not to fear the rise of an AI ‘overlord’ – Recode

Posted: June 28, 2017 at 6:17 am

The brains behind IBM's Jeopardy-winning, disease-tracking, weather-mapping Watson supercomputer plan to embark on a lobbying blitz in Washington, D.C., this week, hoping to show federal lawmakers that artificial intelligence isn't going to kill jobs or humans.

To hear IBM tell it, much of the recent criticism around machine learning, robotics and other kinds of AI amounts to mere fear-mongering. The company's senior vice president for Watson, David Kenny, aims to convey that message to members of Congress beginning with a letter on Tuesday, stressing that the real disaster would be abandoning or inhibiting cognitive technology before its full potential can be realized.

Labor experts and reams of data released in recent months argue otherwise: They foretell vast economic consequences upon the mass-market arrival of AI, as entire industries are displaced, not just blue-collar jobs like trucking, as self-driving vehicles replace humans at the wheel, but white-collar positions like stock trading, too.

Others fear the privacy, security and safety implications as more tasks, from managing the country's roads to reading patients' X-ray results, are automated. The most dire warnings, from the likes of SpaceX and Tesla founder Elon Musk, include the potential arrival of robots capable of destroying mankind.

But as IBM seeks to advance and sell its AI-driven services, like Watson, the company plans to tell lawmakers those sorts of concerns are fantasy. Along with holding a private meeting with some lawmakers near Capitol Hill on Wednesday, Kenny is urging Congress to avoid reacting out of fear and pursuing proposals like Bill Gates's idea to tax robots as regulators debate how to handle this fast-growing field.

"The impact of AI is evident in the debate about its societal implications, with some fearful prophets envisioning massive job loss, or even an eventual 'AI Overlord' that controls humanity," Kenny wrote. "I must disagree with these dystopian views."

For IBM, the stakes are high: Watson and the future of what it calls cognitive technology are critical to Big Blue's business. Beyond Watson's existing work, from aiding in cancer research to funnier tasks like writing a cookbook, IBM has sought to bring its famed supercomputer to tackle some of the sprawling, data-heavy tasks of the federal government.

In some ways, though, the most vexing challenges facing AI aren't technological; they're political.

Self-driving cars, trucks and drones, for example, cant just take to the roads and skies without permission from local and federal regulators, which are only just beginning to loosen restrictions on those industries.

Others fear that automation might lead to discrimination: Under President Barack Obama, the White House spent months warning that highly powerful algorithms could share the biases of their authors, leading to unfair treatment of minorities or other disadvantaged communities in everything from obtaining a credit card to buying a house. That's why his administration in October explicitly urged Congress to help it hire more AI specialists in key government oversight roles.

And more challenging still are the economic implications of AI. It will be up to federal officials, including President Donald Trump or his successors, to grapple with untold numbers of Americans who might someday find themselves out of a job and in need of training in order to find new careers. (Trump's own Treasury secretary, however, has previously said AI is more than 50 years away from causing such disruptions.)

Sensing potential political hurdles, companies like Apple, Facebook, Google and IBM chartered a new organization, the Partnership for Artificial Intelligence, last year. In doing so, they hoped to craft ethical standards around the safe and fair operation of machine learning and robotics before government regulators in the United States or elsewhere sought to more aggressively target AI with consumer protection regulations. AI now counts among some of those tech companies' regular lobbying expenses.

And IBM, in particular, has spent years trying to tell a friendlier, less economically catastrophic story about AI in the nation's capital, a campaign that it will continue this week.

"Technological advancements have always stoked fears and concern over mass job loss," Kenny wrote in his Tuesday letter to Congress. "But history suggests that AI, similar to past revolutionary technologies, will not replace humans in the workforce."

In many cases, like cyber security and medicine, Kenny told lawmakers, it's still humans at the end of the day who can choose the best course of action when an AI system has identified a problem. He stressed that government should instead focus its attention on fixing a shortage of workers with the skills needed to work in partnership with AI systems.

For all the scrutiny facing the industry, however, some in Congress are still getting up to speed. That's why lawmakers like Rep. John Delaney, D-Md., co-founded the Congressional Artificial Intelligence Caucus, an informal organization of Democrats and Republicans studying the issue. His group is joining IBM's Kenny and other tech leaders at a private, off-record event at the Capitol Hill Club on Wednesday.

Delaney told Recode he comes to AI from the perspective that the sky is not falling: even if industries change, old jobs might still be replaced by new ones in emerging fields. With the caucus, he said, the goal is to make sure Congress is as informed on this issue as possible, so that when it inevitably has some sort of knee-jerk reaction on AI, the final response from Capitol Hill is more measured in scope.

Asked if the tech industry similarly appreciates the economic implications of its inventions, the congressman replied: "I think the [tech] industry focuses, as it should, on being at the cutting edge of innovation and creating products and services that enhance productivity and improve people's lives."

"So when they're thinking about driverless cars, do they spend more time thinking about how this will enhance productivity, and how it will [protect] safety, than the jobs that will be affected? I think the answer is probably yes, but I don't think they do it in some kind of nefarious way," Delaney said.

See original here:

IBM is telling Congress not to fear the rise of an AI 'overlord' - Recode


Blatantly ironic art show about AI makes portraits using AI – Mashable

Posted: at 6:17 am


To make the portraits, he built his own proprietary AI-assisted computer program that takes images and online filters to create stylized prints of everyday people and celebrities, like Tesla CEO Elon Musk, Facebook CEO Mark Zuckerberg, performer Kanye ...


Here is the original post:

Blatantly ironic art show about AI makes portraits using AI - Mashable


Drive.ai raises $50 million in funding; Andrew Ng joins board – Reuters

Posted: at 6:17 am

SAN FRANCISCO - Self-driving startup Drive.ai said on Tuesday it raised $50 million in a second round of funding as the Silicon Valley company prepared to deploy its technology in pilot vehicles later this year.

The company, one of a handful of startups building fully autonomous systems for cars, also said it had added to its board Andrew Ng, a prominent figure in the artificial intelligence (AI) industry.

Ng, who formerly led AI projects at Baidu and Alphabet's Google, is the husband of Drive.ai's co-founder and president Carol Reiley, a roboticist.

The latest round of funding - led by New Enterprise Associates, Inc, GGV Capital and existing investor China-based Northern Light Venture Capital - came as investor interest in autonomous vehicles continued to intensify.

Drive.ai is aiming to build an after-market software kit powered by artificial intelligence to turn traditional vehicles operated by businesses into self-driving models.

The company said existing business fleets would deploy its kits in pilot tests by year-end.

Drive.ai plans to distinguish itself through the team's expertise in robotics and deep learning, a subset of AI in which massive amounts of data are fed into systems until they can "think" for themselves.

Drive.ai, which received $12 million in an initial funding round last year, also named to its board the head of Asia for New Enterprise Associates, Carmen Chang.

(Reporting by Marc Vartabedian, editing by Alexandria Sage and Andrew Hay)


See the article here:

Drive.ai raises $50 million in funding; Andrew Ng joins board - Reuters


Volvo and Autoliv aim to sell self-driving cars with Nvidia AI tech by 2021 – TechCrunch

Posted: June 27, 2017 at 7:14 am

Volvo is forming a new joint partnership with Autoliv, called Zenuity, with a focus on developing self-driving automotive software. The plan is to eventually get to the point where they can field self-driving cars for sale, based on Nvidia's Drive PX in-car AI computing platform, by the not-so-distant target year of 2021.

That's a tall order, but Nvidia's Drive PX is already being used to power self-driving vehicles in road testing today, including Nvidia's own demonstration vehicles. Volvo and Autoliv's Zenuity will use Nvidia's AI car compute groundwork as the basis for their own software development, with the hopes of speeding up the development progress of Volvo's commercially targeted autonomous vehicles.

"The software that we're doing with them will be in some cases unique to Volvo," explained Nvidia's Senior Director of Automotive on a call. "But Autoliv also has the rights to make the software available to other automakers. I think we're starting to see, in the industry, these types of collaborations, and the opportunity to leverage from Nvidia a lot of this great work as well."

Zenuity, as a new entity, will provide the resulting self-driving software from the partnership to Volvo directly, while Autoliv will also sell the same software to third-party OEMs using its existing supply channels and relationships. It's great news for Nvidia, too, since that means their PX platform will be a key ingredient for OEMs looking to implement the system in their own vehicles.

Autoliv, a longtime safety technology supplier for the automotive industry, has been working on active safety systems including radar, vision and other ADAS tech for quite some time. But the company says that Nvidias AI platform will help it take its own autonomous and driver assistance tech to the next level.

Volvo and Nvidia had previously partnered for Volvo's Drive Me autonomous car pilot program, but this is the first time the two have announced a partnership aimed at commercial sales of vehicles.

Excerpt from:

Volvo and Autoliv aim to sell self-driving cars with Nvidia AI tech by 2021 - TechCrunch


AI could kickstart a new global arms race we need better ways to govern it before it’s too late – The Conversation UK

Posted: at 7:14 am

There is a lot of money to be made from Artificial Intelligence. By one estimate, the market is projected to hit US$36.8 billion by 2025. Some of this money will undoubtedly go to social good, like curing illness, disease and infirmity. Some will also go to better understanding intractable social problems like wealth distribution, urban planning, smart cities, and more efficient ways to do just about everything. But the key word here is some.

There's no shortage of people touting the untold benefits of AI. But once you look past the utopian/dystopian and techno-capitalist hyperbole, what we are left with is a situation where various stakeholders want to find new and exciting ways to part you from your money. In other words: it's business, not personal.

While the immediate benefits of AI might be clear from a strategic business perspective, the longer-term repercussions are not. It's not just that the future is impossible to predict, complex technologies are hard to control, and human values are difficult to align with computer code; it's also that, in the present, it's hard to hear the voices calling for temperance and judiciousness over the din of companies clamouring for market advantage.

This is neither a new nor a recent phenomenon. Whether it was the social media boom, the smartphone revolution, or the commercialisation of the world wide web, if there's money to be made, entrepreneurs will try and make it. And they should. For better or worse, economic prosperity and stability depend on what brilliance can be conjured up by scientific minds.

But that's only one side of the coin. The flipside is that prosperity and stability can only be maintained if equally brilliant minds work together to ensure we have durable ways to govern these technologies: legally, ethically, and for the social good. In some cases, this might mean agreeing that there are simply certain things we should not do with AI; some things that profit should not be derived from. We might call this conscious capitalism, but it is, in fact, now a societal imperative.

There are structural problems in how the AI industry is shaping up, and serious asymmetries in the work that is being done. It's all well and good for large companies, invested in presenting themselves as the softer, cuddlier, but no less profitable, face of this new technological revolution, to tout hashtags like #responsibleAI or #AIEthics. No rational person would object to either, but they should not distract from the fact that hashtags aren't coherent policy. Effective policy costs money to research, devise, and implement, and right now there is not enough time, cash, brainpower and undivided attention being devoted to building the robust governance infrastructure that will be required to complement this latest wave of technological terraforming.

There are people out there thinking the things that need to be thought and implemented on the law, policy and governance side, but they are being drowned out by the PR, social media influencers and marketing campaigns that want to turn a profit from AI, or tell you how they can help your company do so.

Ultimately, our reach exceeds our grasp. We are far better at building new, exciting and powerful technologies than we are at governing them. To an extent, this has always been the case with new technologies, but the gap between what we create and the extent we can control it is widening and deepening.

Over the course of my PhD, where I researched long-term strategies for AI governance and regulation, I was offered some sage advice: "If you want to ensure you're remembered as a fool, make predictions about the future." While I try and keep that in mind, I am going to go out on a limb: AI will fundamentally remake society beyond all imagination.

Our commitment to ensuring safe and beneficial AI should amount to more than hashtags, handshakes and changing the narrative. It should be internalised into the ethos of AI development. Technical research must go hand in hand with law and policy research on both the public and private side. With great power comes great shared responsibility, and it's about time we recognise that this is the best business model we have for AI going forward.

If we are going to try and socialise the benefits of AI across society, as the familiar refrain goes, we need to get serious about the distribution of money across the AI industry today. Public and private research and public engagement have a critical role to play in this, even if it's easier (and cheaper) to co-opt them into in-house research. We need to build a robust government-led research infrastructure in the UK, Europe and beyond to meet head-on the challenges AI and other tech will pose. This means we need to think about more than just data protection, algorithmic transparency and bias.

We also need to get serious about how our legal and political institutions will need to adapt to meet the challenges of tomorrow. And they will need to adapt, just as they have proven able to do in the face of earlier technological changes, whether it was planes, trains, automobiles or computers. From legal personhood to antitrust laws, or criminal culpability to corporate liability, we are starting to confront the incommensurability of certain legal norms with the lived reality of the 21st century.

AI is a new type of beast. We cannot do governance as usual, which has meant waiting for the latest and greatest tech to appear and then frantically reacting to keep it in check. Despite protestations to the contrary, we must be proactive in engaging with AI development, not reactive. In the parlance of regulation, we need to think ex ante and not just ex post. The hands-off, we-are-just-a-platform-and-have-no-responsibility-here tone of Silicon Valley must be rejected once and for all.

If we are going to adapt our institutions to the 21st century we must understand how they have adapted before, and what can be done today to equip them for the challenges of tomorrow. These changes must be premised upon evidence; not fatalistic conceits about the machines taking over, not philosophical frivolity, not private interests. We need smart people on the law and policy side working with the smart people sitting at the keyboards and toiling in the labs at the companies where these engines of tomorrow are being assembled line by line. Some might see this as an unholy alliance, but it is, in fact, a noble goal.

The governance and regulation of AI is not a national issue; it is a global issue. The untold fortunes being poured into the technical side of AI research need to start making their way into the hands of those devoted to understanding how we might best actualise the technology, and how we can in good conscience use it to solve problems where there is no profit to be made.

The risk we run is that AI research kick-starts a new global arms race, one where finishing second is framed as tantamount to economic hara-kiri. There is tremendous good that the AI industry can do to help change this, but so far these good intentions have not manifested themselves in ways conducive to building the robust law, policy and social-scientific infrastructure that must complement the technical side. As long as this imbalance continues, be afraid. Be very afraid.

View post:

AI could kickstart a new global arms race we need better ways to govern it before it's too late - The Conversation UK

