NVIDIA’s Accelerated Computing Platform To Power Japan’s Fastest AI Supercomputer – Forbes

Tokyo Tech is in the process of building its next-generation TSUBAME supercomputer, featuring NVIDIA GPU technology and the company's Accelerated Computing Platform. TSUBAME 3.0, as the system will be known, will ultimately be used in tandem with ...

Reveal Acquires NexLP to become the leading AI-powered eDiscovery Solution – PR Newswire India

"The future of eDiscovery is artificial intelligence. We've acquired the leader in this space to ensure our platform is powered by cutting-edge AI technology and NexLP's premier data science team," said Reveal CEO, Wendell Jisa. "This exclusive integration of NexLP AI into Reveal's solution provides our clients the opportunity to lead in the evolution of how law is practiced."

NexLP's artificial intelligence platform turns disparate, unstructured data - including email communications, business chat messages, contracts and legal documents - into meaningful insights that can be used to deliver operational efficiencies and proactive risk mitigation for legal, corporate and compliance teams.

Reveal clients have access to the next-generation solution now. The companies have worked to fully integrate NexLP's AI software into Reveal's review software for more than a year. All features, including the industry-exclusive ability to run multiple AI models, as well as all future functionality, become part of Reveal's standard software. NexLP's artificial intelligence platform will remain available as a stand-alone application for current clients.

With the acquisition, Jay Leib, Co-Founder and CEO of NexLP, joins the leadership team of Reveal as its EVP of Innovation & Strategy.

"We chose Reveal, after considering all the major players in the space, because they offer by far the most comprehensive, solutions-oriented technology on the market and we have a shared vision for the future of legal technology," said Jay Leib, Reveal EVP of Innovation & Strategy. "Reveal's global footprint and ability to deploy the Reveal solution in the cloud or on-premise enables us to rapidly expand the adoption of AI to tens of thousands of legal, risk and compliance professionals overnight. Our existing clients and partners should all be thrilled with our ability to expand our capabilities by joining Reveal."

The NexLP acquisition is Reveal's second major investment since Gallant Capital Partners, a Los Angeles-based investment firm, acquired a majority stake in Reveal in 2018. In June 2019, Reveal acquired Mindseye Solutions, an industry-leading processing and early case assessment software solution.

About Reveal Data Corporation

Reveal helps legal professionals solve complex discovery problems. As a cloud-based provider of eDiscovery, risk and compliance software, Reveal offers the full range of processing, early case assessment, review and artificial intelligence capabilities. Reveal clients include Fortune 500 companies, legal service providers, government agencies and financial institutions in more than 40 countries across five continents. Featuring deployment options in the cloud or on-premise, an intuitive user design, multilingual user interfaces and the automatic detection of more than 160 languages, Reveal accelerates legal review, saving users time and money. For more information, visit http://www.revealdata.com.

About NexLP

NexLP's Story Engine uses AI and machine learning to derive actionable insight from structured and unstructured data, helping legal, corporate and compliance teams proactively mitigate risk and act on untapped opportunities faster and with a greater understanding of context. In 2014, NexLP was selected to be a member of TechStars Chicago. For more information, visit: http://www.nexlp.com.

Contact

Jennifer Fournier[emailprotected]

Photo - https://mma.prnewswire.com/media/1226822/Jisa_and_Leib_Announcement.jpg

http://www.revealdata.com

SOURCE Reveal


A Super Smash Bros-playing AI has taught itself how to stomp professional players – Quartz

The AI, nicknamed Phillip, had been built by a Ph.D student from MIT, with help from a friend at New York University, and it honed its craft inside an MIT supercomputer. By the time Gravy stopped playing, the bot had killed him eight times, compared to ...



Elon Musk’s Freak-Out Over Killer Robots Distracts from Our Real AI Problems – WIRED



Game Tree and Optimization under Adversarial in AI – Analytics Insight

Game theory, in economics, treats an environment with multiple agents as a game, regardless of whether the agents are cooperative or competitive. In AI, games are considered adversarial in nature. By adversarial we mean that the environment is two-player and turn-taking, and that at the end of the game the players' utility values are equal and opposite (zero-sum).

These games take place in a deterministic, fully observable environment. An example is chess: the rules of chess make the computer representation of the game correct in every relevant detail. However, the presence of an opponent makes solving the problem more complicated, because it introduces uncertainty.

Let's consider a game of two players, and call them MIN and MAX. The two players compete, with MAX trying to maximize the payoff. A search problem in the form of a game can be defined with four components: an initial state, a successor function that returns the legal moves from each state, a terminal test that determines when the game is over, and a utility function that assigns a numeric payoff to each terminal state.

In a normal search problem, the objective of MAX would be to find a sequence of moves leading to a winning terminal state, and then make its first move. In the presence of the adversary MIN, however, MAX's strategy must reach a winning terminal state regardless of what moves MIN makes.

To determine the optimal strategy for MAX, the minimax algorithm is used to find the best first move. The steps in this algorithm are: generate the game tree down to the terminal states; apply the utility function to each terminal state to obtain its payoff; use those payoffs to determine the utility of the nodes one level higher; continue backing up the values toward the root, one layer at a time; and finally, at the root, have MAX choose the move leading to the highest backed-up value.

Since the decision maximizes the payoff under the assumption that the opponent will play perfectly to minimize it, it is called the minimax decision.

Figure 1

In the two-player game tree generated in Figure 1, the A nodes indicate the moves by MAX and the V nodes indicate those of MIN. The terminal nodes show the payoff value for MAX calculated by the rules of the game, i.e. the utility function. The payoff values of the other nodes are calculated by the minimax algorithm from the payoffs of their successors. In this case, MAX's best move is A1, and MIN's best reply is A11.
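The minimax computation described above can be sketched in a few lines of Python. The article does not list the leaf payoffs in Figure 1, so the values below are illustrative assumptions, chosen so that the result matches the article's conclusion that MAX's best move is A1:

```python
# Minimax over a two-ply game tree like Figure 1. The leaf payoffs are
# illustrative (the article does not give them); the structure is three
# MAX moves (A1, A2, A3), each answered by three MIN replies.

def minimax(node, maximizing):
    """Return the minimax value of a node.

    A node is either a terminal payoff (int) or a list of child nodes.
    """
    if isinstance(node, int):  # terminal test: leaves carry the utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX chooses among A1, A2, A3; MIN replies at the second ply.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]

# Back up MIN's best (minimizing) reply under each of MAX's moves.
move_values = [minimax(branch, maximizing=False) for branch in tree]
print(move_values)  # -> [3, 2, 2]

best = move_values.index(max(move_values))
print(f"MAX's best move: A{best + 1}")  # -> A1
```

Real game trees are far too large to enumerate exhaustively; practical implementations cut off the search at a depth limit, substitute an evaluation function for the utility, and usually add alpha-beta pruning.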


Controversial facial recognition software being used to identify child victims of sexual… – Business Insider

Police departments across the United States are paying tens of thousands of dollars apiece for access to software that identifies faces using images scraped from major web platforms like Google, Facebook, YouTube, and Twitter.

The software is produced by a relatively unknown tech startup named Clearview AI, and the company is facing major pushback over its data-gathering tactics, which were earlier reported by The New York Times. It pulls images from the web and social media platforms, without permission, to create its own searchable database.

Put simply: The photos that you uploaded to your Facebook profile could've been ripped from your page, saved, and added to this company's photo database.

Photos of you, photos of friends and family: all of it is scraped from publicly available social media platforms, among other places, and saved by Clearview AI. That searchable database is then sold to police departments and federal agencies.

Those law enforcement groups are using those photos for, among other things, identifying child victims of abuse. According to a report in The New York Times on Friday, police departments across the US have repeatedly used Clearview's application to identify "minors in exploitative videos and photos."

In one example from the report, Clearview's application assisted in making 14 positive IDs attached to a single offender.

The company doesn't hide the fact that its software is used as such. "Clearview helps to exonerate the innocent, identify victims of child sexual abuse and other crimes, and avoid eyewitness lineups that are prone to human error," the company's website reads.

It's a clear upside to a piece of technology that comes with major tradeoffs: many of the billions of photos Clearview scraped from the internet weren't intended for use in a commercially sold, searchable database. The company pulls its photos from "the open web," including services like YouTube, Facebook, and Twitter.

The companies in charge of the services it pulls from have issued cease-and-desist letters to Clearview. They each have provisions explicitly spelled out in their user agreements to prevent this type of misuse.

"YouTube's Terms of Service explicitly forbid collecting data that can be used to identify a person," YouTube spokesperson Alex Joseph told Business Insider in an email on Wednesday morning. "Clearview has publicly admitted to doing exactly that, and in response we sent them a cease and desist letter."

Twitter sent a similar letter in late January, and Facebook sent one this week as well.

Facial recognition technology has existed for years, but searchable databases tied to facial recognition are something new. (AP Photo/Mike Derer)

Clearview AI CEO Hoan Ton-That argues that his company's software isn't doing anything illegal, and doesn't need to delete any of the images it has stored, because it's protected under US law. "There is a First Amendment right to public information," he told CBS This Morning in an interview published on Wednesday morning. "The way that we have built our system is to only take publicly available information and index it that way."

As for his response to the cease-and-desist letters? "Our legal counsel has reached out to them, and are handling it accordingly."

Ton-That said that Clearview's software is being used by "over 600 law enforcement agencies across the country" already. Contracts to use the service cost as much as $50,000 for a two-year deal.

Clearview AI's lawyer, Tor Ekeland, told Business Insider in an emailed statement, "Clearview is a photo search engine that only uses publicly available data on the Internet. It operates in much the same way as Google's search engine. We are in receipt of Google and YouTube's letter and will respond accordingly."


AI and Robotics Trends: Experts Predict – Datamation

Many experts in the field firmly believe 2017 will be a breakout year for both artificial intelligence and robotics, since the two often go together. Spoiler alert: it's all good.

AI Makes Robots Smarter

Robots use an increasing number of sensing modalities including taste, smell, sonar, IR, haptic feedback, tactile sensors, and range of motion sensors. They are also becoming better at picking up on facial expressions and gestures, so their interactions with humans become more natural, said Kevin Curran, IEEE senior member and professor of cyber security at Ulster University.

"Basically, AI is crucial for all their learning and adaptive behavior so they can adapt existing capabilities to cope with environmental changes. AI is key to helping them learn new tasks on the fly by sequencing existing behaviors," he said.

Karsten Schmidt, head of technology at the Innovation Center Silicon Valley for SAP Labs echoed this sentiment. "In 2017, we will see AI gain greater acceptance and momentum as humans come to increasingly rely, trust and depend more on AI-driven decisions and question them less. This will happen as a direct result of improved AI learning due to more usage and a broader user base, and as the quality and usefulness of AI software in turn improves," he said.

Meet Your AI Co-Worker

Many people fear losing their jobs to robots, but more than likely you will have a robot for a co-worker. Then again, if you've been in the workforce long enough, you've probably already had a robot for a co-worker, just in human form.

"In 2017, we are seeing a growing emergence of robots designed to operate alongside people in everyday human environments. Autonomous service robots that assist workers in warehouses, deliver supplies in hospitals, and maintain inventory of items in grocery stores are emerging onto the market," said Sonia Chernova, assistant professor at Georgia Tech College of Computing.

These systems need humans because one thing robotics researchers are still struggling with is robotic arms. There's no substitute for the human arm to pick things up and manipulate objects. "[Robot arms] have of course been used successfully for decades in manufacturing, but current techniques work reliably only in controlled factory environments, and are not yet robust enough for the real world," said Chernova.

This could lead to the rise of "AI Supervisors," said Tomer Naveh, CTO of Adgorithms, an AI-based digital marketing platform. Robots have already taken on many of the labor-intensive, manual (read: boring) tasks we do in our everyday lives, but robots will get smarter, and they need AI to do it, he said.

"AI systems will get better at communicating their decisions and reasoning to their operators, and those operators will respond with new rules, business logic, and feedback that make it more and more useful in practice over time. As a result we will see people shifting from doing tasks by themselves, to supervising AI software on how to do it for them," he said.

That's actually a disturbing thought.

Changing Retail

AI and robotics will slowly move into another area where human error is common: retail. To some degree there is already automation in optical scanners and retail tracking used by stores to manage inventory, but it will be considerably improved.

The retail industry, for example, has been unable to address the problem of non-scanned items at checkout, which accounts for 30% of retailers' annual losses. They only discover the loss in inventory well after the fact.

"AI is stepping in to address issues of this caliber across industries, and as a result, it's often gathering just as much data as it's processing. This resulting data is becoming a secondary benefit to businesses that use AI. AI apps created to detect these non-scans are now also providing retailers with information about their origins, whether they're fraudulent or accidental, and how customers and cashiers are gaming the system," said Alan O'Herlihy, CEO of Everseen, developer of AI products for point-of-sale systems.

And as consumers have positive experiences with drone deliveries, public opinion may go a long way towards opening up regulations for further drone use, said Jake Rheude, director of business development for Red Stag Fulfillment, an eCommerce fulfillment provider.

"Consumers are already fully on board with the concept of drone delivery. According to The Walker Sands Future of Retail 2016 Study, 79% of US consumers said they would be 'very likely' or 'somewhat likely' to request drone delivery if their package could be delivered within an hour. And 73% of respondents said that they would pay up to $10 for a drone delivery. This is an unprecedented level of acceptance for a new technology with so little real-world experience from consumers," he said.

AI in Your Home

Another prediction made by umpteen science fiction movies, usually with an alarmist tone, is that AI will come into the home in a big way. It already has if you have an iPhone with Siri, or use Windows 10 and Cortana. Gradually it will move into other devices, the experts predict.

"Alexa, Cortana and Siri are great, but they still lack the sophistication and accuracy to be relied upon as a utility. In 2017, advances in natural language processing and natural language generation will transform what digital assistants understand and how they analyze and respond with legitimately useful information. The era of just opening a related Wikipedia page is over," said Matt Gould, AI expert and co-founder of Arria NLG, which develops technology that translates data into language.

To make these devices work optimally, they need to develop an emotional quotient, or an EQ, predicts Dr. Rana el Kaliouby, CEO and co-founder, Affectiva, which develops facial recognition software. "We expect to see Emotion AI really come to the fore this year, and once AI systems develop social skills and rapport, AI interfaces will be more engaging and sticky, and less frustrating for their users, driving even wider adoption of the technology," she said.

She predicts that in the future, all of our devices will be equipped with a chip that can adapt our experiences to our emotions in real time, by reading facial expressions, analyzing tone of voice and possessing built-in emotion awareness. "The ability of technology to adapt to our mood and preferences could enhance experiences ranging from driving a car to ordering a pizza," she said.

And this should mean less typing, said Scott Webb, president of Avionos. "Physical interaction with hand-to-keyboard commands will give way to more organic input methods like voice and physical response as we move forward," he said.

Better Security

It's been said before but is worth repeating that AI will improve security because, like in so many other cases, security AI won't be prone to human failings of boredom, fatigue, illness and disinterest that often causes a security lapse. It will also have much faster reaction times and much better recognition of unusual patterns.

"Machine learning and the models generated through processes around machine learning are helping enterprises analyze massive amounts of data and identify trends, anomalies, and things not detectable through standard modeling. Machine learning algorithms are helping security researchers dynamically identify threats, helping airlines improve the maintenance and reliability of their aircraft, and providing the backbone for self-driving cars to analyze data in real time to make decisions," said David Dufour, senior director of engineering at antimalware vendor Webroot.

That immediacy is needed for catching data breaches as well. The average time to discover a network attacker is about five months, giving attackers plenty of time to achieve their goals, said Peter Nguyen, director of technical services at LightCyber, which makes behavior-based security software.

"Finding signs of an attacker is difficult and demands the use of AI. Instead of trying to encounter, identify and block threats by their known characteristics, the way to find an active attacker is through their operational activities. Using machine learning, it's possible to learn the good behavior of all users and devices and then find anomalies. Then, AI can be focused to find those anomalies that are truly indicative of an active attack," he said.
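As a minimal sketch of the baseline-then-flag idea Nguyen describes, one can model a single numeric behavior per user and surface values that deviate sharply from its history. The daily file-access counts and the 3-sigma threshold below are illustrative assumptions, not LightCyber's actual method:

```python
# Learn a user's "good behavior" baseline, then flag recent observations
# that fall far outside it. Data and threshold are illustrative.
from statistics import mean, stdev

def find_anomalies(history, recent, threshold=3.0):
    """Return recent values more than `threshold` standard deviations
    from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in recent if abs(x - mu) > threshold * sigma]

# Baseline: a user's typical daily file-access counts.
baseline = [42, 38, 45, 40, 44, 39, 41, 43]

# Recent activity: one day shows a burst that could indicate exfiltration.
print(find_anomalies(baseline, [40, 44, 910]))  # -> [910]
```

Production systems track many behavioral features per user and device and combine their signals, but the principle is the same: learn what normal looks like, then focus attention on the deviations.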


Visit the cutting edge in AI: Transform 2020 Expo (July 15-17) – VentureBeat


To say that technological change in AI is fast-moving is an understatement. As engineers and data scientists unlock more of the potential of AI and machine learning, ever more innovative solutions continue to advance the goals of business leaders.

At Transform 2020 next week, you'll have a chance to see those solutions for yourself. Transform 2020 Expo (July 15-17) will showcase some of the most cutting-edge AI companies, from large tech giants like Intel and Dell to some of the most innovative growth companies and startups like Dataiku, Cloudera, and Modzy.

This means you'll be able to get an up-close look at some of the most advanced solutions spanning AI security, automation, conversational AI, explainable AI, and training data, as well as solutions for specialized areas such as customer experience and specific industries such as wealth management.

Each exhibitor will host a virtual expo booth, where you can interact, engage in live Q&A and chat, as well as book private meetings in order to have more in-depth discussions about your needs and how they can be met.

Once the event begins, just click on the Expo Booths icon in the menu displayed on the event platform. From there, you'll be able to earmark and select all those you'd like to follow up with.

And if you happen to be a company that's disrupting the AI space? We may still be able to slip you in if you get in touch ASAP. Just head to our registration page and select the Digital Expo Booth option.

This, combined with our special 1-1 meeting feature for business decision makers, will make Transform one of the best networking events of the year for business executives looking to implement AI.

See you there!


Sci-fi perpetuates a misogynistic view of AI: Here's how we can fight it – The Next Web

Fiction helps us imagine the future of AI and the impact that it will have on our lives. But it also perpetuates stereotypes for generations to come.

From Greek mythology to contemporary sci-fi, robots are constantly anthropomorphized. The female androids are typically portrayed as beautiful, subservient, and sexually passive, or as deceitful killers on the rampage. Warrior machines, meanwhile, are normally gendered male, whether they're protecting humans like RoboCop or trying to wipe them out like the Terminator.

These stereotypes endure long after the story ends. They help foster Silicon Valley's tech-bro culture and the products that it creates.


Take the tendency to give voice assistants female voices and names. We've been conditioned to prefer synthesized female voices, as they sound warmer. Once that prejudice is embedded in the tech, it's sustained by our interactions with it.

As AI researcherKanta Dihal noted at the CogX conference this week:

Because people get used to a feminine, servile Alexa, they'll continue to associate women with servile roles. And these roles relate not only to jobs, such as the servant in the case of women, or the soldier in the case of men, but also to social roles and roles within relationships and hierarchies.

It's not only stereotypes of gender that fiction reinforces. The real experiences of people of color have also been sidelined or sublimated.

In Dihal's research on depictions of intelligent machines, she's uncovered misleading allegories of slavery in stories of AI rebelling against humans:

Those narratives also in a way perform a dehumanizing function. Because by drawing on existing narratives of black slaves and transposing them onto narratives of robots that are very often racialized as white, this is a way of reappropriating a literary history and erasing these black voices from that non-fictional history.

Excluding BAME, trans, and female voices from these stories helps sustain biases in tech. But we can still challenge these narratives, by diverting attention from tech-bro fantasies toward marginalized voices. From Nnedi Okorafor's vision of childbirth in a future Nigeria to Cassandra Rose Clarke's tale of a girl's love affair with an android, there are already plenty of alternatives to try.

But the first step towards promoting diverse perspectives in fiction is acknowledging the homogeneity of the Western canon. As Dihal's colleague Kate Devlin put it:

If you examine the narratives, then you have a chance of disrupting the narratives.

Published June 10, 2020 18:18 UTC

Original post:

Sci-fi perpetuates a misogynistic view of AI Heres how we can fight it - The Next Web

An AI hiring firm says it can predict job hopping based on your interviews – MIT Technology Review

The firm in question is Australia-based PredictiveHire, founded in October 2013. It offers a chatbot that asks candidates a series of open-ended questions. It then analyzes their responses to assess job-related personality traits like drive, initiative, and resilience. According to the firm's CEO, Barbara Hyman, its clients are employers that must manage large numbers of applications, such as those in retail, sales, call centers, and health care. As the Cornell study found, it also actively uses promises of fairer hiring in its marketing language. On its home page, it boldly advertises: "Meet Phai. Your co-pilot in hiring. Making interviews SUPER FAST. INCLUSIVE, AT LAST. FINALLY, WITHOUT BIAS."

As we've written before, the idea of bias-free algorithms is highly misleading. But PredictiveHire's latest research is troubling for a different reason. It is focused on building a new machine-learning model that seeks to predict a candidate's likelihood of job hopping, the practice of changing jobs more frequently than an employer desires. The work follows the company's recent peer-reviewed research that looked at how open-ended interview questions correlate with personality (in and of itself a highly contested practice). Because organizational psychologists have already shown a link between personality and job hopping, Hyman says, the company wanted to test whether it could use its existing data for the prediction. "Employee retention is a huge focus for many companies that we work with given the costs of high employee churn, estimated at 16% of the cost of each employee's salary," she adds.

The study used the free-text responses from 45,899 candidates who had used PredictiveHire's chatbot. Applicants had originally been asked five to seven open-ended questions and self-rating questions about their past experience and situational judgment. These included questions meant to tease out traits that studies have previously shown to correlate strongly with job-hopping tendencies, such as being more open to experience, less practical, and less down to earth. The company researchers claim the model was able to predict job hopping with statistical significance. PredictiveHire's website is already advertising this work as a "flight risk" assessment that is coming soon.
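The article doesn't reveal PredictiveHire's actual model, but the setup it describes, free-text answers in, a job-hopping likelihood out, is a standard supervised text-classification problem. The sketch below shows the general shape of such a model using a hand-rolled bag-of-words logistic regression; the toy answers and "job-hopper" labels are invented for illustration and bear no relation to PredictiveHire's data or methods.

```python
import math
from collections import Counter

# Hypothetical toy data: free-text interview answers paired with a binary
# "job-hopper" label. Real systems train on tens of thousands of responses;
# these four examples are invented purely for illustration.
DATA = [
    ("I love trying new roles and moving on to fresh challenges", 1),
    ("I get bored quickly and change jobs often", 1),
    ("I prefer stable long term work with one employer", 0),
    ("I stayed ten years at my last company and value loyalty", 0),
]

def featurize(text, vocab):
    """Bag-of-words vector: count of each vocabulary word in the text."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=200, lr=0.5):
    """Per-example gradient descent on the logistic log-loss."""
    vocab = sorted({w for text, _ in data for w in text.lower().split()})
    weights = [0.0] * len(vocab)
    bias = 0.0
    for _ in range(epochs):
        for text, label in data:
            x = featurize(text, vocab)
            pred = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
            err = label - pred  # gradient of the log-loss w.r.t. the logit
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return vocab, weights, bias

def predict(text, vocab, weights, bias):
    """Probability-like score in (0, 1) that the candidate is a 'hopper'."""
    x = featurize(text, vocab)
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

vocab, weights, bias = train(DATA)
score = predict("I change jobs often", vocab, weights, bias)
print(round(score, 2))
```

The point of the sketch is only that any such model learns correlations between word choice and the label it is fed, which is exactly why critics question what the label "job hopper" actually captures.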

PredictiveHire's new work is a prime example of what Nathan Newman argues is one of the biggest adverse impacts of big data on labor. Newman, an adjunct associate professor at the John Jay College of Criminal Justice, wrote in a 2017 law paper that beyond the concerns about employment discrimination, big-data analysis had also been used in myriad ways to drive down workers' wages.

Machine-learning-based personality tests, for example, are increasingly being used in hiring to screen out potential employees who have a higher likelihood of agitating for increased wages or supporting unionization. Employers are increasingly monitoring employees' emails, chats, and other data to assess which might leave and calculate the minimum pay increase needed to make them stay. And algorithmic management systems like Uber's are decentralizing workers away from offices and digital convening spaces that allow them to coordinate with one another and collectively demand better treatment and pay.

None of these examples should be surprising, Newman argued. They are simply a modern manifestation of what employers have historically done to suppress wages by targeting and breaking up union activities. The use of personality assessments in hiring, which dates back to the 1930s in the US, in fact began as a mechanism to weed out people most likely to become labor organizers. The tests became particularly popular in the 1960s and 70s once organizational psychologists had refined them to assess workers for their union sympathies.

In this context, PredictiveHire's flight-risk assessment is just another example of this trend. "Job hopping, or the threat of job hopping," points out Barocas, "is one of the main ways that workers are able to increase their income." The company even built its assessment on personality screenings designed by organizational psychologists.

Barocas doesn't necessarily advocate tossing out the tools altogether. He believes the goal of making hiring work better for everyone is a noble one and could be achieved if regulators mandate greater transparency. "Currently none of them have received rigorous, peer-reviewed evaluation," he says. But if firms were more forthcoming about their practices and submitted their tools for such validation, it could help hold them accountable. It could also help scholars engage more readily with firms to study the tools' impacts on both labor and discrimination.

"Despite all my own work for the past couple of years expressing concerns about this stuff," he says, "I actually believe that a lot of these tools could significantly improve the current state of affairs."

Continue reading here:

An AI hiring firm says it can predict job hopping based on your interviews - MIT Technology Review

Precision Medicine pushes demand for HPC at the Edge: AI on the Fly Delivers – insideHPC

In this special guest feature, Tim Miller from One Stop Systems writes that by bringing specialized, high performance computing capabilities to the edge through AI on the Fly, OSS is helping the industry deliver on the enormous potential of precision medicine.


Advances in high performance computing, paired with rapid diagnostic tools, advanced imaging devices, and genetic sequencers, are enabling a growing trend toward precision medicine, where healthcare is personalized based on an individual's genetic make-up. Artificial intelligence can deliver transformative insights that enhance and accelerate this trend. This is a dramatic example of how high performance computing at the edge can improve the quality of life of millions of people around the world.

To enable this revolution, the most powerful, high performance computing technologies historically associated with centralized enterprise and cloud data centers need to move out to the edge, often embedded directly in specialized medical devices or co-located with the primary data sources in the field at hospitals and clinics.

Three key elements are fundamental in many of these edge devices, from MRI or CT imaging devices to genetic sequencers and cell analysis systems. First, there is the requirement to acquire and shape massive amounts of high-speed incoming data. Second, edge devices need to efficiently store this raw data and then move it to compute and analysis engines. Third, these engines transform the transferred data into actionable intelligence. All of these capabilities need to be designed and integrated to meet the specialized size, power, and environmental constraints of the edge-based application.
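The three-stage flow described above (acquire, store, analyze) can be sketched as a toy pipeline. Everything here is an invented in-memory stand-in: the generator plays the role of a high-rate sensor, a bounded deque stands in for fast local (e.g. NVMe) storage, and a summary function stands in for the compute engine.

```python
from collections import deque

def acquire(n_frames):
    """Stage 1: simulate high-rate data streaming off a sensor."""
    for i in range(n_frames):
        yield {"frame": i, "samples": [i * 0.1, i * 0.2, i * 0.3]}

def store(stream, buffer):
    """Stage 2: land raw data in a fast local buffer on its way to analysis."""
    for record in stream:
        buffer.append(record)
        yield record

def analyze(stream):
    """Stage 3: turn the buffered raw data into a summary 'insight'."""
    total = 0.0
    count = 0
    for record in stream:
        total += sum(record["samples"])
        count += 1
    return {"frames": count, "mean_signal": total / count}

raw_buffer = deque(maxlen=1024)  # bounded, like finite local storage
result = analyze(store(acquire(100), raw_buffer))
print(result["frames"], round(result["mean_signal"], 2))
```

The design point the article is making is that all three stages must sit physically close together at the edge; the generator-chaining here mirrors that tight coupling, with no round-trip to a remote data center between stages.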

Increasingly, the power required in these compute engines is being delivered through NVIDIA Tensor Core GPUs, where each NVIDIA GPU has thousands of computational cores that can process massive data in parallel, as well as the NVIDIA end-to-end software solution stack. An illustrative example is secondary genomic analysis used in precision medicine. Modern genomics involves the rapid production of vast amounts of raw sequencing data using next-generation sequencers (NGS), coupled with massive computing for secondary analysis that converts the data into useful results. The most popular toolkit for doing this secondary analysis is the GATK4 Best Practices pipeline. Traditionally, this work has been done on large numbers of CPUs. Parabricks, a company recently acquired by NVIDIA, has changed the paradigm and developed a GPU-based solution that executes genomic analysis best-practices workflows on NVIDIA GPUs. Parabricks is built using NVIDIA CUDA-X and benefits from CUDA, cuDNN, and TensorRT inference software, and is now available through the NVIDIA GPU Cloud (NGC), a software hub for accelerated computing applications. Deploying this capability integrated with sequencing platforms in the hospital and clinic environment is a critical requirement for making precision medicine a reality.

One Stop Systems, Inc. (OSS) has developed the technology and expertise required to work with OEMs to build this next generation of highly intelligent medical devices ready for field deployment. In fact, its AI on the Fly platforms and building blocks are being used by OEMs to build and deliver these medical solutions to the market today. AI on the Fly puts computing and storage resources for AI and HPC workflows not in the datacenter, but on the edge near the sources of data. Applications are emerging for this new paradigm not only in precision medicine, but in diverse areas including autonomous vehicles, battlefield command and control, and industrial automation. The common elements of these solutions are high-data-rate acquisition, high-speed low-latency storage, and efficient high performance compute analytics. With OSS, all of these building-block elements are connected seamlessly with memory-mapped PCI Express interconnect, configured and customized as appropriate to meet the specific environmental requirements of in-the-field installations.

OSS building blocks include high slot count PCI Express expansion systems capable of acquiring 100 GB per second of data, NVMe storage nodes providing up to a petabyte of high speed, low latency solid state storage, as well as platforms capable of housing up to 16 of the latest NVIDIA GPUs or other HPC/AI accelerators for high end compute engine requirements. OSS also provides the customization capabilities to provide all of these elements in unique form factors or designed for specialized environmental and ruggedized conditions.

OSS is working with genomic sequencer OEMs to embed its data acquisition, storage, and compute engine platforms directly in genomic analysis solutions deployed in medical facilities worldwide. In 2019, OSS worked with Parabricks to benchmark genomic analysis results on its compute accelerator utilizing 16 NVIDIA V100S GPUs in parallel. When executed on the OSS platform, execution speed for secondary analysis increased by a factor of 40X, with a full human genome analysis reduced from two days to less than one hour. OSS is working with other medical OEMs to deploy AI on the Fly solutions in live cell analysis systems, which capture and analyze images for cell therapy, oncology, and immunology research. Additional deployed applications include robotic eye surgery and high-speed blood analysis systems.

By bringing specialized, high performance computing capabilities to the edge through AI on the Fly, OSS is helping the industry deliver on the enormous potential of precision medicine.

Disclaimer: This article may contain forward-looking statements based on One Stop Systems' current expectations and assumptions regarding the company's business and the performance of its products, the economy, and other future conditions and forecasts of future events, circumstances, and results.


About the Author

Tim Miller is Vice President of Strategy at One Stop Systems. Tim has over 33 years of experience in high-tech operations, management, marketing, business development, and sales. He previously was CEO of Dolphin Interconnect Solutions and CEO and founder of StarGen, Inc. Tim holds a Bachelor of Science in Engineering from Cornell University, a Master of Business Administration from Wharton, and a Master's in Computer Science from the University of Pennsylvania.

Read the original here:

Precision Medicine pushes demand for HPC at the Edge: AI on the Fly Delivers - insideHPC

DARPA Is Working to Make AI More Trustworthy – Futurism

In Brief

In order to probe the AI mind, DARPA is funding research by Oregon State University that will try to understand the reasoning behind decisions made by AI systems. DARPA hopes that this will make AI more trustworthy.

Cracking the AI Black Box

Artificial intelligence (AI) has grown by leaps and bounds over the past years. Now there are AI systems capable of driving cars and making medical diagnoses, as well as making numerous other choices that people make on a day-to-day basis. Except that when it comes to humans, we can actually understand the reasoning behind such decisions (to a certain extent).

When it comes to AI, however, there's a certain black box behind decisions, such that even AI developers themselves don't quite understand or anticipate the decisions an AI is making. We do know that neural networks are taught to make these choices by exposing them to a huge data set. From there, AIs train themselves to apply what they learn. It's rather difficult to trust what one doesn't understand.

The U.S. Defense Advanced Research Projects Agency (DARPA) wants to break open this black box, and the first step is to fund eight computer science professors from Oregon State University (OSU) with a $6.5 million research grant. "Ultimately, we want these explanations to be very natural, translating these deep network decisions into sentences and visualizations," OSU's Alan Fern, principal investigator for the grant, said in a press release.

The DARPA-OSU program, set to run for four years, will involve developing a system that will allow AI to communicate with machine learning experts. They would start developing this system by plugging AI-powered players into real-time strategy games like StarCraft. The AI players would be trained to explain to human players the reasoning behind their in-game choices. This isn't the first project that puts AIs into video game environments. Google's DeepMind has also chosen StarCraft as a training environment for AI. There's also that controversial Doom-playing AI bot.

Results from this research project would then be applied by DARPA to its existing work with robotics and unmanned vehicles. Obviously, the potential applications of AI in law enforcement and the military require these systems to be ethical.

"Nobody is going to use these emerging technologies for critical applications until we are able to build some level of trust, and having an explanation capability is one important way of building trust," Fern said. Thankfully, this DARPA-OSU project isn't the only one working on humanizing AI to make it more trustworthy.

View post:

DARPA Is Working to Make AI More Trustworthy - Futurism

Analyzing Impacts of Covid-19 on Cognitive System and Artificial Intelligence (AI) Systems Market Effects, Aftermath and Forecast To 2026 – Cole of…

The global Cognitive System and Artificial Intelligence (AI) Systems market report compiles major statistical evidence for the Cognitive System and Artificial Intelligence (AI) Systems industry, offering readers guidance in overcoming the obstacles surrounding the market. The study covers a comprehensive set of factors such as global distribution, manufacturers, market size, and the market factors that affect global contributions. In addition, the study provides an in-depth competitive landscape, defined growth opportunities, market share coupled with product type and applications, the key companies responsible for production, and the strategies they utilize.

This market intelligence report, with forecasts to 2026, analyzes previous data gathered from reliable sources and sets out a projected growth trajectory for the Cognitive System and Artificial Intelligence (AI) Systems market. The report also covers comprehensive market revenue streams along with growth patterns, analytics focused on market trends, and the overall volume of the market.

Download PDF Sample of Cognitive System and Artificial Intelligence (AI) Systems Market report @ https://hongchunresearch.com/request-a-sample/25074

The study covers the following key players: Brainasoft, Brighterion, Astute Solutions, KITT.AI, IFlyTek, Google, Megvii Technology, NanoRep (LogMeIn), IDEAL.com, Intel, Salesforce, Albert Technologies, Microsoft, Ada Support, Ipsoft, SAP, Yseop, IBM, Wipro, H2O.ai, Baidu.

Moreover, the Cognitive System and Artificial Intelligence (AI) Systems report describes the market division based on parameters such as geographical distribution, product types, and applications. The market segmentation further clarifies the regional distribution of the Cognitive System and Artificial Intelligence (AI) Systems market, business trends, potential revenue sources, and upcoming market opportunities.

Market segment by type, the Cognitive System and Artificial Intelligence (AI) Systems market can be split into: On-Premise, Cloud-based.

Market segment by application, the Cognitive System and Artificial Intelligence (AI) Systems market can be split into: Voice Processing, Text Processing, Image Processing.

The Cognitive System and Artificial Intelligence (AI) Systems market study further highlights the segmentation of the Cognitive System and Artificial Intelligence (AI) Systems industry on a global distribution. The report focuses on the regions of North America, Europe, Asia, and the Rest of the World in terms of developing business trends, preferred market channels, investment feasibility, long-term investments, and environmental analysis. The report also investigates product capacity, product price, profit streams, supply-to-demand ratio, production and market growth rate, and a projected growth forecast.

In addition, the Cognitive System and Artificial Intelligence (AI) Systems market study also covers several factors such as market status, key market trends, growth forecast, and growth opportunities. Furthermore, it analyzes the challenges faced by the Cognitive System and Artificial Intelligence (AI) Systems market on a global and regional basis. The study also encompasses a number of opportunities and emerging trends, weighed by their impact on the global scale in acquiring a majority of the market share.

The study draws on a variety of analytical resources, such as SWOT analysis and Porter's Five Forces analysis, coupled with primary and secondary research methodologies. It covers all the bases surrounding the Cognitive System and Artificial Intelligence (AI) Systems industry as it explores the competitive nature of the market, complete with a regional analysis.

Brief about Cognitive System and Artificial Intelligence (AI) Systems Market Report with [emailprotected] https://hongchunresearch.com/report/cognitive-system-and-artificial-intelligence-ai-systems-market-25074

Some Point of Table of Content:

Chapter One: Cognitive System & Artificial Intelligence(AI) Systems Market Overview

Chapter Two: Global Cognitive System & Artificial Intelligence(AI) Systems Market Landscape by Player

Chapter Three: Players Profiles

Chapter Four: Global Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue (Value), Price Trend by Type

Chapter Five: Global Cognitive System & Artificial Intelligence(AI) Systems Market Analysis by Application

Chapter Six: Global Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import by Region (2014-2019)

Chapter Seven: Global Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue (Value) by Region (2014-2019)

Chapter Eight: Cognitive System & Artificial Intelligence(AI) Systems Manufacturing Analysis

Chapter Nine: Industrial Chain, Sourcing Strategy and Downstream Buyers

Chapter Ten: Market Dynamics

Chapter Eleven: Global Cognitive System & Artificial Intelligence(AI) Systems Market Forecast (2019-2026)

Chapter Twelve: Research Findings and Conclusion

Chapter Thirteen: Appendix continued

Check [emailprotected] https://hongchunresearch.com/check-discount/25074

List of Tables and Figures

Figure Cognitive System & Artificial Intelligence(AI) Systems Product Picture
Table Global Cognitive System & Artificial Intelligence(AI) Systems Production and CAGR (%) Comparison by Type
Table Profile of On-Premise
Table Profile of Cloud-based
Table Cognitive System & Artificial Intelligence(AI) Systems Consumption (Sales) Comparison by Application (2014-2026)
Table Profile of Voice Processing
Table Profile of Text Processing
Table Profile of Image Processing
Figure Global Cognitive System & Artificial Intelligence(AI) Systems Market Size (Value) and CAGR (%) (2014-2026)
Figure United States Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Europe Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Germany Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure UK Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure France Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Italy Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Spain Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Russia Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Poland Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure China Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Japan Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure India Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Southeast Asia Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Malaysia Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Singapore Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Philippines Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Indonesia Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Thailand Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Vietnam Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Central and South America Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Brazil Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Mexico Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Colombia Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Middle East and Africa Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Saudi Arabia Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure United Arab Emirates Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Turkey Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Egypt Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure South Africa Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Nigeria Cognitive System & Artificial Intelligence(AI) Systems Revenue and Growth Rate (2014-2026)
Figure Global Cognitive System & Artificial Intelligence(AI) Systems Production Status and Outlook (2014-2026)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Production by Player (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Production Share by Player (2014-2019)
Figure Global Cognitive System & Artificial Intelligence(AI) Systems Production Share by Player in 2018
Table Cognitive System & Artificial Intelligence(AI) Systems Revenue by Player (2014-2019)
Table Cognitive System & Artificial Intelligence(AI) Systems Revenue Market Share by Player (2014-2019)
Table Cognitive System & Artificial Intelligence(AI) Systems Price by Player (2014-2019)
Table Cognitive System & Artificial Intelligence(AI) Systems Manufacturing Base Distribution and Sales Area by Player
Table Cognitive System & Artificial Intelligence(AI) Systems Product Type by Player
Table Mergers & Acquisitions, Expansion Plans
Table Brainasoft Profile
Table Brainasoft Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Brighterion Profile
Table Brighterion Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Astute Solutions Profile
Table Astute Solutions Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table KITT.AI Profile
Table KITT.AI Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table IFlyTek Profile
Table IFlyTek Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Google Profile
Table Google Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Megvii Technology Profile
Table Megvii Technology Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table NanoRep(LogMeIn) Profile
Table NanoRep(LogMeIn) Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table IDEAL.com Profile
Table IDEAL.com Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Intel Profile
Table Intel Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Salesforce Profile
Table Salesforce Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Albert Technologies Profile
Table Albert Technologies Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Microsoft Profile
Table Microsoft Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Ada Support Profile
Table Ada Support Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Ipsoft Profile
Table Ipsoft Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table SAP Profile
Table SAP Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Yseop Profile
Table Yseop Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table IBM Profile
Table IBM Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Wipro Profile
Table Wipro Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table H2O.ai Profile
Table H2O.ai Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Baidu Profile
Table Baidu Cognitive System & Artificial Intelligence(AI) Systems Production, Revenue, Price and Gross Margin (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Production by Type (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Production Market Share by Type (2014-2019)
Figure Global Cognitive System & Artificial Intelligence(AI) Systems Production Market Share by Type in 2018
Table Global Cognitive System & Artificial Intelligence(AI) Systems Revenue by Type (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Revenue Market Share by Type (2014-2019)
Figure Global Cognitive System & Artificial Intelligence(AI) Systems Revenue Market Share by Type in 2018
Table Cognitive System & Artificial Intelligence(AI) Systems Price by Type (2014-2019)
Figure Global Cognitive System & Artificial Intelligence(AI) Systems Production Growth Rate of On-Premise (2014-2019)
Figure Global Cognitive System & Artificial Intelligence(AI) Systems Production Growth Rate of Cloud-based (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Consumption by Application (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Consumption Market Share by Application (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Consumption of Voice Processing (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Consumption of Text Processing (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Consumption of Image Processing (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Consumption by Region (2014-2019)
Table Global Cognitive System & Artificial Intelligence(AI) Systems Consumption Market Share by Region (2014-2019)
Table United States Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import (2014-2019)
Table Europe Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import (2014-2019)
Table China Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import (2014-2019)
Table Japan Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import (2014-2019)
Table India Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import (2014-2019)
Table Southeast Asia Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import (2014-2019)
Table Central and South America Cognitive System & Artificial Intelligence(AI) Systems Production, Consumption, Export, Import (2014-2019)
continued

About HongChun Research: HongChun Research's main aim is to assist our clients by giving them a detailed perspective on current market trends and to build long-lasting connections with our clientele. Our studies are designed to provide solid quantitative facts combined with strategic industrial insights acquired from proprietary sources and an in-house model.

Contact Details: Jennifer Gray, Manager Global Sales, +852 8170 0792, [emailprotected]

Original post:

Analyzing Impacts of Covid-19 on Cognitive System and Artificial Intelligence (AI) Systems Market Effects, Aftermath and Forecast To 2026 - Cole of...

Face it, AI is better at data-analysis than humans – TNW

It's time we stopped pretending that we're computers and let the machines do their jobs. Anodot, a real-time analytics company, is using advanced machine-learning algorithms to overcome the limitations that humans bring to data analysis.

AI can chew up all your data and spit out more answers than you've got questions for, and the e-commerce businesses that don't integrate machine learning into data analysis will lose money.

We've all been there before: you've just launched a brand new product after spending millions on the development cycle and, for no discernible reason, your online marketplace crashes in a major market and you don't find out for several hours. All those sales: lost, like an unsaved file.

Okay, maybe we haven't all been there, but we've definitely been on the other end. Error messages on checkouts, product listings that lead nowhere, and, worst of all, shortages. If we don't get what we want when we want it, we'll get it somewhere else. Anomalies in the market, and the ability to respond to them, can be the difference between profits and shutters for any business.

Data analysis isn't a popular water-cooler topic anywhere, presumably even at companies that specialize in it. Rebecca Herson, Vice President of Marketing for Anodot, explains the need for AI in the field:

There's just so much data being generated, there's no way for a human to go through it all. Sometimes, when we analyze historical data for businesses we're introducing Anodot to, they discern things they never knew were happening. Obviously businesses know if servers go down, but if you have a funnel leaking in a few different places, it can be difficult to find all the problems.

The concern isn't just lost sales; there's also product-supply disruption and customer satisfaction to worry about. In numerous case studies, Anodot found that an estimated 80 percent of the anomalies its machine-learning software uncovered were negative factors, as opposed to positive opportunities. These companies were losing money because they weren't aware of specific problems.

We've seen data-analysis software before, but Anodot's use of machine learning is an entirely different application. Anodot is using unsupervised AI, which draws on deep learning, to autonomously find new ways to categorize and understand data.

With customers like Microsoft, Google Waze, and Comcast, it would appear as though this software is prohibitively complex and designed for the tech elite, but Herson explains:

This is something that, while data scientist is the new sexy profession, you won't need one to use this. It's got the data scientist baked in. If you have one, they can leverage this to provide immediate results. An e-commerce strategist can leverage the data and provide real-time analysis. This isn't something that requires a dedicated staff; your existing analysts can use this.

While we ponder the future of AI, companies like Anodot are applying it in all the right ways (see: non-lethal and money-saving). Automating data analysis isn't quite as thrilling as an AI that can write speeches for the President, but it's far more useful.


See the article here:

Face it, AI is better at data-analysis than humans - TNW

Companies Work on AI-Based Sensors, Weapons for Use in Image Processing, Target Identification – ExecutiveBiz


Defense contractors are working on artificial intelligence-powered sensors and other partially autonomous machines that could help the U.S. Army process images and identify targets, Breaking Defense reported Thursday.

Vern Boyle, vice president for advanced capabilities at Northrop Grumman, said companies are developing sensors that can identify features and share data with other systems without requiring a lot of command and control back into physical systems.

An example of a weapon system that can see, share and record data is the Ripsaw robotic tank demonstrator from Textron Systems, Howe & Howe and FLIR Systems. This combat vehicle features a Skyraider quadcopter drone and a ground robot.

The quality of image processing by sensors and other machines relies on the quality of the collected data, and industry executives said companies should train algorithms on "weird" images and data to ensure their accuracy in target identification.

"Should we bias training data towards the weird stuff?" said Patrick Biltgen of Perspecta. "If there's a war, we're almost certain to see weird things we've never seen before."

See the original post:

Companies Work on AI-Based Sensors, Weapons for Use in Image Processing, Target Identification - ExecutiveBiz

Othot develops AI software to help universities predict whether students will succeed – NEXTpittsburgh

By Jamie Schuman

A Green Tree startup is using artificial intelligence to help students succeed in college.

Othot's software is designed to improve the admissions process, increase student retention and graduation rates, and boost job placement. Its newest product uses data and machine learning to identify students who are at risk of dropping out, the reasons why, and what schools can do to help them.

"It's all about optimizing the student's situation," says Andy Hannah, co-founder and CEO of Othot, which specializes in artificial intelligence and analytic solutions for colleges and universities. "We want that student to graduate, and with as low a debt as possible."

The Othot platform is cloud-based software that can be accessed 24 hours a day from a web browser. (Othot derives its name from combining "original" and "thought.")

It uses a large variety of data points, such as a student's high school and college grades, financial circumstances, and co-curricular activities, as well as census numbers and other information, to understand people "at a very deep level," Hannah says.

Andy Hannah, co-founder and CEO of Othot

The software can predict whether a student will struggle academically and suggest ways to help them succeed. Administrators may get suggestions on increasing a student's financial aid or on when to start academic counseling.

The University of Pittsburgh uses Othot's AI-driven recommendations to help students choose study abroad programs, which are important for student retention, says Stephen Wisniewski, Pitt's vice provost for data and information.

"We know that an engaged student is more likely to persist, and at a much higher rate," Wisniewski says. "Study abroad programs at Pitt are a centerpiece of that measure of engagement."

Othot has worked for the past year and a half to develop its student retention tool, and Pitt is one of a handful of universities already using it. The company is now ready to roll out the product more broadly, Hannah says.

Hannah says the tool is useful because as college costs increase, administrators have a duty to make sure that students succeed and graduate. And as enrollments are projected to decrease in coming years due to lower birth rates, universities will compete against each other to recruit and retain freshmen, he says.

Hannah says the new tool is incredibly accurate because it uses a nonlinear model, which looks at relationships among thousands of variables, whereas other products may focus predominantly on grades.

"We are in a different era related to the use of technology and the understanding of the individual," Hannah says. "It's just refreshing to me to see that power being used to help students reach their desired endpoints, which is graduating and getting great jobs with debt that they can manage."


More:

Othot develops AI software to help universities predict whether students will succeed - NEXTpittsburgh

The Impact of Artificial Intelligence on Workspaces – Forbes

Intelligent and intelligible office buildings

It is a truth universally acknowledged that artificial intelligence will change everything. In the next few decades, the world will become intelligible, and in many ways, intelligent. But insiders suggest that the world of big office real estate will get there more slowly - at least in the world's major cities.

The real estate industry in London, New York, Hong Kong and other world cities moves in cycles of 10 or 15 years. This is the period of the lease. After a tense renewal negotiation, and perhaps a big row, landlord and tenant are generally happy to leave each other alone until the next time. This does not encourage innovation, or investment in new services in between the renewals. There are alternatives to this arrangement. In Scandinavia, for instance, lease durations are shorter - often three years or so. This encourages a more collegiate working relationship, where landlord and tenant are more like business partners.

Another part of the pathology of major city real estate is the landmark building. With the possible exception of planners, everyone likes grand buildings: certainly, architects, developers, and the property managers and CEOs of big companies do. A mutual appreciation society is formed, which is less concerned about the impact on a business than about appearing in the right magazines, and winning awards.

Outside the big cities, priorities are different. To attract a major tenant to Dixons' old headquarters in Hemel Hempstead, for instance, the landlord will need to seduce with pragmatism rather than glamour.

Tim Oldman is the founder and CEO of Leesman, a firm which helps clients understand how to manage their workspaces in the best interests of their staff and their businesses. He says there is plenty of opportunity for AI to enhance real estate, and much of the impetus for it to happen will come from the employees who work in office buildings rather than the developers who design and build them. Employees, the actual users of buildings, will be welcoming AI into many corners of their lives in the coming years and decades, often without realising it. They will expect the same convenience and efficiency at work that they experience at home and when travelling. They will demand more from their employers and their landlords.

Christina Wood is responsible for two of Emap's conferences on the office sector: Property Week's annual flagship event WorkSpace, and AV Magazine's new annual event AVWorks, which explores the changing role of AV in the workspace. She says that workspaces are undergoing an evolution that increasingly looks like a revolution, powered by technology innovation and driven by workforce demands for flexibility, connectivity, safety and style.

Buildings should be smart, and increasingly they will be. Smart buildings will be a major component of smart cities, a phenomenon which we have been hearing about since the end of the last century, and which will finally start to become a reality in the coming decade, enabled in part by 5G.

Buildings should know what load they are handling at any given time. They should provide the right amount of heat and light: not too little and not too much. The air conditioning should not go off at 7pm when an after-hours conference is in full flow. They should monitor noise levels, and let occupants know where the quiet places are, if they ask. They should manage the movement of water and waste intelligently. All this and much more is possible, given enough sensors, and a sensible approach to the use of data.

Imagine we are colleagues who usually work in different buildings. Today we are both in the head office, and our calendars show that we have scheduled a meeting. An intelligent building could suggest workspaces near to each other. Tim Oldman calls this "assisted serendipity".

Generation Z is coming into the workplace. They are not naive about data and the potential for its mis-use, but they are more comfortable with sharing it in return for a defined benefit. Older generations are somewhat less trusting. We expect our taxi firm to know when we will be exiting the building, and to have a car waiting. But we are suspicious if the building wants to know our movements. Employees in Asian countries show more trust than those in France and Germany, say, with the US and the UK in between.

Robotic process automation, or RPA, can make mundane office interactions smoother and more efficient. But we will want it to be smart. IT helpdesks should not be rewarded for closing a ticket quickly, but for solving your problem in a way which means you won't come back with the same problem a week later, and neither will anyone else.

That said, spreadsheet-driven efficiency is not always the best solution. Face-to-face genius bar-style helpdesks routinely deliver twice the level of customer satisfaction as the same service delivered over the phone, even when they use exactly the same people, the same technology, and the same infrastructure. There is a time and place for machines, and a time and a place for humans.

Rolls-Royce is said to make more money from predictive maintenance plans than it makes by selling engines. Sensors in its engines relay huge volumes of real-time data about each engine component to headquarters in Derby. If a fault is developing, the company can often have the relevant spare part waiting at the next airport before the pilot even knows there's a problem. One day, buildings will operate this way too.

The technology to enable these services is not cheap today, and an investment bank or a top management consultancy can offer their employees features which will not be available for years to workers in the garment industry in the developing world. There will be digital divides, but the divisions will be constantly changing, with laggards catching up, and sometimes overtaking, as they leapfrog legacy infrastructures. China is a world leader in smartphone payment apps partly because its banking infrastructure was so poor.

Covid will bring new pressure to bear on developers and landlords. Employees will demand biosecurity measures such as the provision of air which is fresh and filtered, not re-circulated. They may want to know how many people are in which parts of the building, to help them maintain physical distancing. This means more sensors, and more data.

The great unplanned experiment in working from home which we are all engaged in thanks to Covid-19 will probably result in a blended approach to office life in the future. Working from home suits some people very well, reducing commuting time and enabling them to spend more time with their families. But others miss the decompression that commuting allows, and many of us don't have good working environments at home. In the winter, many homes are draughty, and the cost of heating them all day long can be considerable.

Tim Oldman thinks the net impact on demand for office space will probably be a slight reduction overall, and a new mix of locations. There are indications that companies will provide satellite offices closer to where their people live, perhaps sharing space with workers from other firms. This is the same principle as the co-working facilities provided by WeWork and Regus, but whereas those companies have buildings in city centres, there will be a new demand for space on local High Streets.

Retail banks have spotted this as an opportunity, a way of using the branch network which they have been shrinking as people shift to online banking. Old bank branches can be transformed into safe and comfortable satellite offices, and restore some life to tired suburban streets. Companies will have to up their game to co-ordinate this more flexible approach, and landlords will need to help them. They will need to collect and analyse information about where their people are each day, and develop and refine algorithms to predict where they will be tomorrow.

Some employers will face a crisis of trust as we emerge from the pandemic. Millions of us have been trusted to work from home, and to the surprise of more than a few senior managers, it has mostly worked well. Snatching back the laptop and demanding that people come straight back to the office is not a good idea. Companies will adopt different approaches, and some will be more successful than others. Facebook has told its staff they can work from wherever they want, but their salary will be adjusted downwards if they leave the Bay Area. Google has simply offered every employee $1,000 to make their home offices more effective.

The way we work is being changed by lessons learned during the pandemic, and by the deployment of AI throughout the economy. Builders and owners of large office buildings must not get left behind.

Read the rest here:

The Impact of Artificial Intelligence on Workspaces - Forbes

What is Artificial Intelligence (AI)? | IBM

Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

A number of definitions of artificial intelligence (AI) have surfaced over the last few decades. John McCarthy offers the following definition in this 2004 paper (PDF, 106 KB) (link resides outside IBM): "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

However, decades before this definition, the artificial intelligence conversation began with Alan Turing's 1950 work "Computing Machinery and Intelligence" (PDF, 89.8 KB) (link resides outside of IBM). In this paper, Turing, often referred to as the "father of computer science", asks the following question: "Can machines think?" From there, he offers a test, now famously known as the "Turing Test", where a human interrogator would try to distinguish between a computer and human text response. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI.

One of the leading AI textbooks is Artificial Intelligence: A Modern Approach (link resides outside IBM) (PDF, 20.9 MB), by Stuart Russell and Peter Norvig. In the book, they delve into four potential goals or definitions of AI, which differentiate computer systems as follows:

Human approach:

Systems that think like humans
Systems that act like humans

Ideal approach:

Systems that think rationally
Systems that act rationally

Alan Turing's definition would have fallen under the category of systems that act like humans.

In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. Expert systems, an early successful application of AI, aimed to copy a human's decision-making process. In the early days, it was time-consuming to extract and codify the human's knowledge.

AI today includes the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines consist of AI algorithms that typically make predictions or classifications based on input data. Machine learning has improved the quality of some expert systems and made it easier to create them.

Today, AI plays an often invisible role in everyday life, powering search engines, product recommendations, and speech recognition systems.

There is a lot of hype about AI development, which is to be expected of any emerging technology. As noted in Gartner's hype cycle (link resides outside IBM), product innovations like self-driving cars and personal assistants follow a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation's relevance and role in a market or domain. As Lex Fridman notes (01:08:15) (link resides outside IBM) in his 2019 MIT lecture, we are at the peak of inflated expectations, approaching the trough of disillusionment.

As conversations continue around AI ethics, we can see the initial glimpses of the trough of disillusionment. Read more about where IBM stands on AI ethics here.

Weak AI, also called Narrow AI or Artificial Narrow Intelligence (ANI), is AI trained to perform specific tasks. Weak AI drives most of the AI that surrounds us today. "Narrow" might be a more accurate descriptor for this type of AI, as it is anything but weak; it enables some powerful applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles.

Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial General Intelligence (AGI), or general AI, is a theoretical form of AI where a machine would have an intelligence equal to humans; it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI), also known as superintelligence, would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical, with no practical examples in use today, AI researchers are exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the rogue computer assistant in 2001: A Space Odyssey.

Since deep learning and machine learning tend to be used interchangeably, it's worth noting the nuances between the two. As mentioned above, both deep learning and machine learning are sub-fields of artificial intelligence, and deep learning is actually a sub-field of machine learning.

The way in which deep learning and machine learning differ is in how each algorithm learns. "Deep" machine learning can use labeled datasets, also known as supervised learning, to inform its algorithm, but it doesn't necessarily require a labeled dataset. Deep learning can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the set of features which distinguish different categories of data from one another. This eliminates some of the human intervention required and enables the use of larger data sets. You can think of deep learning as "scalable machine learning", as Lex Fridman notes in the same MIT lecture mentioned above. Classical, or "non-deep", machine learning is more dependent on human intervention to learn: human experts determine the set of features needed to understand the differences between data inputs, usually requiring more structured data to learn.

Deep learning (like some machine learning) uses neural networks. The "deep" in a deep learning algorithm refers to a neural network with more than three layers, including the input and output layers. This is generally represented as a layered network diagram.
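To make that depth criterion concrete, here is a minimal feed-forward pass in NumPy. The layer widths and random weights are invented for illustration; a real network would learn its weights from data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer widths: input, three hidden layers, output. Counting the input and
# output layers, this network has five layers, so it clears the
# "more than three layers" rule of thumb for deep learning.
sizes = [4, 8, 8, 8, 2]
weights = [rng.standard_normal((a, b)) for a, b in zip(sizes, sizes[1:])]

def forward(x):
    """Plain feed-forward pass with ReLU activations between layers."""
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)  # ReLU non-linearity
    return x @ weights[-1]          # linear output layer

out = forward(rng.standard_normal(4))
print(len(sizes), out.shape)
```

The sketch only illustrates what "depth" refers to; training (adjusting the weights to fit labeled examples) is the part that makes such a network useful.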

The rise of deep learning has been one of the most significant breakthroughs in AI in recent years, because it has reduced the manual effort involved in building AI systems. Deep learning was in part enabled by big data and cloud architectures, making it possible to access huge amounts of data and processing power for training AI solutions.

There are numerous, real-world applications of AI systems today. Below are some of the most common examples:

Computer vision: This AI technology enables computers to derive meaningful information from digital images, videos, and other visual inputs, and then take the appropriate action. Powered by convolutional neural networks, computer vision has applications in photo tagging on social media, radiology imaging in healthcare, and self-driving cars within the automotive industry.
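The convolution operation at the heart of those convolutional neural networks is easy to sketch. Below, a fixed edge-detection kernel slides over a toy 5x5 "image"; in a CNN the kernel values would be learned from data, and the image here is invented:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid (no-padding) 2-D convolution: slide the kernel over the image."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.zeros((5, 5))
image[:, 2:] = 1.0                      # a vertical edge down the middle
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)  # classic Sobel edge filter

response = convolve2d(image, sobel_x)
print(response.shape)  # (3, 3); large values mark where the edge sits
```

A CNN stacks many such filters in layers and learns which kernels to use, rather than hand-picking them as done here.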

Recommendation engines: Using past consumption behavior data, AI algorithms can help to discover data trends that can be used to develop more effective cross-selling strategies. This approach is used by online retailers to make relevant product recommendations to customers during the checkout process.
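One simple way to implement such a cross-selling recommender is item co-occurrence over past baskets. This is only a sketch with made-up basket data, not any particular retailer's system:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: each basket is a set of items bought together.
baskets = [
    {"laptop", "mouse", "sleeve"},
    {"laptop", "mouse"},
    {"laptop", "dock"},
    {"phone", "case"},
]

# Count how often each ordered pair of items appears in the same basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, k=2):
    """Top-k items most often co-purchased with `item`."""
    scores = Counter({b: n for (a, b), n in co_counts.items() if a == item})
    return [other for other, _ in scores.most_common(k)]

print(recommend("laptop"))
```

Production systems add recency weighting, popularity normalization, and learned embeddings, but the co-occurrence signal above is the core of the "bought together" trend the paragraph describes.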

Automated stock trading: Designed to optimize stock portfolios, AI-driven high-frequency trading platforms make thousands or even millions of trades per day without human intervention.

Fraud detection: Banks and other financial institutions can use machine learning to spot suspicious transactions. Supervised learning can train a model using information about known fraudulent transactions. Anomaly detection can identify transactions that look atypical and deserve further investigation.
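As a minimal illustration of the anomaly-detection half of that approach, here is a z-score check against a customer's (invented) transaction history; real fraud systems use far richer features, and the threshold here is only illustrative:

```python
import statistics

# Hypothetical history of a customer's recent transaction amounts (dollars).
history = [23.0, 19.5, 30.0, 25.0, 21.0, 27.5, 24.0, 22.0]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_suspicious(amount, z_threshold=3.0):
    """Flag amounts more than z_threshold standard deviations from the mean."""
    return abs(amount - mean) / stdev > z_threshold

print(is_suspicious(24.0), is_suspicious(950.0))  # False True
```

A typical purchase passes quietly; a wildly atypical one is flagged for further investigation, which is exactly the division of labor between routine processing and human review described above.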

Since the advent of electronic computing, some important events and milestones in the evolution of artificial intelligence include the following:

While Artificial General Intelligence remains a long way off, more and more businesses will adopt AI in the short term to solve specific challenges. Gartner predicts (link resides outside IBM) that 50% of enterprises will have platforms to operationalize AI by 2025 (a sharp increase from 10% in 2020).

Knowledge graphs are an emerging technology within AI. They can encapsulate associations between pieces of information and drive upsell strategies, recommendation engines, and personalized medicine. Natural language processing (NLP) applications are also expected to increase in sophistication, enabling more intuitive interactions between humans and machines.

IBM has been a leader in advancing AI-driven technologies for enterprises and has pioneered the future of machine learning systems for multiple industries. Based on decades of AI research, years of experience working with organizations of all sizes, and on learnings from over 30,000 IBM Watson engagements, IBM has developed the AI Ladder for successful artificial intelligence deployments:

IBM Watson gives enterprises the AI tools they need to transform their business systems and workflows, while significantly improving automation and efficiency. For more information on how IBM can help you complete your AI journey, explore the IBM portfolio of managed services and solutions.

Sign up for an IBMid and create your IBM Cloud account.

Go here to read the rest:

What is Artificial Intelligence (AI)? | IBM

AI tool suggests ways to improve your outfit – Futurity: Research News


You are free to share this article under the Attribution 4.0 International license.

A new artificial intelligence system can look at a photo of an outfit and suggest helpful tips to make it more fashionable.

Suggestions may include tweaks such as selecting a sleeveless top or a longer jacket.

"We thought of it like a friend giving you feedback," says Kristen Grauman, a professor of computer science at the University of Texas at Austin, whose previous research has largely focused on visual recognition for artificial intelligence.

"It's also motivated by a practical idea: that we can work with a given outfit to make small changes so it's just a bit better."

The tool, named Fashion++, uses visual recognition systems to analyze the color, pattern, texture, and shape of garments in an image. It considers where edits will have the most impact. It then offers several alternative outfits to the user.

Researchers trained Fashion++ using more than 10,000 images of outfits shared publicly on online sites for fashion enthusiasts. Finding images of fashionable outfits was easy, says graduate student Kimberly Hsiao. Finding unfashionable images proved challenging. So, she came up with a workaround. She mixed images of fashionable outfits to create less-fashionable examples and trained the system on what not to wear.

As fashion styles evolve, the AI can continue to learn by giving it new images, which are abundant on the internet, Hsiao says.

As in any AI system, bias can creep in through the data sets used to train Fashion++. The researchers point out that vintage looks are harder to recognize as stylish because the training images came from the internet, which has been in wide use only since the 1990s. Additionally, because the users submitting images were mostly from North America, styles from other parts of the world don't show up as much.

Another challenge is that many images of fashionable clothes appear on models, but bodies come in many sizes and shapes, affecting fashion choices. Next up, Grauman and Hsiao are working toward letting the AI learn what flatters different body shapes so its recommendations can be more tailored.

"We are examining the interaction between how a person's body is shaped and how the clothing would suit them. We're excited to broaden the applicability to people of all body sizes and shapes by doing this research," Grauman says.

Grauman and Hsiao will present a paper on their approach at next week's International Conference on Computer Vision in Seoul, South Korea.

Additional researchers from Cornell Tech, Georgia Tech, and Facebook AI Research contributed to the work.

Source: UT Austin

Original Study

See the original post:

AI tool suggests ways to improve your outfit - Futurity: Research News

For AI, data are harder to come by than you think – The Economist

Jun 11th 2020

AMAZON'S GO STORES are impressive places. The cashier-less shops, which first opened in Seattle in 2018, allow app-wielding customers to pick up items and simply walk out with them. The system uses many sensors, but the bulk of the magic is performed by cameras connected to an AI system that tracks items as they are taken from shelves. Once the shoppers leave with their goods, the bill is calculated and they are automatically charged.

Doing that in a crowded shop is not easy. The system must handle crowded stores, in which people disappear from view behind other customers. It must recognise individual customers as well as friends or family groups (if a child puts an item into a family basket, the system must realise that it should charge the parents). And it must do all that in real-time, and to a high degree of accuracy.

Teaching the machines required showing them a lot of training data in the form of videos of customers browsing shelves, picking up items, putting them back and the like. For standardised tasks like image recognition, AI developers can use public training datasets, each containing thousands of pictures. But there was no such training set featuring people browsing in shops.

Some data could be generated by Amazon's own staff, who were allowed into test versions of the shops. But that approach took the firm only so far. There are many ways in which a human might take a product from a shelf and then decide to choose it, put it back immediately or return it later. To work in the real world, the system would have to cover as many of those as possible.

In theory, the world is awash with data, the lifeblood of modern AI. IDC, a market-research firm, reckons the world generated 33 zettabytes of data in 2018, enough to fill seven trillion DVDs. But Kathleen Walch of Cognilytica, an AI-focused consultancy, says that data issues are nevertheless one of the most common sticking points in any AI project. As in Amazon's case, the required data may not exist at all. Or they might be locked up in the vaults of a competitor. Even when relevant data can be dug up, they might not be suitable for feeding to computers.

Data-wrangling of various sorts takes up about 80% of the time consumed in a typical AI project, says Cognilytica. Training a machine-learning system requires large numbers of carefully labelled examples, and those labels usually have to be applied by humans. Big tech firms often do the work internally. Companies that lack the required resources or expertise can take advantage of a growing outsourcing industry to do it for them. A Chinese firm called MBH, for instance, employs more than 300,000 people to label endless pictures of faces, street scenes or medical scans so that they can be processed by machines. Mechanical Turk, another subdivision of Amazon, connects firms with an army of casual human workers who are paid a piece rate to perform repetitive tasks.

Cognilytica reckons that the third-party data preparation market was worth more than $1.5bn in 2019 and could grow to $3.5bn by 2024. The data-labelling business is similar, with firms spending at least $1.7bn in 2019, a number that could reach $4.1bn by 2024. Mastery of a topic is not necessary, says Ron Schmelzer, also of Cognilytica. In medical diagnostics, for instance, amateur data-labellers can be trained to become almost as good as doctors at recognising things like fractures and tumours. But some amount of what AI researchers call domain expertise is vital.
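The growth figures quoted above imply compound annual growth rates that are easy to verify:

```python
# Implied compound annual growth rate (CAGR) for the markets quoted above:
# data preparation, $1.5bn (2019) to $3.5bn (2024); data labelling,
# $1.7bn (2019) to $4.1bn (2024).
def cagr(start, end, years):
    """Constant annual growth rate that takes `start` to `end` over `years`."""
    return (end / start) ** (1 / years) - 1

prep = cagr(1.5, 3.5, 5)
label = cagr(1.7, 4.1, 5)
print(f"{prep:.1%} {label:.1%}")  # roughly 18-19% a year for both markets
```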

The data themselves can contain traps. Machine-learning systems correlate inputs with outputs, but they do it blindly, with no understanding of broader context. In 1968 Donald Knuth, a programming guru, warned that computers "do exactly what they are told, no more and no less". Machine learning is full of examples of Mr Knuth's dictum, in which machines have followed the letter of the law precisely, while being oblivious to its spirit.

In 2018 researchers at Mount Sinai, a hospital network in New York, found that an AI system trained to spot pneumonia on chest x-rays became markedly less competent when used in hospitals other than those it had been trained in. The researchers discovered that the machine had been able to work out which hospital a scan had come from. (One way was to analyse small metal tokens placed in the corner of scans, which differ between hospitals.)

Since one hospital in its training set had a baseline rate of pneumonia far higher than the others, that information by itself was enough to boost the system's accuracy substantially. The researchers dubbed that clever wheeze "cheating", on the grounds that it failed when the system was presented with data from hospitals it did not know.

Bias is another source of problems. Last year America's National Institute of Standards and Technology tested nearly 200 facial-recognition algorithms and found that many were significantly less accurate at identifying black faces than white ones. The problem may reflect a preponderance of white faces in their training data. A study from IBM, published last year, found that over 80% of faces in three widely used training sets had light skin.

Such deficiencies are, at least in theory, straightforward to fix (IBM offered a more representative dataset for anyone to use). Other sources of bias can be trickier to remove. In 2017 Amazon abandoned a recruitment project designed to hunt through CVs to identify suitable candidates when the system was found to be favouring male applicants. The post mortem revealed a circular, self-reinforcing problem. The system had been trained on the CVs of previous successful applicants to the firm. But since the tech workforce is already mostly male, a system trained on historical data will latch onto maleness as a strong predictor of suitability.

Humans can try to forbid such inferences, says Fabrice Ciais, who runs PwC's machine-learning team in Britain (and Amazon tried to do exactly that). In many cases they are required to: in most rich countries employers cannot hire on the basis of factors such as sex, age or race. But algorithms can outsmart their human masters by using proxy variables to reconstruct the forbidden information, says Mr Ciais. Everything from hobbies to previous jobs to area codes in telephone numbers could contain hints that an applicant is likely to be female, or young, or from an ethnic minority.
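The proxy-variable problem Mr Ciais describes can be sketched in a few lines. The data and the hobby-to-sex link below are entirely made up for illustration; the point is that a model trained without the protected column can still produce scores that track it.

```python
# Hypothetical toy data: even after the "sex" column is withheld from
# training, a proxy feature (here, a hobby) can reconstruct its effect.
from collections import defaultdict

applicants = [
    # (hobby, sex, hired_in_past) -- invented example rows
    ("rugby",   "male",   1),
    ("rugby",   "male",   1),
    ("chess",   "male",   1),
    ("netball", "female", 0),
    ("netball", "female", 0),
    ("chess",   "female", 0),
]

# "Train" on hobby -> past hiring outcome only; sex is never seen.
outcomes = defaultdict(list)
for hobby, _sex, hired in applicants:
    outcomes[hobby].append(hired)
model = {h: sum(v) / len(v) for h, v in outcomes.items()}

# Yet the model's scores differ sharply by sex, via the proxy.
scores = {"male": [], "female": []}
for hobby, sex, _hired in applicants:
    scores[sex].append(model[hobby])

avg = {sex: sum(s) / len(s) for sex, s in scores.items()}
print(avg)  # male applicants score far higher than female ones
```

Removing the forbidden column is therefore not enough; the bias re-enters through whatever correlates with it in the historical data.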

If the difficulties of real-world data are too daunting, one option is to make up some data of your own. That is what Amazon did to fine-tune its Go shops. The company used graphics software to create virtual shoppers. Those ersatz humans were used to train the machines on many hard or unusual situations that had not arisen in the real training data, but might when the system was deployed in the real world.

Amazon is not alone. Self-driving car firms do a lot of training in high-fidelity simulations of reality, where no real damage can be done when something goes wrong. A paper in 2018 from Nvidia, a chipmaker, described a method for quickly creating synthetic training data for self-driving cars, and concluded that the resulting algorithms worked better than those trained on real data alone.

Privacy is another attraction of synthetic data. Firms hoping to use AI in medicine or finance must contend with laws such as America's Health Insurance Portability and Accountability Act, or the European Union's General Data Protection Regulation. Properly anonymising data can be difficult, a problem that systems trained on made-up people do not need to bother about.

The trick, says Euan Cameron, one of Mr Ciais's colleagues, is ensuring simulations are close enough to reality that their lessons carry over. For some well-bounded problems such as fraud detection or credit scoring, that is straightforward. Synthetic data can be created by adding statistical noise to the real kind. Although individual transactions are therefore fictitious, it is possible to guarantee that they will have, collectively, the same statistical characteristics as the real data from which they were derived. But the more complicated a problem becomes, the harder it is to ensure that lessons from virtual data will translate smoothly to the real world.
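The noise-based approach can be sketched as follows. The transaction amounts and the noise scale are invented for illustration; the article does not specify either.

```python
# Sketch: jitter each real transaction amount with zero-mean Gaussian
# noise. Individual records become fictitious, but aggregate statistics
# (here, the mean) stay close to those of the real data.
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

real_amounts = [12.5, 40.0, 7.3, 99.9, 55.2, 23.1, 61.8, 18.4]
noise_scale = 2.0  # assumed tuning knob, not from the article

synthetic = [x + random.gauss(0, noise_scale) for x in real_amounts]

print(statistics.mean(real_amounts), statistics.mean(synthetic))
```

Because the noise has zero mean, the synthetic mean converges on the real one as the dataset grows, while no individual synthetic record matches a real transaction.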

The hope is that all this data-related faff will be a one-off, and that, once trained, a machine-learning model will repay the effort over millions of automated decisions. Amazon has opened 26 Go stores, and has offered to license the technology to other retailers. But even here there are reasons for caution. Many AI models are subject to drift, in which changes in how the world works mean their decisions become less accurate over time, says Svetlana Sicular of Gartner, a research firm. Customer behaviour changes, language evolves, regulators change what companies can do.

Sometimes, drift happens overnight. "Buying one-way airline tickets was a good predictor of fraud [in automated detection models]," says Ms Sicular. "And then with the covid-19 lockdowns, suddenly lots of innocent people were doing it." Some facial-recognition systems, used to seeing uncovered human faces, are struggling now that masks have become the norm. Automated logistics systems have needed help from humans to deal with the sudden demand for toilet roll, flour and other staples. The world's changeability means more training, which means providing the machines with yet more data, in a never-ending cycle of re-training. "AI is not an install-and-forget system," warns Mr Cameron.
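One common way to catch this kind of drift is to compare a model's recent accuracy against its training-time baseline and trigger re-training when the gap grows too large. The function and thresholds below are illustrative assumptions, not something described in the article.

```python
# Minimal drift check (hypothetical thresholds): flag re-training when
# rolling accuracy falls well below the accuracy seen at training time.

def needs_retraining(recent_outcomes, baseline_accuracy, tolerance=0.10):
    """recent_outcomes: list of 1 (correct) / 0 (wrong) recent decisions."""
    if not recent_outcomes:
        return False
    rolling = sum(recent_outcomes) / len(recent_outcomes)
    return rolling < baseline_accuracy - tolerance

# Before lockdowns: the one-way-ticket rule still mostly works.
print(needs_retraining([1] * 9 + [0], baseline_accuracy=0.95))
# After: the old rule misfires on innocent travellers.
print(needs_retraining([1] * 6 + [0] * 4, baseline_accuracy=0.95))
```

In practice the monitored window, the baseline and the tolerance all have to be chosen per application, which is part of why AI systems need ongoing attention rather than one-off deployment.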

This article appeared in the Technology Quarterly section of the print edition under the headline "Not so big"
