Is the Brain Augmentation Hype Justified? Here’s an Expert’s Take – Singularity Hub

Despite bold predictions from several tech firms about the future of neural interfaces, the science of brain augmentation is still in the early days. So, what do academics think of all the hype coming out of Silicon Valley?

Mikhail Lebedev, a neuroscientist who works on brain-machine interfaces (BMI) at Duke University, recently won a $100,000 prize from the open-access academic publisher Frontiers for a collection of papers on brain augmentation, curated over the last four years.

The prize money is designed to help him and fellow editors Ioan Opris, a neuroscientist at the University of Miami, and Manuel Casanova, a medical doctor at the University of South Carolina, set up an international conference on the topic next year. So, I took the opportunity to speak to Lebedev about the state of the field.

In the next 10 years we will see realistic visual prostheses of different kinds and a lot of technologies for rehabilitation of stroke and spinal cord injury. What's described in these hype articles, like people typing from the brain or millions of electrodes implanted in the brain, will be realized, but maybe in 20 years.

I may be wrong, because once new technologies start to come to this field, it can develop very fast. Ten years ago it was fine to insert a half-millimeter-thick electrode into the brain; now there are nanoscale electrodes. Of course, decoding brain activity will still be a problem for quite a while.

We have some basic understanding. We know some areas of the brain are more cognitive compared to others. So, if you want to extract more advanced information from the brain, you should place your electrodes inside or over these areas. But our understanding of how thought is represented is not well advanced, so I don't think in the next 10 years we will be able to decode free-floating thoughts.

I think it is very realistic, but the first success will come from augmented reality (AR), where you use your normal senses, which are quite good, to interface to this AI, or let's call it an exo-brain. So, interfacing directly is a really good idea, but it's still limited by the number of channels for such interfacing. The major problem is that we don't really understand the brain code, so we don't really know how to make this interface very efficient.

But my memory is limited, so AR glasses could really help, like if some AI guides me through an environment. You can imagine a computer and the brain working together. So the brain gives examples and the computer then learns, and the brain takes advantage of the computing power of an external device.

Take any brain function, and you can try to enhance it. In sensory functions, you can add new sensors to the brain. For example, you can add a sensor of electromagnetic fields that we cannot sense normally, and you'd have this new sense. You can place these new sensors around the perimeter of the head, and then you'd have panoramic vision. Of course, I would experiment first in animals for this kind of application!

You can also try to micro-stimulate certain areas in the brain, but so far the majority of papers show you can suppress certain processing steps, not really improve them. But if you think this suppression could be helpful, then you can come up with some ideas. For example, imagine a certain task that a person is solving, and the computer knows the right answer, so it sends a suppressive impulse to certain parts of the brain and biases the brain's decision.

There are two major branches. The first is non-invasive devices, which are very easy to implement, and they kind of work. The problem is that the quality of signals they provide is limited. If you look at electroencephalogram (EEG) systems, the signals they record are composed of the activity of huge numbers of neurons, and the strongest EEGs are recorded during sleep. So all the activities related to, say, fine motor control become really small, and you cannot detect them in EEGs. On top of that, EEGs suffer from artifacts of all kinds.

Of course, EEG devices are not the only non-invasive option. Functional near-infrared spectroscopy (fNIR) is actually a very good non-invasive method. It does well in detecting certain activities, but it works very slowly.

The potential of invasive approaches hasn't been realized at all. What we have now is the ability to record from, say, 100 neurons. So in the future, when we record from millions of neurons, we can think about all kinds of decoding ideas. Basically, right now the obstacle to that is the invasive surgery needed to implant such a device.

Pharmacology is not my exact field, but drug developers are doing amazing work. They can develop molecules for some specific purpose that can work for one brain receptor, but not another, or one brain area, but not another. So in principle, all these methods can be improved and become targeted for particular problems.

You can even modify brain cells genetically, as in optogenetics, where they make cells that are sensitive to light. The potential here has not been fully realized, because there are many more possibilities. The cells can be made sensitive to magnetic fields or to stretch; you can probably make mechanosensitive neurons by genetic engineering. Or you can try to implant some cells from another organism in the brain. Any science fiction idea you can find nowadays is being realized, so I won't say no to anything!

I'm optimistic, so I see mostly upsides. We really want to improve; we want to become less primitive people. The main downside is probably the same as with drug use. So, let's imagine a person implanting an electrode in the pleasure center of his brain and then just constantly stimulating himself. Probably you don't want this, but it may be difficult to avoid.

Interfering with the brain's motivation and pleasure systems can be a problem, and of course, you can imagine militaries getting hold of it and making soldiers they can control. In fact, any BMI can also act as a lie detector. You can really detect some things that normally you don't want to expose, that you want to keep private.

I don't worry about this, because what will probably happen is that rich people will get the first brain augmentation systems, which will be very expensive, very cumbersome, and work really badly. But as the technology develops it will become cheap, and then everybody will get access. So I don't think this particular issue is a problem in a capitalistic society.

Editor's note: This interview has been edited for length and clarity.

Stock media provided by HighwayStarz/Pond5.com


6 Things Quantum Computers Will Be Incredibly Useful For – Singularity Hub

Computers don't exist in a vacuum. They serve to solve problems, and the types of problems they can solve are influenced by their hardware. Graphics processors are specialized for rendering images, artificial intelligence processors for AI, and quantum computers are designed for... what?

While the power of quantum computing is impressive, it does not mean that existing software simply runs a billion times faster. Rather, there are certain types of problems quantum computers are good at solving, and others they aren't. Below are some of the primary applications we should expect to see as this next generation of computers becomes commercially available.

A primary application for quantum computing is artificial intelligence (AI). AI is based on the principle of learning from experience, becoming more accurate as feedback is given, until the computer program appears to exhibit intelligence.

This feedback is based on calculating the probabilities for many possible choices, and so AI is an ideal candidate for quantum computation. It promises to disrupt every industry, from automotives to medicine, and it's been said AI will be to the twenty-first century what electricity was to the twentieth.

For example, Lockheed Martin plans to use its D-Wave quantum computer to test autopilot software that is currently too complex for classical computers, and Google is using a quantum computer to design software that can distinguish cars from landmarks. We have already reached the point where AI is creating more AI, and so its importance will rapidly escalate.

Another example is precision modeling of molecular interactions, finding the optimum configurations for chemical reactions. Such quantum chemistry is so complex that only the simplest molecules can be analyzed by today's digital computers.

Chemical reactions are quantum in nature as they form highly entangled quantum superposition states. But fully-developed quantum computers would not have any difficulty evaluating even the most complex processes.

Google has already made forays in this field by simulating the energy of hydrogen molecules. The implication of this is more efficient products, from solar cells to pharmaceutical drugs, and especially fertilizer production; since fertilizer accounts for 2 percent of global energy usage, the consequences for energy and the environment would be profound.

Most online security currently depends on the difficulty of factoring large numbers into primes. While this can presently be accomplished by using digital computers to search through every possible factor, the immense time required makes cracking the code expensive and impractical.
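The scale of that brute-force search is easy to see in code. Below is a minimal classical trial-division factorizer (an illustration of the principle, not a cryptographic tool): its running time grows with the square root of n, which is exponential in the number of digits, and that growth is what keeps factoring-based security out of reach for classical machines.

```python
def trial_division(n):
    """Factor n by testing every candidate divisor up to sqrt(n)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:  # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:              # whatever remains is itself prime
        factors.append(n)
    return factors

# A small semiprime cracks instantly...
print(trial_division(10007 * 10009))  # [10007, 10009]
# ...but every extra digit multiplies the worst-case work by roughly
# sqrt(10), so the 600+ digit moduli used in practice are out of
# classical reach.
```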

Quantum computers can perform such factoring exponentially more efficiently than digital computers, meaning such security methods will soon become obsolete. New cryptography methods are being developed, though it may take time: in August 2015 the NSA began introducing a list of quantum-resistant cryptography methods, and in April 2016 the National Institute of Standards and Technology began a public evaluation process expected to last four to six years.

There are also promising quantum encryption methods being developed using the one-way nature of quantum entanglement. City-wide networks have already been demonstrated in several countries, and Chinese scientists recently announced they successfully sent entangled photons from an orbiting quantum satellite to three separate base stations back on Earth.

Modern markets are some of the most complicated systems in existence. While we have developed increasingly scientific and mathematical tools to address this, the field still suffers from one major difference from other scientific fields: there's no controlled setting in which to run experiments.

To solve this, investors and analysts have turned to quantum computing. One immediate advantage is that the randomness inherent to quantum computers is congruent to the stochastic nature of financial markets. Investors often wish to evaluate the distribution of outcomes under an extremely large number of scenarios generated at random.
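As a rough sketch of what that workload looks like classically, here is a toy Monte Carlo simulation of a portfolio's outcome distribution (all parameter values are illustrative, not a real pricing model); quantum algorithms such as amplitude estimation target exactly this kind of repeated random sampling, reaching the same accuracy with quadratically fewer runs.

```python
import random

def simulate_portfolio(n_scenarios, n_steps=252, daily_vol=0.01, seed=42):
    """Classical Monte Carlo: evolve a portfolio value through many
    random price paths and summarize the distribution of outcomes."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    outcomes = []
    for _ in range(n_scenarios):
        value = 100.0
        for _ in range(n_steps):  # one simulated trading year
            value *= 1.0 + rng.gauss(0.0, daily_vol)
        outcomes.append(value)
    outcomes.sort()
    return {
        "mean": sum(outcomes) / len(outcomes),
        "worst_5pct": outcomes[int(0.05 * len(outcomes))],  # 5th percentile
    }

stats = simulate_portfolio(2000)
print(stats)
```

Each extra digit of accuracy costs a classical sampler 100x more scenarios; a quantum sampler would need only 10x.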

Another advantage quantum computing offers is that financial operations such as arbitrage may require many path-dependent steps, with the number of possibilities quickly outpacing the capacity of a digital computer.

NOAA Chief Economist Rodney F. Weiher claims (in a PowerPoint file) that nearly 30 percent of US GDP ($6 trillion) is directly or indirectly affected by weather, impacting food production, transportation, and retail trade, among others. The ability to better predict the weather would have enormous benefit to many fields, not to mention more time to take cover from disasters.

While this has long been a goal of scientists, the equations governing such processes contain many, many variables, making classical simulation lengthy. As quantum researcher Seth Lloyd pointed out, "Using a classical computer to perform such analysis might take longer than it takes the actual weather to evolve!" This motivated Lloyd and colleagues at MIT to show that the equations governing the weather possess a hidden wave nature which is amenable to solution by a quantum computer.

Hartmut Neven, director of engineering at Google, also noted that quantum computers could help build better climate models that could give us more insight into how humans are influencing the environment. These models are what we build our estimates of future warming on, and they help us determine what steps need to be taken now to prevent disasters.

The United Kingdom's national weather service, the Met Office, has already begun investing in such innovation to meet the power and scalability demands it'll be facing in the 2020-plus timeframe, and has released a report on its own requirements for exascale computing.

Coming full circle, a final application of this exciting new physics might be studying exciting new physics. Models of particle physics are often extraordinarily complex, confounding pen-and-paper solutions and requiring vast amounts of computing time for numerical simulation. This makes them ideal for quantum computation, and researchers have already been taking advantage of this.

Researchers at the University of Innsbruck and the Institute for Quantum Optics and Quantum Information (IQOQI) recently used a programmable quantum system to perform such a simulation. Published in Nature, the work used a simple version of a quantum computer in which ions performed logical operations, the basic steps in any computer calculation. The simulation showed excellent agreement with actual experiments of the physics described.

"These two approaches complement one another perfectly," says theoretical physicist Peter Zoller. "We cannot replace the experiments that are done with particle colliders. However, by developing quantum simulators, we may be able to understand these experiments better one day."

Investors are now scrambling to insert themselves into the quantum computing ecosystem, and it's not just the computer industry: banks, aerospace companies, and cybersecurity firms are among those taking advantage of the computational revolution.

While quantum computing is already impacting the fields listed above, the list is by no means exhaustive, and that's the most exciting part. As with all new technology, presently unimaginable applications will be developed as the hardware continues to evolve and create new opportunities.

Image Credit: IQOQI Innsbruck/Harald Ritsch


Two Conjectures Collide, Endangering the Naked Singularity – WIRED



This Tech Could Charge Electric Cars While They Drive – Singularity Hub

The global auto industry is worth $2 trillion, but electric and hybrid cars currently make up less than one percent of that figure. However, experts are predicting an explosion in electric car adoption.

Financial services company UBS predicted demand for electric cars will reach an inflection point in 2018 as their cost shrinks to equal (and eventually undercut) the cost of internal combustion engine vehicles. China saw a 53 percent increase in electric car sales from 2015 to 2016, and India is aiming to sell only electric cars by 2030.

They'll be affordable, and they'll keep the air cleaner, but electric cars will still have one major limitation: they're electric. Electric things run on batteries, and if batteries don't get recharged every so often, they die.

Tesla's Model 3 will go 200 miles on one charge, and Chevy's new Bolt goes 238 miles. These are no small distances, especially when compared to the Volt's 30-mile range just three years ago. Even so, once the car's batteries are drained, recharging them takes hours.

Researchers at Stanford University just took a step toward solving this problem. In a paper published last week in Nature, the team described a new technique that wirelessly transmits electricity to a moving object within close range.

Wireless power transfer works using magnetic resonance coupling. An alternating magnetic field in a transmitter coil causes electrons in a receiver coil to oscillate, with the best transfer efficiency occurring when both coils are tuned to the same frequency and positioned at a specific angle.

That makes it hard to transfer electricity while an object is moving, though. To bypass the need for continuous manual tuning, the Stanford team removed the radio-frequency source in the transmitter and replaced it with a voltage amplifier and a feedback resistor.

The system calibrates itself to the required frequency for different distances. Using this system, the researchers were able to wirelessly transmit a one-milliwatt charge of electricity to a moving LED light bulb three feet away. No manual tuning was needed, and transfer efficiency remained stable.
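A toy numerical model makes the tuning problem concrete. The sketch below (an illustration with made-up component values, not the Stanford circuit) uses the standard LC resonance formula and a simple Lorentzian response to show how a fixed-frequency transmitter loses efficiency as the receiver's effective resonance shifts with distance, which is the loss a self-calibrating design avoids.

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def transfer_efficiency(drive_hz, resonance_hz, bandwidth_hz):
    """Toy Lorentzian response: efficiency peaks when the drive matches
    the resonance and falls off as the two detune."""
    detuning = (drive_hz - resonance_hz) / bandwidth_hz
    return 1.0 / (1.0 + detuning ** 2)

f0 = resonant_frequency(L=24e-6, C=1e-9)   # roughly a 1 MHz tank
fixed_drive = f0                            # transmitter tuned once, then left alone
for shift in (0.0, 0.01, 0.05):             # resonance drift as the receiver moves
    f_res = f0 * (1.0 + shift)
    eff = transfer_efficiency(fixed_drive, f_res, 0.02 * f0)
    print(f"resonance shift {shift:.0%}: efficiency {eff:.2f}")
```

Even a 5 percent drift collapses the toy efficiency, while a transmitter that re-centers its drive on the moving resonance stays at the peak.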

One milliwatt is a far cry from the tens of kilowatts an electric car needs. But now that they've established that an amplifier will do the trick, the team is working on ramping up the amount of electricity that can be transferred using this system.

Switching out the amplifier itself could make a big difference: for this test, they used a general-purpose amplifier with about ten percent efficiency, but custom-made amplifiers could likely boost efficiency to over 90 percent.

It will still be a while before electric cars can get zapped with infusions of charge while cruising down the highway, but that's the future some energy experts envision.

"In theory, one could drive for an unlimited amount of time without having to stop to recharge," said Shanhui Fan, professor of electrical engineering and senior author of the study. "The hope is that you'll be able to charge your electric car while you're driving down the highway." A coil in the bottom of the vehicle could receive electricity from a series of coils connected to an electric current embedded in the road.

Embedding power lines in roads would be a major infrastructure project, and it wouldn't make sense to undertake it until electric car adoption was widespread: when, for example, electric cars accounted for at least 50 percent of vehicles on the road. If charging were easier, though, more drivers might choose to go electric.

Tesla has already made electric car ownership a bit easier by investing heavily in its Supercharger network. There are currently 861 Supercharger stations around the world with 5,655 chargers, and hundreds more are in the works. The stations charge Tesla vehicles for free in a half hour or hour, instead of multiple hours.

Ripping up roads to embed power lines that can charge cars while they're moving seems unnecessary as technologies like the Superchargers continue to proliferate. But as electric vehicles proliferate too, drivers will want their experiences to be as seamless as possible, and that could include not having to stop to charge your car.

Despite the significant hurdles left to clear, charging moving cars is the most exciting potential of the Stanford team's wireless transfer system. But there are also smaller-scale applications like cell phones and personal medical implants, which will likely employ the technology before it's used on cars. Fan even mentioned that the system may untether robotics in manufacturing.

Image Credit: Shutterstock


Is There a Multidimensional Mathematical World Hidden in the … – Singularity Hub

Two thousand years ago, the ancient Greeks looked into the night sky and saw geometric shapes emerge among the stars: a hunter, a lion, a water vase.

In a way, they used these constellations to make sense of the random scattering of stars in the fabric of the universe. By translating astronomy into shapes, they found a way to seek order and meaning in a highly complex system.

As it turns out, the Greeks were wrong: most stars in a constellation don't have much to do with one another. But their approach lives on.

This week, the Blue Brain Project proposed a fascinating idea that may explain the complexities of the human brain. Using algebraic topology, a type of mathematics that projects complex connections into graphs, they mapped out a path for complex functions to emerge from the structure of neural networks.

And get this: while the brain physically inhabits our three-dimensional world, its inner connections, mathematically speaking, operate in a much higher dimensional space. In human speak: the assembly and disassembly of neural connections are massively complex, more so than expected. But now we may have a language to describe them.

"We found a world that we had never imagined," says Dr. Henry Markram, director of the Blue Brain Project and professor at EPFL in Lausanne, Switzerland, who led the study.

"This may be why it's been so difficult to understand the brain," he says. "The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly."

When we think about the brain, branchy neurons and gooey tissue come to mind: definitely 3D objects. Physically speaking, there are no high-dimensional mini-brains hidden within our own, and our neurons don't jump into a higher plane of existence when they fire away.

Outside of physics, dimension is really just a fancy way of describing complexity. Take a group of three neurons that work together (A, B, and C), for example. Now think about how many ways they can connect together. Because information is generally passed only one way from a neuron to its downstream partner, A can only link to B or C. In topology speak, the dimension here is two.

Similarly, a group of four neurons has dimension three, five neurons dimension four and so on. The more neurons in a group, the higher the dimensionand so the system gets increasingly complex.

"In our study, dimension does not describe spatial dimensions, but rather the topological dimension of the geometric objects we are describing. A 7- or 11-dimensional simplex is still embedded in the physical three-dimensional space," explains study author Max Nolte, a graduate student at EPFL, to Singularity Hub.
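The counting rule behind these dimensions can be sketched in a few lines (a generic illustration of simplex combinatorics, not the Blue Brain Project's actual code): a fully connected group of n neurons forms an (n-1)-dimensional simplex, and every choice of k+1 of its neurons forms one of its k-dimensional sub-simplices.

```python
from math import comb

def simplex_dimension(n_neurons):
    """An all-to-all connected group of n neurons forms an
    (n-1)-dimensional simplex: 3 neurons -> dimension 2,
    4 neurons -> dimension 3, and so on."""
    return n_neurons - 1

def faces(n_neurons, k):
    """Number of k-dimensional sub-simplices (sub-groups of k+1
    neurons) contained in a fully connected group of n neurons."""
    return comb(n_neurons, k + 1)

print(simplex_dimension(8))  # 7: the highest dimension found in the model
print(faces(8, 2))           # 2D faces (triangles) inside one such group: 56
```

The rapid growth of `faces` with group size hints at why a network of millions of neurons can hide tens of millions of overlapping functional groups.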

To begin parsing out the organization of the brain, the team started with functional building blocks called simplices. Each simplex is a special group of neurons connected with each other in a very specific order.

"One neuron is very influential and speaks first, one listens to all neurons, and others listen to a few neurons and speak to the ones they're not listening to," says Nolte. "This specific structure makes sure that the listening neurons can really understand the speaking neurons in a brain where millions of neurons are always talking at the same time, like in a crowded stadium."

As before, dimensions describe the complexity of a simplex.

In six different virtual brains, each reconstructed from experimental data obtained in rats, the team looked for signs of these abstract mathematical objects. Incredibly, the virtual brains contained extremely complex simplices, up to dimension seven, and roughly 80 million lower-dimensional neuron groups.

"The enormous number of simplices hidden inside the brain suggests that each neuron is a part of an immense number of functional groups, much more than previously thought," says Nolte.

If simplices are building blocks, then how do they come together to form even more complicated networks?

When the team exposed their virtual brain to a stimulus, the neurons assembled into increasingly intricate networks, like blocks of Lego building a castle.

Again, it's not necessarily a physical connection. Picture groups of neurons linking to others like a social graph, and the graphs associating into a web or other high-dimensional structure.

The fit wasn't perfect: in between the higher-dimensional structures were holes, places where the connections needed to form a new web were missing.

Like simplices, holes also have dimensions. In a way, says Nolte, the dimension of a hole describes how close the simplices were to reaching a higher dimension, or how well the building blocks associated with each other.

"The appearance of progressively higher-dimensional holes tells us that neurons in the network respond to stimuli in an extremely organized manner," says Dr. Ran Levi at the University of Aberdeen, who also worked on the paper.

"When we look at the reaction of the brain over time to a stimulus, we see abstract geometric objects forming and then falling apart as it builds functional networks," says Levi.

The brain first recruits simpler neural networks to build a 1D frame. These networks then associate into 2D walls with holes in between. Fast-forward, and increasingly higher-dimensional structures and holes form, until they reach peak organization: whatever connections the neurons need to get the job done.

Once there, the entire structure collapses, freeing up the simplices for their next tasks, like sand castles materializing and then disintegrating away.

"We don't know what the brain is doing when it forms these cavities," says Levi to Singularity Hub.

What's clear, however, is that neurons have to fire in a fantastically ordered manner for these high-dimensional structures to occur.

"It is quite clear that this hyper-organized activity is not just a coincidence. This could be the key to understanding what is going on when the brain is active," says Levi.

The team also worked out how neurons in the same or different groups talked to one another after a stimulus.

It turns out to depend on where the neurons sit in the high-dimensional structure, and on which groups they belong to.

"Imagine two stranger neurons chatting away," says Nolte. "They'll probably say many unrelated things, because they don't know each other."

Now, imagine after a stimulus they form high-dimensional networks. Like Twitter, the network allows one neuron to hear the other, and they may begin repeating some of the things the other one said. If they both follow dozens of other people, their tweets may be even more similar because their thoughts are influenced by a shared crowd.

"Using simplices, we don't only count how many shared people they are following, but also how these people they are following are connected to each other," says Nolte. The more interconnected two neurons are (that is, the more simplices they are a part of), the more similarly they fire in response to a stimulus.

"It really shows the importance of the functional structure of the brain, in that structure guides the emergence of correlated activity," says Levi.

Previous studies have found that the physical structure of neurons and synapses influences activity patterns; now we know that their connections in high-dimensional space also factor in.

Going forward, the team hopes to understand how these complicated, abstract networks guide our thinking and behaviors.

"It's like finding a dictionary that translates a totally obscure language to another language that we are actually familiar with, even if we don't necessarily understand all the stories written in this language," says Levi.

"Now it's time to decipher those stories," he adds.

Image credit: Shutterstock

Read more:

Is There a Multidimensional Mathematical World Hidden in the ... - Singularity Hub

How Real-Life Bionic Man’s ‘Eyeborg’ Cam Represents First Step Toward Singularity – TheWrap

Just like the Bionic Man, Rob Spence has a prosthetic eye. But instead of fighting crime, he uses it to make films. Rather than use a GoPro or Facebook Live to document his surroundings, Spence can do so with his own eye. He's been stumping at TED Talks, showing off the technology, which has robot enthusiasts calling it a first step toward the technological singularity, i.e. the marrying of tech with the human body.

After a childhood accident with a shotgun a la A Christmas Story left him blind in his right eye, Spence decided not to let it slow him down in his career as a filmmaker. So in 2007, he enlisted the help of a team of engineers to design him a prosthetic eye with a special attachment: a video camera, which Spence calls the Eyeborg. The device fits snugly into Spence's eye socket. Although he can't see out of it, since it isn't connected to his brain, the prosthetic contraption allows him to film his surroundings for short periods of time.


Time Magazine named the Eyeborg among the top 50 inventions of 2009. Since then, Spence has put the tech to use, filming a documentary. (To pay the bills, he does commercial work for brands such as Ford, Salesforce and Absolut Vodka.) Spence has also given TED talks on cybernetics and the future of human bodily modification.

Most recently, Spence appeared at the FutureWorld tech conference in Toronto, where he showed off his fancy ocular gadget to a crowd of robotics enthusiasts at the Ontario College of Art and Design, reports Vice.

Spence told TheWrap that he plans to employ the Eyeborg as more of a toy than a filmmaking tool in the future. The Canadian filmmaker and tech enthusiast said he currently uses his prosthetic eye camera as "the world's most absurd toy for one-eyed filmmakers." In fact, he doesn't actually use it for his work. Spence said, "That's like trying to be a journalist but your style of writing is stream-of-consciousness." Among the top challenges that come with using the tiny tech: "I get blinking, glancing, and the picture is 320×240 with analog dropouts." As for the reason why he transitioned from making documentaries to commercial content: "Documentary is an expensive hobby."

You can keep up with Rob Spence, his Eyeborg, and his upcoming projects on his website.

Steven Spielberg's tech-heavy "Minority Report," starring Tom Cruise, is now 15 years old. Considered one of the most prescient sci-fi movies to grace the big screen, it predicted multiple future innovations, including facial recognition, personalized advertising and predictive crime fighting. In honor of the movie's anniversary, click through here to revisit 18 more movies that accurately peered into the future of technology:

We're so used to touch screens at this point -- we use them every day on our smartphones, and even at McDonald's -- that it's easy to forget that Tom Cruise used the technology in "Minority Report."

Long before Siri, there was HAL. The ominous yet soft-spoken computer system was the antagonist in 1968's "2001: A Space Odyssey." Stanley Kubrick's sinister talking computer ended up turning on its crew in a Siri user's worst nightmare.

Tech entrepreneur Elon Musk is at the helm of SpaceX, which will send two tourists to space in 2018. But "2001: A Space Odyssey" imagined commercial space travel decades ago.

Elon Musk, Google and Uber have been duking it out to bring self-driving cars to the masses, but Arnold Schwarzenegger might have jumpstarted the competition when he took a robot-controlled ride in 1990's "Total Recall."

"The Terminator" predicted military drones in 1984 -- long before they were introduced to police forces and militaries.

Virtual reality is taking over the tech scene. You can play games in VR, watch movies and experience Coachella, all from the comfort of your living room. But Hollywood predicted we'd have VR more than 20 years ago in 1992's "Lawnmower Man."

The 1982 cult classic "Blade Runner," starring Harrison Ford, predicted digital billboards, which you can see now all over the country, from Times Square in New York to the Vegas strip.

Remember when the TSA rolled out invasive body scanners and a lot of people freaked out? "Airplane II: The Sequel" imagined airport scanners that revealed a person's naked body to agents.

Woody Allen's "Sleeper" had robots assisting surgeons by offering advice during surgery. Today, doctors use robotics to add precision to procedures.

The beloved 1960s cartoon "The Jetsons" -- which was made into a movie in 1990 -- predicted the use of robots to clean homes. They had a robotic vacuum and a robotic maid. Can you say Roomba?

In vitro fertilization and at-home genetic testing are commonplace these days. "Gattaca," with Uma Thurman and Ethan Hawke, predicted this tech in 1997.

FaceTime, and Skype before it, are commonplace today. But it was cool new technology in 1989's "Back to the Future Part II."

There are a ton of different options out there for smart watches. This was predicted in 1990's "Dick Tracy."

It's so easy to order Domino's online -- you can even watch how far along in the process your pizza is. In 1995's "The Net" with Sandra Bullock, they showed ordering pizza online for the first time.

Tinder, Bumble and OKCupid are only a few of the many, many online dating options out there. But Meg Ryan and Tom Hanks were on the forefront of the online dating trend in "You've Got Mail."

VR porn is growing in popularity. Or as it's called in 1993's "Demolition Man" -- "digitized transference of sexual energies."

From robotic vacuums to smart watches, Hollywood got these tech trends right


Excerpt from:

How Real-Life Bionic Man's 'Eyeborg' Cam Represents First Step Toward Singularity - TheWrap

Bill Nye Disses Ray Kurzweil’s Singularity Prediction – Inverse

On Wednesday, Bill Nye said that he is not afraid that machines will take over, and mocked Ray Kurzweil's predictions for how fast artificial intelligence will improve.

In an interview with the Singularity.FM podcast, Nye said that he thinks the machine revolution will not be as incredible as predicted. Since humans are making the machines, we don't need to worry about a sudden onset of artificial intelligence taking over and replacing us, despite what Ray Kurzweil and Elon Musk worry about. Looking at where technology is in the world today, and at the timelines predicted for the rise of artificial intelligence, leaves Nye dubious of Kurzweil's predictions.

"I'm skeptical, especially about these extraordinary timelines. 2029? What is that, 12 years from now? No! No," Nye said. "I'm not concerned, because humans make the machines. Sooner or later, to put it in old terms, somebody's got to shovel the coal to make the electricity run the machine."

Ray Kurzweil has predicted that the singularity, when humans merge with computer super-intelligence, will happen in 2045. At that time, he's said, there will be an explosion of art and humor, and people will be sexier. Nye mocked Kurzweil for this prediction, and for his claim that computers will be able to beat a Turing test in 2029.

"That's where, 'The machines are going to create machines, that are going to, like, provide the electricity, and everything is going to work perfectly and it's going to be really good, and it's going to happen in the next nine years.' Ray, really? Really?" said Nye, using an exaggerated tone while imitating Kurzweil. "Isn't that when you're going to be 80, and so that's when you predicted it, hoping your brain would go in some electronic receptacle. Dude, no."

Nye points out that there are a billion people in the world today who have never even made a phone call. This leaves him pretty skeptical that an artificial intelligence revolution would dramatically change things in most places. And since humans design the machines, he's not convinced that we're on the brink of computers that can surpass human intelligence. "I'm not concerned," he said.

You can watch the whole interview here, with the bit on AI and Kurzweil starting at 9:45.

Read the original here:

Bill Nye Disses Ray Kurzweil's Singularity Prediction - Inverse

Where Gravity Is Weak and Naked Singularities Are Verboten – Quanta Magazine

Physicists have wondered for decades whether infinitely dense points known as singularities can ever exist outside black holes, which would expose the mysteries of quantum gravity for all to see. Singularities, snags in the otherwise smooth fabric of space and time where Albert Einstein's classical gravity theory breaks down and the unknown quantum theory of gravity is needed, seem to always come cloaked in darkness, hiding from view behind the event horizons of black holes. The British physicist and mathematician Sir Roger Penrose conjectured in 1969 that visible or "naked" singularities are actually forbidden from forming in nature, in a kind of cosmic censorship. But why should quantum gravity censor itself?

Now, new theoretical calculations provide a possible explanation for why naked singularities do not exist in a particular model universe, at least. The findings indicate that a second, newer conjecture about gravity, if it is true, reinforces Penrose's cosmic censorship conjecture by preventing naked singularities from forming in this model universe. Some experts say the mutually supportive relationship between the two conjectures increases the chances that both are correct. And while this would mean singularities do stay frustratingly hidden, it would also reveal an important feature of the quantum gravity theory that eludes us.

"It's pleasing that there's a connection between the two conjectures," said John Preskill of the California Institute of Technology, who in 1991 bet Stephen Hawking that the cosmic censorship conjecture would fail (though he actually thinks it's probably true).

The new work, reported in May in Physical Review Letters by Jorge Santos and his student Toby Crisford at the University of Cambridge and relying on a key insight by Cumrun Vafa of Harvard University, unexpectedly ties cosmic censorship to the 2006 weak gravity conjecture, which asserts that gravity must always be the weakest force in any viable universe, as it is in ours. (Gravity is by far the weakest of the four fundamental forces; two electrons electrically repel each other 1 million trillion trillion trillion times more strongly than they gravitationally attract each other.) Santos and Crisford were able to simulate the formation of a naked singularity in a four-dimensional universe with a different space-time geometry than ours. But they found that if another force exists in that universe that affects particles more strongly than gravity, the singularity becomes cloaked in a black hole. In other words, where a perverse pinprick would otherwise form in the space-time fabric, naked for all the world to see, the relative weakness of gravity prevents it.
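The quoted strength gap ("1 million trillion trillion trillion", i.e. about 10^42) can be checked directly from textbook constants; standard CODATA-style values are assumed below. The separation between the two electrons cancels, since both forces fall off as 1/r²:

```python
# Ratio of Coulomb repulsion to gravitational attraction between two
# electrons. Both forces scale as 1/r^2, so the distance cancels out.
k_e = 8.9875517923e9    # Coulomb constant, N m^2 / C^2
G   = 6.67430e-11       # gravitational constant, N m^2 / kg^2
e   = 1.602176634e-19   # elementary charge, C
m_e = 9.1093837015e-31  # electron mass, kg

ratio = (k_e * e**2) / (G * m_e**2)
print(f"{ratio:.2e}")  # prints 4.17e+42
```

That is roughly a million (10^6) times a trillion (10^12) cubed, matching the article's figure.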

Santos and Crisford are running simulations now to test whether cosmic censorship is saved at exactly the limit where gravity becomes the weakest force in the model universe, as initial calculations suggest. Such an alliance with the better-established cosmic censorship conjecture would reflect very well on the weak gravity conjecture. And if weak gravity is right, it points to a deep relationship between gravity and the other quantum forces, potentially lending support to string theory over a rival theory called loop quantum gravity. The unification of the forces happens naturally in string theory, where gravity is one vibrational mode of strings and forces like electromagnetism are other modes. But unification is less obvious in loop quantum gravity, where space-time is quantized in tiny volumetric packets that bear no direct connection to the other particles and forces. If the weak gravity conjecture is right, loop quantum gravity is definitely wrong, said Nima Arkani-Hamed, a professor at the Institute for Advanced Study who co-discovered the weak gravity conjecture.

The new work does tell us about quantum gravity, said Gary Horowitz, a theoretical physicist at the University of California, Santa Barbara.

In 1991, Preskill and Kip Thorne, both theoretical physicists at Caltech, visited Stephen Hawking at Cambridge. Hawking had spent decades exploring the possibilities packed into the Einstein equation, which defines how space-time bends in the presence of matter, giving rise to gravity. Like Penrose and everyone else, he had yet to find a mechanism by which a naked singularity could form in a universe like ours. Always, singularities lay at the centers of black holes, sinkholes in space-time that are so steep that no light can climb out. He told his visitors that he believed in cosmic censorship. Preskill and Thorne, both experts in quantum gravity and black holes (Thorne was one of three physicists who founded the black-hole-detecting LIGO experiment), said they felt it might be possible to detect naked singularities and quantum gravity effects. "There was a long pause," Preskill recalled. "Then Stephen said, 'You want to bet?'"

The bet had to be settled on a technicality and renegotiated in 1997, after the first ambiguous exception cropped up. Matt Choptuik, a physicist at the University of British Columbia who uses numerical simulations to study Einstein's theory, showed that a naked singularity can form in a four-dimensional universe like ours when you perfectly fine-tune its initial conditions. Nudge the initial data by any amount, and you lose it: a black hole forms around the singularity, censoring the scene. This exceptional case doesn't disprove cosmic censorship as Penrose meant it, because it doesn't suggest naked singularities might actually form. Nonetheless, Hawking conceded the original bet and paid his debt per the stipulations, with clothing to cover the winner's nakedness. He embarrassed Preskill by making him wear a T-shirt featuring a nearly-naked lady while giving a talk to 1,000 people at Caltech. The clothing was supposed to be embroidered with a suitable concessionary message, but Hawking's read like a challenge: "Nature Abhors a Naked Singularity."

The physicists posted a new bet online, with language to clarify that only non-exceptional counterexamples to cosmic censorship would count. And this time, they agreed, "the clothing is to be embroidered with a suitable, truly concessionary message."

The wager still stands 20 years later, but not without coming under threat. In 2010, the physicists Frans Pretorius and Luis Lehner discovered a mechanism for producing naked singularities in hypothetical universes with five or more dimensions. And in their May paper, Santos and Crisford reported a naked singularity in a classical universe with four space-time dimensions, like our own, but with a radically different geometry. "This latest one is in between the technical counterexample of the 1990s and a true counterexample," Horowitz said. Preskill agrees that it doesn't settle the bet. But it does change the story.

The new discovery began to unfold in 2014, when Horowitz, Santos and Benson Way found that naked singularities could exist in a pretend 4-D universe called anti-de Sitter (AdS) space, whose space-time geometry is shaped like a tin can. This universe has a boundary, the can's side, which makes it a convenient testing ground for ideas about quantum gravity: Physicists can treat bendy space-time in the can's interior like a hologram that projects off of the can's surface, where there is no gravity. In universes like our own, which is closer to a de Sitter (dS) geometry, the only boundary is the infinite future, essentially the end of time. Timeless infinity doesn't make a very good surface for projecting a hologram of a living, breathing universe.

Despite their differences, the interiors of both AdS and dS universes obey Einstein's classical gravity theory everywhere (outside singularities, that is). If cosmic censorship holds in one of the two arenas, some experts say you might expect it to hold up in both.

Horowitz, Santos and Way were studying what happens when an electric field and a gravitational field coexist in an AdS universe. Their calculations suggested that cranking up the energy of the electric field on the surface of the tin can universe will cause space-time to curve more and more sharply around a corresponding point inside, eventually forming a naked singularity. In their recent paper, Santos and Crisford verified the earlier calculations with numerical simulations.

But why would naked singularities exist in 5-D, and in 4-D when you change the geometry, but never in a flat 4-D universe like ours? "It's like, what the heck!" Santos said. "It's so weird you should work on it, right? There has to be something here."

In 2015, Horowitz mentioned the evidence for a naked singularity in 4-D AdS space to Cumrun Vafa, a Harvard string theorist and quantum gravity theorist who stopped by Horowitz's office. Vafa had been working to rule out large swaths of the 10^500 different possible universes that string theory naively allows. He did this by identifying "swamplands": failed universes that are too logically inconsistent to exist. By understanding patterns of land and swamp, he hoped to get an overall picture of quantum gravity.

Working with Arkani-Hamed, Luboš Motl and Alberto Nicolis in 2006, Vafa proposed the weak gravity conjecture as a swamplands test. The researchers found that universes only seemed to make sense when particles were affected by gravity less than they were by at least one other force. Dial down the other forces of nature too much, and violations of causality and other problems arise. "Things were going wrong just when you started violating gravity as the weakest force," Arkani-Hamed said. The weak-gravity requirement drowns huge regions of the quantum gravity landscape in swamplands.

Weak gravity and cosmic censorship seem to describe different things, but in chatting with Horowitz that day in 2015, Vafa realized that they might be linked. Horowitz had explained Santos and Crisford's simulated naked singularity: When the researchers cranked up the strength of the electric field on the boundary of their tin-can universe, they assumed that the interior was classical: perfectly smooth, with no particles quantum mechanically fluctuating in and out of existence. But Vafa reasoned that if such particles existed, and if, in accordance with the weak gravity conjecture, they were more strongly coupled to the electric field than to gravity, then cranking up the electric field on the AdS boundary would cause sufficient numbers of particles to arise in the corresponding region in the interior to gravitationally collapse the region into a black hole, preventing the naked singularity.

Subsequent calculations by Santos and Crisford supported Vafa's hunch; the simulations they're running now could verify that naked singularities become cloaked in black holes right at the point where gravity becomes the weakest force. "We don't know exactly why, but it seems to be true," Vafa said. "These two reinforce each other."

The full implications of the new work, and of the two conjectures, will take time to sink in. Cosmic censorship imposes an odd disconnect between quantum gravity at the centers of black holes and classical gravity throughout the rest of the universe. Weak gravity appears to bridge the gap, linking quantum gravity to the other quantum forces that govern particles in the universe, and possibly favoring a stringy approach over a loopy one. Preskill said, "I think it's something you would put on your list of arguments or reasons for believing in unification of the forces."

However, Lee Smolin of the Perimeter Institute, one of the developers of loop quantum gravity, has pushed back, arguing that if weak gravity is true, there might be a loopy reason for it. And he contends that there is a path to unification of the forces within his theory, a path that would need to be pursued all the more vigorously if the weak gravity conjecture holds.

Given the apparent absence of naked singularities in our universe, physicists will take hints about quantum gravity wherever they can find them. They're as lost now in the endless landscape of possible quantum gravity theories as they were in the 1990s, with no prospects for determining through experiments which underlying theory describes our world. "It is thus paramount to find generic properties that such quantum gravity theories must have in order to be viable," Santos said, echoing the swamplands philosophy.

Weak gravity might be one such property, a necessary condition for quantum gravity's consistency that spills out and affects the world beyond black holes. These may be some of the only clues available to help researchers feel their way into the darkness.

Excerpt from:

Where Gravity Is Weak and Naked Singularities Are Verboten - Quanta Magazine

Deep Learning at the Speed of Light on Nanophotonic Chips – Singularity Hub

Deep learning has transformed the field of artificial intelligence, but the limitations of conventional computer hardware are already hindering progress. Researchers at MIT think their new nanophotonic processor could be the answer by carrying out deep learning at the speed of light.

In the 1980s, scientists and engineers hailed optical computing as the next great revolution in information technology, but it turned out that bulky components like fiber optic cables and lenses didn't make for particularly robust or compact computers.

In particular, they found it extremely challenging to make scalable optical logic gates, and therefore impractical to make general optical computers, according to MIT physics post-doc Yichen Shen. One thing light is good at, though, is multiplying matrices: arrays of numbers arranged in columns and rows. You can actually mathematically explain the way a lens acts on a beam of light in terms of matrix multiplications.

This also happens to be a core component of the calculations involved in deep learning. Combined with advances in nanophotonics (the study of light's behavior at the nanometer scale), this has led to a resurgence of interest in optical computing.

"Deep learning is mainly matrix multiplications, so it works very well with the nature of light," says Shen. "With light you can make deep learning computing much faster and thousands of times more energy-efficient."

To demonstrate this, Shen and his MIT colleagues have designed an all-optical chip that can implement artificial neural networks, the brain-inspired algorithms at the heart of deep learning.
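Shen's point is easy to see in code: the forward pass of a fully connected neural-network layer is just a matrix product followed by a cheap elementwise nonlinearity. A minimal NumPy sketch of a two-layer network (a toy model for illustration, not the chip's actual programming interface):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2-layer network on a batch of 4 inputs: nearly all of the
# arithmetic lives in the two matrix products, which is the part
# an optical processor could in principle handle.
x  = rng.normal(size=(4, 8))    # batch of 4 input vectors
W1 = rng.normal(size=(8, 16))   # layer-1 weights
W2 = rng.normal(size=(16, 3))   # layer-2 weights

h = np.maximum(0, x @ W1)       # matrix multiply + ReLU nonlinearity
y = h @ W2                      # second matrix multiply
print(y.shape)                  # prints (4, 3)
```

Because the weight matrices dominate the computation, anything that multiplies matrices faster or more efficiently speeds up the whole network.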

In a recent paper in Nature, the group describes a chip made up of 56 interferometers: components that allow the researchers to control how beams of light interfere with each other to carry out mathematical operations.

The processor can be reprogrammed by applying a small voltage to the waveguides that direct beams of light around the processor, which heats them and causes them to change shape.

The chip is best suited to inference tasks, the researchers say, where the algorithm is put to practical use by applying a learned model to analyze new data, for instance to detect objects in an image.

It isn't great at learning, because heating the waveguides is relatively slow compared to how electronic systems are reprogrammed. So, in their study, the researchers trained the algorithm on a computer before transferring the learned model to the nanophotonic processor to carry out the inference task.

That's not a major issue. For many practical applications it's not necessary to carry out learning and inference on the same chip. Google recently made headlines for designing its own deep learning chip, the TPU, which is also specifically designed for inference, and most companies that use a lot of machine learning split the two jobs.

"In many cases they update these models once every couple of months and the rest of the time the fixed model is just doing inference," says Shen. "People usually separate these tasks. They typically have a server just doing training and another just doing inference, so I don't see a big problem making a chip focused on inference."

Once the model has been programmed into the chip, it can then carry out computations at the speed of light using less than one-thousandth the energy per operation compared to conventional electronic chips.

There are limitations, though. Because the chip deals with light waves that operate on the scale of a few microns, there are fundamental limits to how small these chips can get.

"The wavelength really sets the limit of how small the waveguides can be. We won't be able to make devices significantly smaller. Maybe by a factor of four, but physics will ultimately stop us," says MIT graduate student Nicholas Harris, who co-authored the paper.

That means it would be difficult to implement neural nets much larger than a few thousand neurons. However, the vast majority of current deep learning algorithms are well within that limit.

The system did achieve a significantly lower accuracy on the task than a standard computer implementing the same deep learning model, correctly identifying 76.7 percent of vowels compared to 91.7 percent.

But Harris says they think this was largely due to interference between the various heating elements used to program the waveguides, and that it should be easy to fix by using thermal isolation trenches or extra calibration steps.

Importantly, the chips are also built using the same fabrication technology as conventional computer chips, so scaling up production should be easy. Shen said the group has already had interest in their technology from prominent chipmakers.

Pierre-Alexandre Blanche, a professor of optics at the University of Arizona, said he's very excited by the paper, which he said complements his own work. But he cautioned against getting too carried away.

"This is another milestone in the progress toward useful optical computing. But we are still far away to be competitive with electronics," he told Singularity Hub in an email. "The argumentation about scalability, power consumption, speed etc. [in the paper] use a lot of conditional tense and assumptions which demonstrate that, if there is potential indeed, there is still a lot of research to be done."

In particular, he pointed out that the system was only a partial solution to the problem. While the vast majority of neuronal computation involves multiplication of matrices, there is another component: calculating a non-linear response.

In the current paper this aspect of the computation was simulated on a regular computer. The researchers say in future models this function could be carried out by a so-called saturable absorber integrated into the waveguides that absorbs less light as the intensity increases.
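A saturable absorber can be pictured as an intensity-dependent transmission: weak signals are mostly absorbed, strong signals pass, which is what makes it behave like an optical activation function. The sketch below uses a common phenomenological form, T(I) = 1 - α₀/(1 + I/I_sat), with illustrative parameter values; it is a textbook-style model, not one taken from the paper:

```python
# Phenomenological saturable-absorber transmission. alpha0 is the
# unsaturated loss and i_sat the saturation intensity; both values
# here are illustrative, not measured device parameters.
def transmission(intensity, alpha0=0.9, i_sat=1.0):
    return 1.0 - alpha0 / (1.0 + intensity / i_sat)

# Weak input is suppressed, strong input passes: a soft threshold,
# loosely analogous to a neural network's nonlinearity.
for i in (0.01, 1.0, 100.0):
    print(i, round(transmission(i), 3))
```

Integrating such an element into the waveguides would let the chip apply the nonlinearity optically instead of handing that step off to a conventional computer.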

But Blanche notes that this is not a trivial problem, and something his group is actually currently working on. "It is not like you can buy one at the drug store," he says.

Bhavin Shastri, a post-doc at Princeton whose group is also working on nanophotonic chips for implementing neural networks, said the research was important, as enabling matrix multiplications is a key step toward full-fledged photonic neural networks.

"Overall, this area of research is poised to usher in an exciting and promising field," he added. "Neural networks implemented in photonic hardware could revolutionize how machines interact with ultrafast physical phenomena. Silicon photonics combines the analog device performance of photonics with the cost and scalability of silicon manufacturing."

Stock media provided by across/Pond5.com

Deep Learning at the Speed of Light on Nanophotonic Chips - Singularity Hub

Inaugural Singularity University Summit to be held in SA – Disrupt Africa

Singularity University, a global community using exponential technologies to tackle the world's greatest challenges, has announced it will hold its first international summit on the African continent in Johannesburg, South Africa on August 23-24.

The two-day SingularityU South Africa Summit is being hosted in collaboration with Standard Bank, and with key strategic partners such as Deloitte, MTN and SAP.

The event will convene exponential thought leaders, Singularity University faculty, and organisations from around the world to provide participants with insights into emerging exponential technologies and how they can be used to create positive change and economic growth in the region.

"Singularity University is proud to be working with Standard Bank and Mann Made Media to host this first-ever SingularityU South Africa Summit, and to connect with Africa's leaders and organizations shaping the future," said Rob Nail, associate founder and chief executive officer (CEO) of Singularity University.

"South Africa represents a microcosm of the challenges facing humanity worldwide and is fast gaining a solid reputation as a global centre. Through this Summit, we hope to connect and inspire leaders in the region to effect global impact."

Mic Mann, organiser of the SingularityU South Africa Summit, said South Africa has a unique opportunity to play a vital role in shaping an abundant future for all Africans.

"Our ability to leverage and develop accelerating technologies in the coming years will allow us to leapfrog legacy systems, compete in the global economy, and have a massive impact on our growth and economic health. It is of the utmost importance for us to bring Singularity University to South Africa to educate, empower and inspire leaders and future leaders in Africa," he said.


Big Bang Theory

The Big Bang theory is an effort to explain what happened at the very beginning of our universe. Discoveries in astronomy and physics have shown beyond a reasonable doubt that our universe did in fact have a beginning. Prior to that moment there was nothing; during and after that moment there was something: our universe. The big bang theory is an effort to explain what happened during and after that moment.

According to the standard theory, our universe sprang into existence as a "singularity" around 13.7 billion years ago. What is a "singularity" and where does it come from? Well, to be honest, we don't know for sure. Singularities are zones which defy our current understanding of physics. They are thought to exist at the core of "black holes." Black holes are areas of intense gravitational pressure. The pressure is thought to be so intense that finite matter is actually squished into infinite density (a mathematical concept which truly boggles the mind). These zones of infinite density are called "singularities." Our universe is thought to have begun as an infinitesimally small, infinitely hot, infinitely dense, something - a singularity. Where did it come from? We don't know. Why did it appear? We don't know.

After its initial appearance, it apparently inflated (the "Big Bang"), expanded and cooled, going from very, very small and very, very hot, to the size and temperature of our current universe. It continues to expand and cool to this day and we are inside of it: incredible creatures living on a unique planet, circling a beautiful star clustered together with several hundred billion other stars in a galaxy soaring through the cosmos, all of which is inside of an expanding universe that began as an infinitesimal singularity which appeared out of nowhere for reasons unknown. This is the Big Bang theory.

Big Bang Theory - Common Misconceptions There are many misconceptions surrounding the Big Bang theory. For example, we tend to imagine a giant explosion. Experts however say that there was no explosion; there was (and continues to be) an expansion. Rather than imagining a balloon popping and releasing its contents, imagine a balloon expanding: an infinitesimally small balloon expanding to the size of our current universe.

Another misconception is that we tend to imagine the singularity as a little fireball appearing somewhere in space. According to many experts, however, space didn't exist prior to the Big Bang. Back in the late '60s and early '70s, when men first walked upon the moon, "three British astrophysicists, Stephen Hawking, George Ellis, and Roger Penrose turned their attention to the Theory of Relativity and its implications regarding our notions of time. In 1968 and 1970, they published papers in which they extended Einstein's Theory of General Relativity to include measurements of time and space.1, 2 According to their calculations, time and space had a finite beginning that corresponded to the origin of matter and energy."3 The singularity didn't appear in space; rather, space began inside of the singularity. Prior to the singularity, nothing existed, not space, time, matter, or energy - nothing. So where and in what did the singularity appear if not in space? We don't know. We don't know where it came from, why it's here, or even where it is. All we really know is that we are inside of it and at one time it didn't exist and neither did we.

Big Bang Theory - Evidence for the Theory What are the major evidences which support the Big Bang theory?

Big Bang Theory - The Only Plausible Theory? Is the standard Big Bang theory the only model consistent with these evidences? No, it's just the most popular one. Internationally renowned astrophysicist George F. R. Ellis explains: "People need to be aware that there is a range of models that could explain the observations... For instance, I can construct you a spherically symmetrical universe with Earth at its center, and you cannot disprove it based on observations... You can only exclude it on philosophical grounds. In my view there is absolutely nothing wrong in that. What I want to bring into the open is the fact that we are using philosophical criteria in choosing our models. A lot of cosmology tries to hide that."4

In 2003, physicist Robert Gentry proposed an attractive alternative to the standard theory, an alternative which also accounts for the evidences listed above.5 Dr. Gentry claims that the standard Big Bang model is founded upon a faulty paradigm (the Friedmann-Lemaître expanding-spacetime paradigm) which he claims is inconsistent with the empirical data. He chooses instead to base his model on Einstein's static-spacetime paradigm which he claims is the "genuine cosmic Rosetta." Gentry has published several papers outlining what he considers to be serious flaws in the standard Big Bang model.6 Other high-profile dissenters include Nobel laureate Dr. Hannes Alfvén, Professor Geoffrey Burbidge, Dr. Halton Arp, and the renowned British astronomer Sir Fred Hoyle, who is credited with first coining the term "the Big Bang" during a BBC radio broadcast in 1950.

Big Bang Theory - What About God? Any discussion of the Big Bang theory would be incomplete without asking the question, what about God? This is because cosmogony (the study of the origin of the universe) is an area where science and theology meet. Creation was a supernatural event. That is, it took place outside of the natural realm. This raises the question: is there anything else which exists outside of the natural realm? Specifically, is there a master Architect out there? We know that this universe had a beginning. Was God the "First Cause"? We won't attempt to answer that question in this short article. We just ask the question:

Does God Exist?


Ashes of the Singularity: Escalation v2.3 is a great reason to return to the battlefield – Windows Central


Dreadnoughts are powerful assets in Ashes of the Singularity, but now there's something more devastating to be unleashed on the battlefield. Update 2.3 for the Escalation expansion adds a new class of ship, called "Juggernauts." These are colossal ...


Designing Antiviral Proteins via Computer Could Help Halt the Next Pandemic – Singularity Hub

As Bill Gates sees it, there are three main threats to our species: nuclear war, climate change, and the next global pandemic.

Speaking on pandemic preparedness at the Munich Security Conference earlier this year, Gates reminded us that the fact that a deadly global pandemic has not occurred in recent history shouldn't be mistaken for evidence that a deadly pandemic will not occur in the future.

"If we want to be prepared for the worst," Gates says, "first and most importantly, we have to build an arsenal of new weapons: vaccines, drugs, and diagnostics."

Some scientists are now using computers to do just that.

Despite the availability of the flu shot, the World Health Organization reports that seasonal influenza is still responsible for millions of serious illnesses and as many as half a million deaths per year globally. The partial efficacy of each year's flu shot, coupled with long manufacturing times and limited global availability, suggests new flu-fighting methods are still needed.

And that's just for the seasonal flu. Pandemic influenza, like the devastating 1918 Spanish flu, could again kill tens of millions of people in a single year.

Antibodies, a natural part of the immune system, are front-line soldiers in the war against viruses. The job of an antibody is to recognize and physically adhere to a foreign invader like influenza. Human antibodies are bivalent, meaning they have two hands with which they can grab onto their target.

Under a microscope, influenza looks like a tiny ball with spikes. It uses some of its surface spikes to break into human cells. By grabbing tightly to those spikes using one or both hands, antibodies can prevent flu particles from infecting human cells. But every year the rapidly evolving influenza picks up mutations in its spike proteins, causing the sticky hands of our antibodies to no longer recognize the virus.

Researchers have long sought a universal flu vaccine, one that doesn't need to be readministered every year. Efforts to produce one tend to involve injecting noninfectious flu lookalikes in hopes that they will prime the immune system to mount a proper attack on whatever real strain of flu it sees next. Despite some progress, researchers have not yet been able to coax the immune system to defend against all strains of influenza, and the threat of a global pandemic still looms.

Transmission electron microscopic image of an influenza virus particle. Image credit: CDC/ Erskine. L. Palmer, Ph.D.; M. L. Martin

Computational protein design offers another way. Rather than relying on the immune system to generate an antibody protein capable of shutting down a virus like the flu, computer modeling can now help quickly create custom antiviral proteins programmed to shut down a deadly virus.

Unlike a vaccine, this class of drug could be administered to treat an existing infection or given days prior to exposure to prevent one. And because these designer proteins work independently of the immune system, their potency does not depend on having an intact immune system, a useful trait, as those with weaker immune systems are at high risk for viral infection.

Computer-generated antiviral proteins work the same way some natural proteins in our immune system do. By having surfaces that are chemically complementary to their targets, antiviral proteins can stick tightly to a specific virus. If a protein sticks to a virus in just the right way, it can physically block how that virus moves, ultimately preventing infection.

By designing an antiviral protein on a computer, building it in the laboratory, and then administering it into the body, you effectively digitize part of the immune system.

In 2016, computer-generated proteins were shown to be more effective than oseltamivir (Tamiflu) in warding off death in influenza-infected mice. One dose of designer protein given intranasally was more effective than 10 doses of Tamiflu, a drug considered an essential medicine by the WHO due to its antiflu activity. What's more, these new computer-generated antiflu proteins protected mice against diverse strains of the flu. Efforts to turn these promising results into FDA-approved drugs are underway.

In a just-published paper in Nature Biotechnology, scientists here at the Institute for Protein Design at the University of Washington went a step further and demonstrated a new way to shut down the flu: They used computer modeling to build a completely new kind of antiviral protein with three sticky hands.

Why three? It turns out many deadly enveloped viruses, like influenza, Ebola, and HIV, build their spike proteins out of three symmetric parts.

A single antiviral drug with three properly spaced hands should be able to symmetrically grab each part of a spike protein, leading to tighter binding and overall better antiviral activity. This geometric feat is beyond what the human immune system can naturally do.

Left: The tips of many viral spike proteins are built out of three symmetric parts, with one part highlighted in pink. Right: A new three-handed antiflu protein (blue) bound to influenza's HA spike. Image Credit: UW Institute for Protein Design, CC BY-ND

The design strategy worked. The best three-handed protein, called Tri-HSB.1C, was able to bind tightly to diverse strains of influenza. When given to mice, it also afforded complete protection against a lethal flu infection with only minimal associated weight loss, a trait commonly used to gauge flu severity in mice. Researchers are now applying the same tools to the Ebola spike protein.

It will be many years before this new technology is approved for use in humans for any virus. But we may not have to wait long to see some lifesaving benefits.

By coating a strip of paper with a three-handed flu binder and applying influenza samples on top, the same team was able to detect the presence of viral surface protein even at very low concentrations. This proof-of-concept detection system could be transformed into a reliable and affordable on-site diagnostic tool for a variety of viruses by detecting them in saliva or blood. Like a pregnancy test, a band on a test strip could indicate flu. Or Ebola. Or the next rapidly spreading global pandemic.

In a 2015 letter to the New England Journal of Medicine on lessons learned from the Ebola epidemic in West Africa, Bill Gates describes the lack of preparation by the global community as a global failure.

Perhaps the only good news from the tragic Ebola epidemic, Gates says, is that it may serve as a wake-up call. (The Bill and Melinda Gates Foundation funds work on protein design at the University of Washington.)

When a global viral pandemic like the 1918 Spanish flu strikes again, antivirus software of the biological kind may play an important role in saving millions of lives.

This article was originally published on The Conversation. Read the original article.

Disclosure statement: Ian Haydon is a doctoral student at the University of Washington's Institute for Protein Design, which receives funding from the Bill and Melinda Gates foundation.


Singularity RDK – Home

Announcement: A new major release, RDK 2.0, is now available! Download source code or a bootable ISO at the Releases tab, or retrieve the latest Source Code from the repository at the Source Code tab.

Project Description

The Singularity Research Development Kit (RDK) is based on the Microsoft Research Singularity Project. It includes source code, build tools, test suites, design notes, and other background materials. The Singularity RDK is for academic non-commercial use and is governed by this license.

About Singularity

Singularity is a research project focused on the construction of dependable systems through innovation in the areas of systems, languages, and tools. We are building a research operating system prototype (called Singularity), extending programming languages, and developing new techniques and tools for specifying and verifying program behavior.

Advances in languages, compilers, and tools open the possibility of significantly improving software. For example, Singularity uses type-safe languages and an abstract instruction set to enable what we call Software Isolated Processes (SIPs). SIPs provide the strong isolation guarantees of OS processes (isolated object space, separate GCs, separate runtimes) without the overhead of hardware-enforced protection domains. In the current Singularity prototype SIPs are extremely cheap; they run in ring 0 in the kernel's address space.

Singularity uses these advances to build more reliable systems and applications. For example, because SIPs are so cheap to create and enforce, Singularity runs each program, device driver, or system extension in its own SIP. SIPs are not allowed to share memory or modify their own code. As a result, we can make strong reliability guarantees about the code running in a SIP. We can verify much broader properties about a SIP at compile or install time than can be done for code running in traditional OS processes. Broader application of static verification is critical to predicting system behavior and providing users with strong guarantees about reliability.

See also: Singularity: Rethinking Dependable System Design; Singularity: Rethinking the Software Stack; Using the Singularity Research Development Kit.


Why Interstellar Travel Will Be Possible Sooner Than You Think – Singularity Hub

The term moonshot is sometimes invoked to denote a project so outrageously ambitious that it can only be described by comparing it to the Apollo 11 mission to land the first human on the Moon. The Breakthrough Starshot Initiative transcends the moonshot descriptor because its purpose goes far beyond the Moon. The aptly-named project seeks to travel to the nearest stars.

The brainchild of Russian-born tech entrepreneur billionaire Yuri Milner, Breakthrough Starshot was announced in April 2016 at a press conference joined by renowned physicists including Stephen Hawking and Freeman Dyson. While still early, the current vision is that thousands of wafer-sized chips attached to large, silver lightsails will be placed into Earth orbit and accelerated by the pressure of an intense Earth-based laser hitting the lightsail.

After just two minutes of being driven by the laser, the spacecraft will be traveling at one-fifth the speed of light, a thousand times faster than any macroscopic object has ever achieved.

Each craft will coast for 20 years and collect scientific data about interstellar space. Upon reaching the planets near the Alpha Centauri star system, the onboard digital camera will take high-resolution pictures and send them back to Earth, providing the first glimpse of our closest planetary neighbors. In addition to scientific knowledge, we may learn whether these planets are suitable for human colonization.

The team behind Breakthrough Starshot is as impressive as the technology. The board of directors includes Milner, Hawking, and Facebook co-founder Mark Zuckerberg. The executive director is S. Pete Worden, former director of NASA Ames Research Center. A number of prominent scientists, including Nobel and Breakthrough Laureates, are serving as advisors to the project, and Milner has promised $100 million of his own funds to begin work. He will encourage his colleagues to contribute $10 billion over the next several years for its completion.

While this endeavor may sound like science fiction, there are no known scientific obstacles to implementing it. This doesn't mean it will happen tomorrow: for Starshot to be successful, a number of technological advances are necessary. The organizers and advising scientists are relying upon the exponential rate of advancement to make Starshot happen within 20 years.

Here are 11 key Starshot technologies and how they are expected to advance exponentially over the next two decades.

An exoplanet is a planet outside our Solar System. While the first scientific detection of an exoplanet was only in 1988, as of May 1, 2017 there have been 3,608 confirmed detections of exoplanets in 2,702 planetary systems. While some resemble those in our Solar System, many have fascinating and bizarre features, such as rings 200 times wider than Saturn's.

The reason for this deluge of discoveries? A vast improvement in telescope technology.

Just 100 years ago the world's largest telescope was the 2.54-meter Hooker Telescope. Today, the European Southern Observatory's Very Large Telescope consists of four large 8.2-meter diameter telescopes and is now the most productive ground-based facility in astronomy, with an average of over one peer-reviewed, published scientific paper per day.

Researchers use the VLT and a special instrument to look for rocky extrasolar planets in the habitable zone (allowing liquid water) of their host stars. In May 2016, researchers using the Transiting Planets and Planetesimals Small Telescope (TRAPPIST) in Chile found three Earth-sized exoplanets around a single star; follow-up observations announced in February 2017 raised the count to seven, several of them in the habitable zone.

Meanwhile, in space, NASA's Kepler spacecraft is designed specifically for this purpose and has already identified over 2,000 exoplanets. The James Webb Space Telescope, scheduled for launch in October 2018, will offer unprecedented insight into whether exoplanets can support life. "If these planets have atmospheres, [JWST] will be the key to unlocking their secrets," according to Doug Hudgins, Exoplanet Program Scientist at NASA headquarters in Washington.

The Starshot mothership will be launched aboard a rocket and release a thousand starships. The cost of transporting a payload using one-time-only rockets is immense, but private launch providers such as SpaceX and Blue Origin have recently demonstrated success in reusable rockets which are expected to substantially reduce the price. SpaceX has already reduced costs to around $60 million per Falcon 9 launch, and as the private space industry expands and reusable rockets become more common, this price is expected to drop even further.

Each 15-millimeter-wide Starchip must contain a vast array of sophisticated electronic devices, such as a navigation system, camera, communication laser, radioisotope battery, camera multiplexer, and camera interface. The expectation that we'll be able to compress an entire spaceship onto a small wafer rests on exponentially decreasing sensor and chip sizes.

The first computer chips in the 1960s contained a handful of transistors. Thanks to Moore's Law, we can now squeeze billions of transistors onto each chip. The first digital camera weighed 8 pounds and took 0.01-megapixel images. Now, a digital camera sensor yields high-quality 12+ megapixel color images and fits in a smartphone, along with other sensors like GPS, an accelerometer, and a gyroscope. And we're seeing this improvement bleed into space exploration with the advent of smaller satellites providing better data.

For Starshot to succeed, we will need the chip's mass to be about 0.22 grams by 2030, and if the rate of improvement continues, projections suggest this is entirely possible.

The sail must be made of a material which is highly reflective (to gain maximum momentum from the laser), minimally absorbing (so that it is not incinerated by the heat), and very lightweight (allowing quick acceleration). These three criteria are extremely restrictive, and there is at present no satisfactory material.

The required advances may come from artificial intelligence automating and accelerating materials discovery. Such automation has advanced to the point where machine learning techniques can generate libraries of candidate materials by the tens of thousands, allowing engineers to identify which ones are worth pursuing and testing for specific applications.

While the Starchip will use a tiny nuclear-powered radioisotope battery for its 24-year-plus journey, we will still need conventional chemical batteries for the lasers. The lasers will need to deliver tremendous energy in a short span of time, meaning the power must be stored in nearby batteries.

Battery storage has improved at 5-8% per year, though we often don't notice this benefit because appliance power consumption has increased at a comparable rate, resulting in a steady operating lifetime. If batteries continue to improve at this rate, in 20 years they should have 3 to 5 times their present capacity. Continued innovation is expected to be driven by Tesla-SolarCity's big investment in battery technology. The companies have already installed close to 55,000 batteries on Kauai to power a large portion of the island's infrastructure.
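The arithmetic behind that projection is simple compounding. At the low and high ends of the quoted 5-8% annual improvement, 20 years yields roughly 2.7x and 4.7x, in line with the 3-to-5x figure:

```python
def capacity_multiplier(annual_rate, years=20):
    """Capacity gained by compounding a fixed annual improvement rate."""
    return (1 + annual_rate) ** years

low = capacity_multiplier(0.05)   # 5%/yr for 20 years -> ~2.7x
high = capacity_multiplier(0.08)  # 8%/yr for 20 years -> ~4.7x
print(f"{low:.1f}x to {high:.1f}x")
```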

Thousands of high-powered lasers will be used to push the lightsail to extraordinary speeds.

Lasers have obeyed Moore's Law at a nearly identical rate to integrated circuits, with the cost-per-power ratio halving every 18 months. In particular, the last decade has seen a dramatic acceleration in the power scaling of diode and fiber lasers, the former breaking through 10 kilowatts from a single-mode fiber in 2010 and the 100-kilowatt barrier a few months later. In addition to raw power, we will also need advances in combining phased-array lasers.
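A halving every 18 months compounds quickly. A small sketch of what that pace implies for cost per watt, taking the 18-month halving period from the text as the only input:

```python
def cost_per_watt_fraction(years, halving_months=18):
    """Fraction of today's laser cost-per-watt remaining after `years`,
    assuming the cost halves every `halving_months` months."""
    return 0.5 ** (years * 12 / halving_months)

# Fifteen years is exactly ten halvings: about 1/1000 of today's
# cost per watt, if the historical trend holds.
fraction_15y = cost_per_watt_fraction(15)
```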

Our ability to move quickly has itself moved quickly. The railroad, invented in 1804, eventually produced the hitherto unheard-of speed of 70 mph. The Helios 2 spacecraft eclipsed this record in 1976: at its fastest, Helios 2 was moving away from Earth at a speed of 356,040 km/h. Just 40 years later the New Horizons spacecraft achieved a heliocentric speed of almost 45 km/s, or 100,000 miles per hour. Yet even at these speeds it would take a long, long time to reach Alpha Centauri, slightly more than four light-years away.

While accelerating subatomic particles to nearly light speed is routine in particle accelerators, this has never been achieved for macroscopic objects. Achieving 20 percent of the speed of light for Starshot would represent a roughly 1,000x speed increase over any human-built object.
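Using the figures quoted above (New Horizons at almost 45 km/s as the benchmark), the speedup works out to roughly 1,300x, consistent with the rounded thousand-fold claim:

```python
C_KM_S = 299_792.458           # speed of light in km/s
new_horizons_km_s = 45.0       # heliocentric speed cited above
starshot_km_s = 0.2 * C_KM_S   # 20% of light speed, ~59,958 km/s

speedup = starshot_km_s / new_horizons_km_s
print(f"~{speedup:,.0f}x faster")  # on the order of a thousand-fold
```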

Fundamental to computing is the ability to store information. Starshot depends on the continued decreasing cost and size of digital memory to include sufficient storage for its programs and the images taken of Alpha Centauri star system and its planets.

The cost of memory has decreased exponentially for decades: in 1970, a megabyte cost about one million dollars; it's now about one-tenth of a cent. The size required for the storage has similarly decreased, from a 5-megabyte hard drive loaded by forklift in 1956 to today's 512-gigabyte USB sticks weighing a few grams.
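Those two price points imply a billion-fold decline, which corresponds to the cost of memory halving roughly every year and a half. The endpoints are the article's; treating "now" as 2017 is an assumption:

```python
import math

cost_per_mb_1970 = 1_000_000.0  # dollars per megabyte in 1970
cost_per_mb_now = 0.001         # about one-tenth of a cent per megabyte
span_years = 2017 - 1970        # assumed present-day endpoint

decline_factor = cost_per_mb_1970 / cost_per_mb_now     # one-billion-fold
halving_years = span_years / math.log2(decline_factor)  # ~1.6 years
```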

Once the images are taken, the Starchip will send them back to Earth for processing.

Telecommunications has advanced rapidly since Alexander Graham Bell invented the telephone in 1876. The average internet speed in the US is currently about 11 megabits per second. Sending digital images over 4 light-years, or 20 trillion miles, will require taking advantage of the latest telecommunications technology.

One promising technology is Li-Fi, a wireless approach that is 100 times faster than Wi-Fi. A second is optical fiber, which now boasts 1.125 terabits per second. There are even efforts in quantum telecommunications, which are not just ultrafast but completely secure.

The final step in the Starshot project is to analyze the data returning from the spacecraft. To do so we must take advantage of the exponential increase in computing power, benefiting from the trillion-fold increase in computing over the last 60 years.

This dramatic decrease in the cost of computing has continued due largely to cloud computing. Extrapolating into the future and taking advantage of new types of processing, such as quantum computing, we should see another thousand-fold increase in power by the time data from Starshot returns. Such extreme processing power will allow us to perform sophisticated scientific modeling and analysis of our nearest neighboring star system.
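As a hedged back-of-envelope check using only the figures in the text: a trillion-fold gain over 60 years implies a doubling roughly every 18 months, and at that pace another thousand-fold gain takes about 15 years, within the timescale of Starshot's 20-year cruise:

```python
import math

# A trillion-fold (1e12) gain over 60 years implies this doubling period.
doubling_years = 60 / math.log2(1e12)                  # ~1.5 years

# At that pace, a further thousand-fold (1e3) gain takes:
thousandfold_years = doubling_years * math.log2(1000)  # ~15 years
```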

Acknowledgements: The author would like to thank Pete Worden and Gregg Maryniak for suggestions and comments.

Image Credit:NASA/ESA/ESO


These 7 Disruptive Technologies Could Be Worth Trillions of Dollars – Singularity Hub

Scientists, technologists, engineers, and visionaries are building the future. Amazing things are in the pipeline. It's a big deal. But you already knew all that. Such speculation is common. What's less common? Scale.

How big is big?

"Silicon Valley, Silicon Alley, Silicon Dock, all of the Silicons around the world, they are dreaming the dream. They are innovating," Catherine Wood said at Singularity University's Exponential Finance in New York. "We are sizing the opportunity. That's what we do."

Wood is founder and CEO of ARK Investment Management, a research and investment company focused on the growth potential of today's disruptive technologies. Prior to ARK, she served as CIO of Global Thematic Strategies at AllianceBernstein for 12 years.

"We believe innovation is key to growth," Wood said. "We are not focused on the past. We are focused on the future. We think there are tremendous opportunities in the public marketplace because this shift toward passive [investing] has created a lot of risk aversion and tremendous inefficiencies."

In a new research report, released this week, ARK took a look at seven disruptive technologies, and put a number on just how tremendous they are. Here's what they found.

(Check out ARK's website and free report, "Big Ideas of 2017," for more numbers, charts, and detail.)

Deep learning is a subcategory of machine learning, which is itself a subcategory of artificial intelligence. Deep learning is the source of much of the hype surrounding AI today. (You know you may be in a hype bubble when ads tout AI during Sunday golf commercial breaks.)

Behind the hype, however, big tech companies are pursuing deep learning to do very practical things. And whereas the internet, which unleashed trillions in market value, transformed several industries (news, entertainment, advertising, etc.), deep learning will work its way into even more, Wood said.

As deep learning advances, it should automate and improve technology, transportation, manufacturing, healthcare, finance, and more. And as is often the case with emerging technologies, it may form entirely new businesses we have yet to imagine.

Bill Gates has said a breakthrough in machine learning would be worth 10 Microsofts. "Microsoft is $550 to $600 billion," Wood said. "We think deep learning is going to be twice that. We think [it] could approach $17 trillion in market cap, which would be 35 Amazons."

Wood didn't mince words about a future when cars drive themselves.

"This is the biggest change that the automotive industry has ever faced," she said.

Today's automakers have a global market capitalization of a trillion dollars. Meanwhile, mobility-as-a-service companies as a whole (think ridesharing) are valued around $115 billion. If this number took into account expectations of a driverless future, it'd be higher.

The mobility-as-a-service market, which will slash the cost of "point-to-point" travel, could be worth more than today's automakers combined, Wood said. Twice as much, in fact. As gross sales grow to something like $10 trillion in the early 2030s, her firm thinks some 20% of that will go to platform providers. It could be a $2 trillion opportunity.

Wood said a handful of companies will dominate the market, and Tesla is well positioned to be one of those companies. They are developing both the hardware (electric cars) and the software (self-driving algorithms). And although analysts tend to look at them as just an automaker right now, that's not all they'll be down the road.

"We think if [Tesla] got even 5% of this global market for autonomous taxi networks, it should be worth another $100 billion today," Wood said.

3D printing has become part of mainstream consciousness thanks, mostly, to the prospect of desktop printers at consumer prices. But these are imperfect, and the dream of an at-home replicator still eludes us. The manufacturing industry, however, is much closer to using 3D printers at scale.

Not long ago, we wrote about Carbon's partnership with Adidas to mass-produce shoe midsoles. This is significant because, whereas industrial 3D printing has focused on prototyping to date, improvements in cost, quality, and speed are making it viable for finished products.

According to ARK, 3D printing may grow into a $41 billion market by 2020, and Wood noted a McKinsey forecast of as much as $490 billion by 2025. "McKinsey will be right if 3D printing actually becomes a part of the industrial production process, so end-use parts," Wood said.

According to ARK, the cost of genome editing has fallen 28x to 52x (depending on reagents) in the last four years. CRISPR is the technique leading the genome editing revolution, dramatically cutting time and cost while maintaining editing efficiency. Despite its potential, Wood said she isn't hearing enough about it from investors yet.

"There are roughly 10,000 monogenic or single-gene diseases. Only 5% are treatable today," she said. ARK believes treating these diseases is worth an annual $70 billion globally. Other areas of interest include stem cell therapy research, personalized medicine, drug development, agriculture, biofuels, and more.

Still, the big names in this area (Intellia, Editas, and CRISPR) aren't on the radar.

"You can see, if a company in this space has a strong IP position, as Genentech did in 1980, then the growth rates can be enormous," Wood said. "Again, you don't hear these names, and that's quite interesting to me. We think there are very low expectations in that space."

By 2020, 75% of the world will own a smartphone, according to ARK. Amid smartphones' many uses, mobile payments will be one of the most impactful. Coupled with better security (biometrics) and wider acceptance (NFC and point-of-sale), ARK thinks mobile transactions could grow 15x, from $1 trillion today to upwards of $15 trillion by 2020.

In addition to making sharing economy transactions more frictionless, they are generally key to financial inclusion in emerging and developed markets, ARK says. And big emerging markets, such as India and China, are at the forefront, thanks to favorable regulations.

"Asia is leading the charge here," Wood said. "You look at companies like Tencent and Alipay. They are really moving very quickly towards mobile and actually showing us the way."

Robots aren't just for auto manufacturers anymore. Driven by continued cost declines and easier programming, more businesses are adopting robots. Amazon's robot workforce in warehouses has grown from 1,000 to nearly 50,000 since 2014. "And they have never laid off anyone, other than for performance reasons, in their distribution centers," Wood said.

But she understands fears over lost jobs.

This is only the beginning of a big round of automation driven by cheaper, smarter, safer, and more flexible robots. She agrees there will be a lot of displacement. Still, some commentators overlook the associated productivity gains. By 2035, Wood said, US GDP could be $12 trillion more than it would have been without robotics and automation; that's a $40 trillion economy instead of a $28 trillion economy.

"This is the history of technology. Productivity. New products and services. It is our job as investors to figure out where that $12 trillion is," Wood said. "We can't even imagine it right now. We couldn't imagine what the internet was going to do with us in the early '90s."

Blockchain-enabled cryptoassets, such as Bitcoin, Ethereum, and Steem, have caused more than a stir in recent years. In addition to Bitcoin, there are now some 700 cryptoassets of various shapes and hues. Bitcoin still rules the roost, with a market value of nearly $40 billion, up from just $3 billion two years ago, according to ARK. But it's only half the total.

"This market is nascent. There are a lot of growing pains taking place right now in the crypto world, but the promise is there," Wood said. "It's a very hot space."

Like all young markets, ARK says, cryptoasset markets are characterized by enthusiasm, uncertainty, and speculation. The firm's blockchain products lead, Chris Burniske, uses Twitter, where he says the community congregates, to take the temperature. In a recent Twitter poll, 62% of respondents said they believed the market's total value would exceed a trillion dollars in 10 years. In a followup, more focused on the trillion-plus crowd, 35% favored $1 to $5 trillion, 17% guessed $5 to $10 trillion, and 34% chose $10+ trillion.

Looking past the speculation, Wood believes there's at least one big area blockchain and cryptoassets are poised to break into: the $500-billion, fee-based business of sending money across borders known as remittances.

"If you look at the Philippines-to-South Korea corridor, what you're seeing already is that Bitcoin is 20% of the remittances market," Wood said. "The migrant workers who are transmitting currency, they don't know that Bitcoin is what's enabling such a low-fee transaction. It's the rails, effectively. They just see the fiat transfer. We think that that's going to be a very exciting market."

Stock media provided by NomadSoul1/Pond5.com


These 7 Disruptive Technologies Could Be Worth Trillions of Dollars - Singularity Hub

Singularity Summit comes to SA | IT-Online – IT-Online

Singularity University (SU), a global community using exponential technologies to tackle the world's greatest challenges, will hold its first international summit on the African continent. The two-day SingularityU South Africa Summit is being hosted in collaboration with Standard Bank and with key strategic partners, including Deloitte, MTN, 702, and SAP, and is being produced by Mann Made Media.

SingularityU South Africa Summit will convene exponential thought leaders, SU faculty, and organizations from around the world to provide participants with insights into emerging exponential technologies and how they can be used to create positive change and economic growth in the region.

Corporate South Africa realises the importance of change and the influence of innovation and technology across all sectors. In response, this unique summit in Johannesburg will present a display of advanced technologies, extensive debate, and collaborative discussions, offering an exchange of ideas and existing best practices in the fields of healthcare, cyberspace, AI, robotics, big data, finance, and design. In addition to expert presentations, participants will explore questions ranging from trending technological changes across the globe to their impact on industry growth and region-specific challenges. The Summit will also showcase African entrepreneurs and innovations in the interactive exhibitor halls.

"Singularity University is proud to be working with Standard Bank and Mann Made Media to host this first-ever SingularityU South Africa Summit, and to connect with Africa's leaders and organizations shaping the future," says Rob Nail, associate founder and CEO of Singularity University. "South Africa represents a microcosm of the challenges facing humanity worldwide and is fast gaining a solid reputation as a global centre. Through this Summit, we hope to connect and inspire leaders in the region to effect global impact."
SingularityU Summits are two-day conferences held around the globe to help local leaders understand how exponential technologies can be used to create positive change and economic growth in their region. Summits become an annual point of contact and inspiration for the local community, a catalyst to accelerate a local culture of innovation, and an opportunity to highlight breakthrough technologies, startups, and ideas. SingularityU Summits are attended by the general public, government officials, entrepreneurs, investors, NGOs, impact partners, and educators, and may include educational tracks for government and youth.


Forget Police Sketches: Researchers Perfectly Reconstruct Faces by Reading Brainwaves – Singularity Hub

Picture this: you're sitting in a police interrogation room, struggling to describe the face of a criminal to a sketch artist. You pause, wrinkling your brow, trying to remember the distance between his eyes and the shape of his nose.

Suddenly, the detective offers you an easier way: would you like to have your brain scanned instead, so that machines can automatically reconstruct the face in your mind's eye from reading your brain waves?

Sound fantastical? It's not. After decades of work, scientists at Caltech may have finally cracked our brain's facial recognition code. Using brain scans and direct neuron recordings from macaque monkeys, the team found specialized face patches that respond to specific combinations of facial features.

Like dials on a music mixer, each patch is fine-tuned to a particular set of visual information; the patches then combine in different ways to form a holistic representation of every distinctive face.

The values of each dial were so predictable that scientists were able to recreate a face the monkey saw simply by recording the electrical activity of roughly 200 brain cells. When placed together, the reconstruction and the actual photo were nearly indistinguishable.

"This was mind-blowing," says lead author Dr. Doris Tsao.

Even more incredibly, the work completely kills the dominant theory of facial processing, potentially ushering in a revolution in neuroscience, says Dr. Rodrigo Quian Quiroga, a neuroscientist at the University of Leicester who was not involved in the work.

On average, humans are powerful face detectors, beating even the most sophisticated face-tagging algorithms.

Most of us are equipped with the uncanny ability to spot a familiar set of features in a crowd of eyes, noses, and mouths. We can unconsciously process a new face in milliseconds, and, when exposed to that face over and over, often retain that memory for decades to come.

Under the hood, however, facial recognition is anything but simple. Why is it that we can detect a face under dim lighting, half obscured, or at a weird angle, but machines can't? What makes people's faces distinctively their own?

When light reflected off a face hits your retina, the information passes through several layers of neurons before it reaches a highly specialized region of the visual cortex: the inferotemporal (IT) region, a small nugget of tissue at the base of the brain. This region is home to face cells: groups of neurons that respond only to faces, not to objects such as houses or landscapes.

In the early 2000s, while recording from epilepsy patients with electrodes implanted into their brains, Quian Quiroga and colleagues found that face cells are particularly picky. So-called Jennifer Aniston cells, for example, would only fire in response to photos of her face and her face alone. The cells quietly ignored all other images, including those of her with Brad Pitt.

This led to a prevailing theory that still dominates the field: that the brain contains specialized face neurons that only respond to one or a few faces, but do so holistically.

But there's a problem: the theory doesn't explain how we process new faces, nor does it get into the nitty-gritty of how faces are actually encoded inside those neurons.

In a stroke of luck, Tsao and team blew open the black box of facial recognition while working on a different problem: how to describe a face mathematically, with a matrix of numbers.

Using a set of 200 faces from an online database, the team first identified landmark features and labeled them with dots. This created a large set of abstract dot-to-dot faces, similar to what filmmakers do during motion capture.

Then, using a statistical method called principal component analysis, the scientists extracted 25 measurements that best represented a given face. These measurements were mostly holistic: one shape dimension, for example, encodes changes in hairline, face width, and eye height.
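The extraction step described above is standard principal component analysis. Here is a minimal sketch of the idea using synthetic stand-in data (the landmark counts and random faces are my own illustrative assumptions, not the team's actual dataset):

```python
import numpy as np

# Hypothetical stand-in data: 200 faces, each a flattened vector of
# (x, y) landmark coordinates (assume 58 landmarks -> 116 numbers).
rng = np.random.default_rng(0)
faces = rng.normal(size=(200, 116))

# Center the data: PCA measures variation around the average face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# PCA via SVD: the top right-singular vectors are the directions of
# greatest variation across the face set.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:25]  # the 25 "shape dimensions"

# Projecting any centered face onto the components yields its
# 25 holistic measurements.
coords = centered @ components.T  # shape (200, 25)
```

Each row of `coords` is one face summarized by 25 numbers, and varying one coordinate moves several physical features at once, which is exactly why the dimensions are "holistic."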

By varying these shape dimensions, the authors generated a set of 2,000 black-and-white faces with slight differences in the distance between the brows, skin texture, and other facial features.

In macaque monkeys with electrodes implanted into their brains, the team recorded from three face patches (brain areas that respond especially to faces) while showing the monkeys the computer-generated faces.

As it turns out, each face neuron only cared about a single set of features. A neuron that only cares about hairline and skinny eyebrows, for example, would fire up when it detects variations in those features across faces. If two faces had similar hairlines but different mouths, those hairline neurons stayed silent.

What's more, cells in different face patches processed complementary information. The anterior medial face patch, for example, mainly responded to distances between features (what the team dubs "appearance"). Other patches fired up to information about shapes, such as the curvature of the nose or the length of the mouth.

In a way, these feature neurons are like compasses: they only activate when the measurement is off from a set point (magnetic north, for a compass). Scientists aren't quite sure how each cell determines its set point. However, combining all the set points generates a "face space": a sort of average face, or face atlas.

From there, when presented with a new face, each neuron will measure the difference between a feature (the distance between the eyes, for example) and the face atlas. Combine all those differences, and voilà, you have a representation of a new face.
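The axis-style encoding described above can be pictured with a toy model. This is my own illustrative sketch, not the paper's code: the face atlas sits at the origin of face space, each hypothetical neuron has a preferred axis, and its response is the face's signed deviation from the atlas along that axis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features = 25

# The "face atlas": the average face is the origin of face space,
# so any face is just its vector of differences from the atlas.
average_face = np.zeros(n_features)

# Each toy neuron has a preferred axis in face space; its response
# is a linear function of the face's projection onto that axis.
axes = rng.normal(size=(200, n_features))

def population_response(face):
    # Signed deviation of the face from the atlas along each axis.
    return axes @ (face - average_face)

new_face = rng.normal(size=n_features)
rates = population_response(new_face)

# A face that matches the atlas exactly evokes zero deviation.
assert np.allclose(population_response(average_face), 0)
```

The compass analogy falls out of the math: a neuron's output is zero at its set point (the atlas) and grows as the face drifts away along its preferred direction.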

Once the team figured out this division of labor, they constructed a mathematical model to predict how the patches process new faces.

Here's the cool part: the medley of features that best covered the entire shape and look of a face was fairly abstract, including the distance between the brows. Sound familiar? That's because the brain's preferred set of features was similar to the landmarks that the team had intuitively labeled to generate their face database.

"We thought we had picked it out of the blue," says Tsao.

"But it makes sense. If you look at methods for modeling faces in computer vision, almost all of them... separate out the shape and appearance," she explains. "The mathematical elegance of the system is amazing."

The team showed the monkeys a series of new faces while recording from roughly 200 neurons in the face patches. Using their mathematical model, they then calculated what features each neuron encodes for and how they combine.

The result? A stunningly accurate reconstruction of the faces the monkeys were seeing. So accurate, in fact, that the algorithm-generated faces were nearly indistinguishable from the originals.
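If each cell's rate really is a linear function of the face's feature values, reconstruction amounts to inverting that linear map. A hedged sketch of the decoding step (the encoding matrix, noise level, and dimensions here are illustrative assumptions, not the study's fitted values):

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_features = 200, 25

# Assumed linear encoding: each cell's rate is a weighted sum of the
# face's 25 feature values, plus a little recording noise.
encoding = rng.normal(size=(n_cells, n_features))
true_face = rng.normal(size=n_features)
rates = encoding @ true_face + 0.01 * rng.normal(size=n_cells)

# Decoding: least-squares inversion of the (fitted) encoding matrix
# recovers the face's feature vector from population activity alone.
decoded, *_ = np.linalg.lstsq(encoding, rates, rcond=None)

reconstruction_error = np.max(np.abs(decoded - true_face))
```

With 200 cells constraining only 25 unknowns, the system is heavily overdetermined, which is why such a small population suffices: the error stays tiny even with noisy rates.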

"It really speaks to how compact and efficient this feature-based neural code is," says Tsao, referring to the fact that such a small set of neurons contained sufficient information for a full face.

Tsao's work doesn't paint the full picture. The team only recorded from two out of six face patches, suggesting that other types of information processing may be happening alongside Tsao's model.

But the study breaks the "black box" norm that's plagued the field for decades.

"Our results demonstrate that at least one subdivision of IT cortex can be explained by an explicit, simple model, and black box explanations are unnecessary," the authors conclude (pretty sassy for an academic paper!).

While there aren't any immediate applications, the new findings could eventually guide the development of brain-machine interfaces that directly stimulate the visual cortex to give sight back to the blind. They could also help physicians understand why some people suffer from face blindness, and guide the engineering of prosthetics to help.

Image Credit: Doris Tsao


Get It While It’s Hot: Why Fintech Is a Goldmine for Investors – Singularity Hub

It's 1998 in Silicon Valley, and PayPal is born.

Many argue this was the moment that launched fintech as we know it. Today, fintech comprises roughly 15,000 startups globally, all focused on either enabling or disrupting the industry.

Fintech is still relatively new, and yet it has a remarkable amount of money flowing through it. A recent report from CB Insights found that VC-backed fintech companies raised $2.7 billion in the first quarter of 2017 alone. And the report says the global value of fintech's 22 unicorn companies amounts to $77 billion. While the pace of investment is likely to drop in the US this year, Europe saw an early spike in Q1.

The landscape is rich in opportunity for both investors and startups, from new lending, crowdfunding, and financial management platforms to novel payment, insurance, and investing services.

This week at Singularity University's Exponential Finance Summit in New York, Mike Sigal, partner at 500 Startups, gave the audience a snapshot of the current abundance in fintech and a look into how investors and entrepreneurs are viewing the market.

Founded in 2012, 500 Startups remains one of the most active early-stage investors in the world, according to Sigal. The company has made nearly 2,000 seed investments across 50 countries and has $330 million in capital invested.

Within fintech specifically, the firm has invested in over 200 companies across 27 countries and invests in almost 40 new companies each year.

Unlike some traditional VC firms that tend to keep a tight focus on a specific industry vertical, 500 Startups prides itself on maintaining an extremely diversified investment portfolio. So far, the results have been in its favor.

The company has four unicorns (startups valued at a billion dollars or more) in its portfolio, including Credit Karma, Grab, Stripe, and Twilio. It has also invested in 40 companies that are now each valued between $100 million and a billion dollars.

One of the most interesting things about the financial services industry, Sigal said, is that large portions of it remain untouched by digital technology.

Less than one percent of loans, for example, originate online. This means there's a lot of demand for new digital products to transform existing financial services.

"Think about just how much more could be done here," Sigal said.

In many ways, transforming financial services is the name of the game.

Over the last few years, there's been a massive shift in which companies control customer expectations around financial services. In fact, the companies now controlling user expectations are no longer the banks. Instead, Amazon, Apple, Google, Uber, and Facebook have set the tone ever since they moved into financial services.

The funny twist is that thirty percent of fintech investments still come from banks and insurers. In short, the big guys being disrupted are also willing to invest a lot of money in new solutions that could help them stay competitive.

Another huge opportunity in fintech is the three billion new smartphone users projected to enter the market by 2020. Sigal points out that many current financial services cannot serve this new population with their existing offerings.

Sigal recommends a few specific tactics for early-stage investors to use while selecting companies to invest in.

Most importantly, he notes that VCs often pursue "white space," where there's open market opportunity. Additionally, Sigal advises investors to pattern match by finding companies that are doing a mix of the following:

To wrap things up, Sigal played a game of fintech "hot or not" to test how well the audience could identify which technologies are hottest to investors today.

Sigal said many of the technologies receiving the most seed investment are the ones with the most practical market applications: for example, technology for sourcing customer data and technology that improves how banks sell to new customers.

Though artificial intelligence and blockchain are all the rage in Silicon Valley, Sigal explains they aren't necessarily the most appealing technologies to investors yet, unless they have a very clear, practical market application.

It's hard to say whether the rapid pace of capital flowing into fintech will continue, but for now, it seems extremely promising to both investors and entrepreneurs.

Cheesy or not, in the case of fintech, get it while its hot.

Image Credit: Pond5


At Exponential Finance, the Singularity University Explores Visionary Applications of Blockchains – Crypto Insider (press release) (blog)

The Exponential Finance Summit at the New York Marriott Marquis in Times Square, hosted by Singularity University, began on June 7 and ends today. You might still be in time to catch the live stream. Otherwise, stay tuned for our coverage of the most interesting points.

"Exponential Finance will help attendees navigate the rapid pace of change taking place in the financial sector due to exponential technologies," said Will Weisman, executive director for summits at SU. "We'll share tools that will help participants stay at the forefront and understand where to invest, how to protect their assets, and what it will take to remain competitive and successful in this new economy."

Co-founded in 2008 by Ray Kurzweil, now Google's director of engineering, and Peter Diamandis, founder and CEO of the XPrize, Singularity University (SU) is a unique educational and business community focused on exponential technologies able to tackle the world's biggest challenges.

Exponentially accelerating technologies like artificial intelligence (AI), advanced biotech, quantum computing, and robotics, described for example in the works of SU co-founder Ray Kurzweil, promise to bring faster and faster change to all industries. SU runs educational programs, conferences, incubators, and labs to help accelerate corporate and social innovation.

Of course, blockchain technology is a perfect example of exponential acceleration in financial technology (fintech). In 2015 and 2016, blockchain fintech was prominently featured at Exponential Finance. This year as well, the conference program is packed with blockchain-related topics, and starts with a pre-summit technology bootcamp on blockchain and distributed ledger technologies.

Other interesting blockchain-related talks include "The Boom of ICOs & Token Offerings: Has Blockchain Tech Unlocked a Pandora's Box for Venture Capital's Disruption?" by DLT Education president Robert Schwentker; "Blockchain" by BitNation partner Toni Lane Casserly; "Blockchain in Insurance" by Deloitte principal Eric Piscini and The Institutes CEO Pete Miller; and "Taking Blockchain to Production: An Ongoing Perspective" by Nuco CEO Matt Spoke.

Blockchain fintech is accelerating exponentially indeed. But it can be argued that financial applications, directly related to digital money and transactions, represent only the tip of the blockchain iceberg, and the most interesting future applications could be those that leverage the power of distributed ledgers in other industries such as AI, robotics, the Internet of Things (IoT), and self-driving cars.

"To have a contract with someone, you need to trust them, and you need to have an enforcement mechanism," noted Amin Toufani, vice president of strategic relations and director of strategy at SU. "Blockchain technology is showing us a path to where you can bypass both. Enforcement is automatic and you can trust your counterparty. What if your car offered some bitcoins to the car in front of you? And if and when the cars clear the lane, the contract is settled? Enforcement is automatic; you do not need to trust the cars in front of you."
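Toufani's lane-clearing scenario is, in essence, a conditional escrow: funds are locked up front and released automatically when an agreed condition is met, so neither party has to trust the other. A toy sketch of that logic in plain Python (the class, amounts, and "oracle" callback are illustrative assumptions, not any real smart-contract API):

```python
class LaneClearingContract:
    """Toy escrow mimicking a car-to-car smart contract."""

    def __init__(self, payer_balance, offer):
        # Funds are locked in escrow the moment the contract is created.
        self.escrow = offer
        self.payer_balance = payer_balance - offer
        self.settled = False

    def report_lane_cleared(self, cleared):
        # "Oracle" input: on a real blockchain, this condition would be
        # verified on-chain or fed in by a trusted data source.
        if cleared and not self.settled:
            self.settled = True
            payout, self.escrow = self.escrow, 0
            return payout  # released to the car ahead
        return 0


# The rear car offers 5 units; once the lane is cleared, the
# contract settles itself with no trusted intermediary.
contract = LaneClearingContract(payer_balance=100, offer=5)
payout = contract.report_lane_cleared(cleared=True)
```

The point of the sketch is the enforcement model: because the payment is escrowed before the maneuver and released by a condition rather than by either party, neither car can renege.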

To me, the potential to play a critical role in visionary developments with a potential to make the world a better place is the main appeal of blockchain technology.

Toufani's opening talk, titled "Exonomics," explored unifying themes among disruptive trends in business strategy, financial markets, cryptocurrencies, economic policy, and risk management. Stay tuned for our continued coverage next week.

Picture from Wikimedia Commons.
