The vast gulf between current technology and theoretical singularity

An interesting pair of news posts caught my eye this week, and they're worth presenting for general discussion. First, VentureBeat has an interview with futurologist Ray Kurzweil, who made waves in 2005 with his book The Singularity Is Near. In it, Kurzweil posits that we're approaching a point at which human intelligence will begin to evolve in ways we cannot predict.

The assumption is that our superintelligent computers (or brains) will allow us to effectively reinvent what being human means. In our present state, we are, by definition, incapable of understanding what human society would look like after such a shift.


Meanwhile, Google is working to put its neural network technology to work on different sorts of problems. This past summer, the company taught its network how to recognize a cat by showing it YouTube videos. Specifically, it showed 16,000 processors enough cat videos that the network learned to recognize "cat" on its own, without human intervention. Total visual accuracy, according to the initial paper, is about 16%. The new announcement is about applying similar strategies to language processing, teaching computers to understand the specifics of human speech.
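The key idea behind Google's result is unsupervised feature learning: the network is never told what a cat is, only asked to reconstruct its input, and "cat detector" features emerge on their own. The sketch below shows that idea in miniature with a toy autoencoder; the sizes, data, and single hidden layer are all illustrative assumptions, nothing like Google's nine-layer, 16,000-core system.

```python
import numpy as np

# Toy illustration of unsupervised feature learning, in the spirit of
# (but vastly simpler than) the autoencoder Google trained on YouTube
# frames. All sizes and data here are made up for the example.
rng = np.random.default_rng(0)

n_pixels, n_features = 64, 8          # tiny "images", tiny feature layer
X = rng.random((200, n_pixels))       # stand-in for unlabeled video frames

W_enc = rng.normal(0, 0.1, (n_pixels, n_features))
W_dec = rng.normal(0, 0.1, (n_features, n_pixels))
lr = 0.01

for _ in range(500):
    H = np.maximum(X @ W_enc, 0)      # encode: features fire on patterns
    X_hat = H @ W_dec                 # decode: reconstruct the input
    err = X_hat - X                   # no labels anywhere: reconstruction
    W_dec -= lr * H.T @ err / len(X)  # error itself is the only "teacher"
    W_enc -= lr * X.T @ (err @ W_dec.T * (H > 0)) / len(X)

print(np.mean(err ** 2))  # reconstruction error, lower than at the start
```

The point of the exercise is the absence of labels: whatever structure the features capture, they capture it from the data alone, which is what made the cat result notable.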

Kurzweil, as you can see in the video at the bottom, is a persuasive speaker, and Google's success with teaching a network to recognize cats really is impressive. Reading stories like these, however, I come away skeptical. It's not that I doubt the individual achievements, or that they can be improved; it's that focusing on specific achievements ignores the greater problem: We have no idea how to build a brain.

Kurzweil uses advances in scanning resolution and genetic engineering together as proof that at some point, we'll be able either to program cell structures to do the things we want far more effectively than we currently can, or simply to build mechanical analogs. On some scale, this is probably true. The nematode worm Caenorhabditis elegans has 302 neurons. We could build a neural network (or neural network analog) with 302 nodes fairly easily; Google's neural node structure is far more complex than that.
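Just how easy "302 nodes" is can be shown in a few lines. The sketch below instantiates a 302-node network; the random wiring, sparsity, and tanh dynamics are placeholder assumptions, and everything the article argues actually matters (neuron types, neurotransmitters, the worm's real connectome) is exactly what this leaves out.

```python
import numpy as np

N_NEURONS = 302  # C. elegans' full complement of neurons
rng = np.random.default_rng(0)

# Random sparse wiring as a placeholder. The worm's actual connectome
# has been mapped, but merely having 302 nodes says nothing about
# reproducing its behavior.
mask = rng.random((N_NEURONS, N_NEURONS)) < 0.05
W = rng.normal(0, 0.1, (N_NEURONS, N_NEURONS)) * mask

state = rng.random(N_NEURONS)
for _ in range(10):               # a few update steps
    state = np.tanh(W @ state)    # toy dynamics, not worm biology

print(state.shape)  # (302,)
```

Matching the neuron count takes seconds; matching the worm takes a model of everything the count omits.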

Unfortunately, just having nodes isn't enough. The human brain has an estimated 100 billion neurons and 100 trillion synapses. Different neurons are designed for different tasks, and they respond to different stimuli. They respond to and release an incredibly complex series of neurotransmitters, the functions of which we don't entirely understand. It's not enough to say "Yes, the brain is complex"; the brain is complex in ways that dwarf the best CPUs we can build, and it does its work while consuming an average of 20W.


This is where Moore's Law is typically trotted out, but it's a wretched comparison. Scientists have already demonstrated transistors as small as 10 atoms wide, while your average neuron is between 4 and 100 microns across. If groups of transistors equaled neural networks, brains would be no problem. It's not that simple. We don't know how to build synapse networks at anything like the appropriate densities. We don't even know whether consciousness is an emergent property of sufficiently dense neural structures or not.
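The size comparison in that paragraph is worth making explicit. Taking roughly 0.2nm per silicon atom (an approximate figure assumed here for scale), a 10-atom transistor is thousands of times smaller than even the smallest neuron, which is precisely why raw feature size is the wrong bottleneck to worry about:

```python
# Rough scale comparison: experimental transistors vs. neurons.
si_atom_nm = 0.2                        # approximate silicon atomic spacing
transistor_nm = 10 * si_atom_nm         # "10 atoms wide" is about 2 nm
neuron_um_small, neuron_um_large = 4, 100

# 1 micron = 1,000 nm
print(neuron_um_small * 1000 / transistor_nm)  # smallest neuron: ~2,000x wider
print(neuron_um_large * 1000 / transistor_nm)  # largest: ~50,000x wider
```

We can already build switches far smaller than neurons; what we can't do is wire them into anything resembling a synapse network.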

Self-driving cars (an example Kurzweil mentions) are a sophisticated application of refined models, meshed with sensor networks on the vehicle and additional positional data gathered from orbit. They're an example of how gathering more information and correlating it more quickly allows us to create a better program, but they aren't smart. Our best neural networks are single-task predictors that gather information at a glacial pace compared with the brain.
