Singularity – RationalWiki


There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.

A singularity is a sign that your model doesn't apply past a certain point, not infinity arriving in real life

A singularity, as most commonly used, is a point at which expected rules break down. The term comes from mathematics, where a point at which a curve's slope becomes infinite or undefined, or at which a function otherwise stops being well-behaved, is known as a singularity.
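A minimal worked example: the curve y = 1/x blows up as x approaches zero, so the point x = 0 is a singularity of the function:

```latex
f(x) = \frac{1}{x}, \qquad \lim_{x \to 0^{+}} f(x) = +\infty, \qquad \lim_{x \to 0^{-}} f(x) = -\infty
```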

The term has been extended into other fields; the most notable use is in astrophysics, where a singularity is a point (usually, but perhaps not exclusively, at the center of a black hole) where the curvature of spacetime approaches infinity.

This article, however, is not about the mathematical or physics uses of the term, but rather the borrowing of it by various futurists. They define a technological singularity as the point beyond which we can know nothing about the world. So, of course, they then write at length on the world after that time.

It's intelligent design for the IQ 140 people. This proposition that we're heading to this point at which everything is going to be just unimaginably different - it's fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can't obscure that fact for me, no matter what numbers he marshals in favor of it. He's very good at having a lot of curves that point up to the right.

In transhumanist belief, the "technological singularity" refers to a hypothetical point beyond which human technology and civilization are no longer comprehensible to the current human mind. The theory of technological singularity states that at some point in time humans will invent a machine that, through the use of artificial intelligence, will be smarter than any human could ever be. This machine in turn will be capable of inventing new technologies that are even smarter. This event will trigger an exponential explosion of technological advances, whose outcome and effect on humankind are heavily debated by transhumanists and singularitarians.

Many proponents of the theory believe that the machines will eventually see no use for humans on Earth and simply wipe us out; their intelligence being far superior to ours, there would probably be nothing we could do about it. They also fear that the use of extremely intelligent machines to solve complex mathematical problems may lead to our extinction: the machine may, in theory, respond to such a problem by turning all matter in our solar system or our galaxy into a giant calculator, thus destroying all of humankind.

Critics, however, believe that humans will never be able to invent a machine that matches human intelligence, let alone exceeds it. They also attack the methodology used to "prove" the theory, suggesting that Moore's Law may be subject to the law of diminishing returns, or that other metrics used by proponents to measure progress are entirely subjective and meaningless. Theorists like Theodore Modis argue that progress measured in metrics such as CPU clock speeds is slowing, undermining Moore's Law[3]. (As of 2015, not only is Moore's Law beginning to stall: Dennard scaling is long dead, the raw compute gained per additional transistor is subject to diminishing returns as we use more and more of them, Amdahl's Law and Wirth's law also have to be taken into account, and raw computing power simply doesn't scale linearly into real marginal utility. And even after all of that, we still haven't accounted for the fundamental limitations of conventional computing architecture. Moore's law suddenly doesn't look to be the panacea to our problems now, does it?)
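To make the Amdahl's Law point concrete, here is a minimal Python sketch; the 90% parallel fraction is just an assumed figure for illustration, not a measured one. It shows how the speedup from piling on more processors flattens out as long as any serial portion remains:

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Amdahl's Law: overall speedup when only part of a task can be parallelized."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# Assume 90% of the workload parallelizes perfectly (illustrative figure only).
for n in (1, 2, 4, 16, 256, 65536):
    print(f"{n:>6} processors -> {amdahl_speedup(0.9, n):5.2f}x speedup")
# Even with 65,536 processors the speedup never reaches 10x, because the
# 10% serial portion dominates: diminishing returns on raw parallel compute.
```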

Transhumanist thinkers see a chance of the technological singularity arriving on Earth within the twenty-first century, a concept that most[Who?] rationalists either consider a little too messianic in nature or ignore outright. Some of the wishful thinking may simply be the expression of a desire to avoid death, since the singularity is supposed to bring the technology to reverse human aging, or to upload human minds into computers. However, recent research, supported by singularitarian organizations including MIRI and the Future of Humanity Institute, does not support the hypothesis that near-term predictions of the singularity are motivated by a desire to avoid death; instead, it provides some evidence that many optimistic predictions about the timing of a singularity are motivated by a desire to "gain credit for working on something that will be of relevance, but without any possibility that their prediction could be shown to be false within their current career".[4][5]

Don't bother quoting Ray Kurzweil to anyone who knows a damn thing about human cognition or, indeed, biology. He's a computer science genius who has difficulty in perceiving when he's well out of his area of expertise.[6]

Eliezer Yudkowsky identifies three major schools of thinking when it comes to the singularity.[7] While all share common ground in advancing intelligence and rapidly developing technology, they differ in how the singularity will occur and the evidence to support the position.

Under the "accelerating change" school of thought, it is assumed that change and the development of technology and human (or AI-assisted) intelligence will accelerate at an exponential rate: change a decade ago was much faster than change a century ago, which in turn was faster than change a millennium ago. While thinking in exponential terms can lead to predictions about the future and the developments that will occur, it also means that past events are an unreliable source of evidence for making those predictions.
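A toy calculation makes the extrapolation concrete; the "doubling every decade" rate below is an arbitrary assumption chosen purely for illustration, not a measured figure:

```python
# Toy extrapolation (arbitrary assumption: the amount of "change" doubles every decade).
# Under that assumption, the coming decade contains more change than the whole past century.
past_decades = [2 ** k for k in range(10)]    # relative change in each of the last ten decades
total_past_century = sum(past_decades)        # 1023 relative units
next_decade = 2 ** 10                         # 1024 relative units
print(total_past_century, next_decade)        # 1023 1024: the next step dwarfs the entire past
```

This is why, under this school, the past looks like a poor guide to the future: every new period is supposed to outweigh everything that came before it.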

The "event horizon" school posits that the post-singularity world would be unpredictable. Here, the creation of a super-human artificial intelligence will change the world so dramatically that it would bear no resemblance to the current world, or even the wildest science fiction. This school of thought sees the singularity most like a single point event rather than a process indeed, it is this thesis that spawned the term "singularity." However, this view of the singularity does treat transhuman intelligence as some kind of magic.

The "intelligence explosion" school posits that the singularity is driven by a feedback cycle between intelligence-enhancing technology and intelligence itself. As Yudkowsky (who endorses this view) puts it: "What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they'd design the next generation of brain-computer interfaces." When this feedback loop of technology and intelligence begins to increase rapidly, the singularity is upon us.

There is also a fourth singularity school which is much more popular than the other three: It's all a load of baloney![8] This position is not popular with high-tech billionaires.[9]

This is largely dependent on your definition of "singularity".

The intelligence explosion singularity is by far the most unlikely. According to present calculations, a hypothetical future supercomputer may well not be able to replicate a human brain in real time. We don't even understand yet how intelligence works, and there is no evidence that intelligence is self-iterative in this manner; indeed, it is not unlikely that improvements on intelligence actually become more difficult the smarter you get, meaning that each further improvement is harder to execute. How much smarter than a human being something can even be is itself an open question. Energy requirements are another issue: humans can run off of Doritos and Mountain Dew, while supercomputers require vast amounts of energy to function. Unless such an intelligence can solve problems better than groups of humans, its greater intelligence may well not matter, as it may not be as efficient as groups of humans working together to solve problems.
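The "harder as you get smarter" objection can be made concrete with a toy model. In the sketch below every number (starting level, gain size, difficulty penalty) is invented purely for illustration; it is not anyone's actual model of intelligence, only a demonstration that the same feedback loop can plateau instead of exploding once returns diminish:

```python
# Toy comparison: recursive self-improvement with and without diminishing returns.
# All figures are arbitrary illustrative assumptions.

def self_improve(level: float, steps: int, harder_when_smarter: bool) -> float:
    for _ in range(steps):
        gain = 0.10 * level          # naive model: each step adds 10% of the current level
        if harder_when_smarter:
            gain /= level            # penalty: further gains get harder as the system gets smarter
        level += gain
    return level

print(self_improve(1.0, 100, harder_when_smarter=False))  # ~13,781: runaway "explosion"
print(self_improve(1.0, 100, harder_when_smarter=True))   # 11.0: merely linear growth, no explosion
```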

Another major issue arises from the nature of intellectual development; if an artificial intelligence needs to be raised and trained, it may well take twenty years or more between generations of artificial intelligences to get further improvements. More intelligent animals seem to generally require longer to mature, which may put another limitation on any such "explosion".

Accelerating change is questionable; in real life, the rate of patents per capita actually peaked in the 20th century and has declined slightly since then, despite the fact that human beings have become more intelligent and acquired superior tools. As noted above, Moore's Law has been in decline, and outside the realm of computers the rate of improvement in other technologies has not been exponential - airplanes and cars continue to improve, but not at the ridiculous rate of computers. It is likely that once computers hit the physical limits of transistor density, their rate of improvement will fall off dramatically; already, computers which are "good enough" continue to operate for many years, something which was unheard of in the 1990s, when old computers were rapidly and obviously made obsolete by new ones.

According to another point of view, the Singularity is a past event, and we already live in a post-Singularity world.

The rate of advancement has actually been declining in recent times: patents per capita have gone down, and the rate of increase of technology has fallen rather than risen, though the basal rate is higher than it was in centuries past. According to this point of view, the intelligence explosion and the increasing rate of change already happened with computers, and now that everyone has a handheld computing device, the rate of increase will decline as we hit natural barriers in how much additional benefit we gain from additional computing power. The rate at which transistors on microchips become denser has slowed by about a third, and the absolute limit on transistor density is approaching - a true, physical barrier which cannot be bypassed or broken, and which would require an entirely different means of computing to produce a still denser microchip.
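A rough back-of-the-envelope sketch of that barrier, using assumed round numbers rather than precise industry figures, suggests how few density doublings could remain before features reach atomic scale:

```python
import math

# Assumed round numbers, for illustration only: a ~20 nm real transistor feature today
# versus a hard floor of roughly one silicon lattice spacing (~0.5 nm).
feature_now_nm = 20.0
atomic_floor_nm = 0.5

# Each doubling of transistor density shrinks linear feature size by about 1/sqrt(2).
doublings_left = math.log(feature_now_nm / atomic_floor_nm, math.sqrt(2))
print(f"Roughly {doublings_left:.0f} density doublings left before atomic scale")  # ~11
```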

From the point of view of travel, humans have gone from walking to sailing to railroads to highways to airplanes, but communication has now reached the point where a lot of travel is obsolete - the Internet is omnipresent and allows us to communicate effectively with people in any corner of the planet without travelling at all. From this point of view, there is little further room for advancement, because for many purposes we can already effectively be anywhere on the planet instantly, and with improvements in automation, the amount of physical travel necessary for the average human being has declined in recent years. Instant global communication and the ability to communicate and do calculations from anywhere form a natural barrier beyond which further advancement is less meaningful, as it mostly just makes things more convenient - the cost is already extremely low.

The prevalence of computers and communications devices has completely changed the world, as has the presence of cheap, high-speed transportation technology. The world of the 21st century is almost unrecognizable to people from the founding of the United States in the latter half of the 18th century, or even to people from the height of the industrial era at the turn of the 20th century.

Extraterrestrial technological singularities might become evident through acts of stellar or cosmic engineering. One such possibility, for example, would be the construction of Dyson spheres, which would alter a star's electromagnetic spectrum in a way detectable from Earth. Both SETI and Fermilab have incorporated that possibility into their searches for alien life.[10][11]

A different view of the concept of singularity is explored in the science fiction book Dragon's Egg by Robert Lull Forward, in which an alien civilization on the surface of a neutron star, observed by human space explorers, goes from Stone Age to technological singularity in the space of about an hour of human time, leaving behind a large quantity of encrypted data that humanity is expected to need over a million years just to develop the technology to decrypt.

No signs of extraterrestrial civilizations have been found as of 2016.
