
Category Archives: Singularity

Singularity Art Show Tonight In San Francisco!

Posted: January 23, 2017 at 10:11 pm

Art and Science may not always be the best of friends, but when they do get together they throw one hell of a party. The Undivided Mind art show opens tonight, Friday, November 19th, in San Francisco with free admission, wine, and conversation for those interested in the Singularity, Transhumanism, and technology. The show will feature oil paintings created from digital images originally designed by the mysterious and provocative Imaginary Foundation. I had a chance to talk with Micah Daigle, the "Director of Meta-Pattern Affairs" for The Undivided Mind. He promises a great evening of art and science to those who make the journey to the Fifty24SF gallery space tonight. If you can't make it, more's the pity, but the work will be on display every afternoon this week (Nov 20 to Nov 28). As you can see in the photos below, The Undivided Mind promises to be a unique experience. Why are there chalkboard equations covering the walls? Guess I'll have to go and find out.

Those who make it to The Undivided Mind should find it full of futurists and aficionados of the Singularity. Expected attendees include Jason Silva, Michael Anissimov, and Michael Vassar. While there won't be any planned presentations or speeches, Daigle told me there would be a hunt for the Higgs boson. The person who finds the 'God particle' will win a free painting from the show. (Someone should warn the guys at CERN.)

Despite the wacky particle hijinks, tonight's discussion should be a fairly pertinent one. The Imaginary Foundation has made it its mission to enable human progress through the use of art (and clothing), and The Undivided Mind aims to explore how art and science could combine to guide us through the disruptive technological changes on the horizon. In other words, the art show should be enlightening as well as fun. Sounds like my cup of tea. Those who attend should feel free to post some comments on the event below.

[image credits: Imaginary Foundation] [sources: Micah Daigle, Imaginary Foundation]

Read the rest here:

Singularity Art Show Tonight In San Francisco!


The Singularity Is Near – Wikipedia

Posted: January 22, 2017 at 12:04 pm

The Singularity Is Near: When Humans Transcend Biology is a 2005 non-fiction book about artificial intelligence and the future of humanity by inventor and futurist Ray Kurzweil.

The book builds on the ideas introduced in Kurzweil's previous books, The Age of Intelligent Machines (1990) and The Age of Spiritual Machines (1999). This time, however, Kurzweil embraces the term the Singularity, which Vernor Vinge had popularized more than a decade earlier in his 1993 essay "The Coming Technological Singularity".

Kurzweil describes his law of accelerating returns which predicts an exponential increase in technologies like computers, genetics, nanotechnology, robotics and artificial intelligence. He says this will lead to a technological singularity in the year 2045, a point where progress is so rapid it outstrips humans' ability to comprehend it.

Kurzweil predicts the technological advances will irreversibly transform people as they augment their minds and bodies with genetic alterations, nanotechnology, and artificial intelligence. Once the Singularity has been reached, Kurzweil says that machine intelligence will be infinitely more powerful than all human intelligence combined. Afterwards he predicts intelligence will radiate outward from the planet until it saturates the universe.

Kurzweil characterizes evolution throughout all time as progressing through six epochs, each one building on the one before. He says the four epochs which have occurred so far are Physics and Chemistry, Biology and DNA, Brains, and Technology. Kurzweil predicts the Singularity will coincide with the next epoch, The Merger of Human Technology with Human Intelligence. After the Singularity he says the final epoch will occur, The Universe Wakes Up.

Kurzweil explains that evolutionary progress is exponential because of positive feedback: the results of one stage are used to create the next stage. Exponential growth is deceptive, nearly flat at first until it hits what Kurzweil calls "the knee in the curve", then rises almost vertically. In fact, Kurzweil believes evolutionary progress is super-exponential because more resources are deployed to the winning process. As an example of super-exponential growth Kurzweil cites the computer chip business. The overall budget for the whole industry increases over time, since the fruits of exponential growth make it an attractive investment; meanwhile the additional budget fuels more innovation which makes the industry grow even faster, effectively an example of "double" exponential growth.
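To make that feedback loop concrete, here is a minimal numeric sketch of the "double" exponential dynamic; every constant is invented for illustration and is not a figure from the book.

```python
# Toy model of "double" exponential growth: the yearly growth factor of
# capability scales with an industry budget, and the budget itself grows
# because exponential progress attracts investment.
capability, budget = 1.0, 1.0
for year in range(31):
    if year % 5 == 0:
        print(f"year {year:2d}: capability {capability:12.1f}  budget {budget:5.1f}")
    capability *= 1.0 + 0.05 * budget  # growth rate rises as budget rises
    budget *= 1.10                     # success draws in more investment
```

Because the growth factor itself keeps climbing, the trajectory eventually outruns any fixed-rate exponential, which is the sense in which the process is super-exponential.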

Kurzweil says evolutionary progress looks smooth, but that really it is divided into paradigms, specific methods of solving problems. Each paradigm starts with slow growth, builds to rapid growth, and then levels off. As one paradigm levels off, pressure builds to find or develop a new paradigm. So what looks like a single smooth curve is really a series of smaller S-curves. For example, Kurzweil notes that when vacuum tubes stopped getting faster and cheaper, transistors became popular and continued the overall exponential growth.
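The paradigm picture is easy to reproduce numerically. In this sketch (constants are illustrative, not Kurzweil's), each paradigm is a logistic S-curve arriving a decade after the last and an order of magnitude more capable; the sum of the flattening curves still tracks a smooth exponential.

```python
import math

def paradigm(t, midpoint, scale):
    """One S-curve: slow start, rapid growth, then leveling off."""
    return scale / (1.0 + math.exp(-(t - midpoint)))

def total_capability(t):
    # Paradigms peak every 10 years; each is 10x more capable than the last.
    return sum(paradigm(t, mid, 10.0 ** (mid // 10)) for mid in range(5, 56, 10))

for t in range(0, 51, 10):
    print(f"t={t:2d} years: capability ~ {total_capability(t):10.1f}")
```

On a log scale the sum rises roughly linearly even though every individual paradigm stalls, mirroring the hand-off from vacuum tubes to transistors.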

Kurzweil calls this exponential growth the law of accelerating returns, and he believes it applies to many human-created technologies such as computer memory, transistors, microprocessors, DNA sequencing, magnetic storage, the number of Internet hosts, Internet traffic, decrease in device size, and nanotech citations and patents. Kurzweil cites two historical examples of exponential growth: the Human Genome Project and the growth of the Internet. Kurzweil claims the whole world economy is in fact growing exponentially, although short term booms and busts tend to hide this trend.

Moore's Law predicts the capacity of integrated circuits grows exponentially, but not indefinitely. Kurzweil feels the increase in the capacity of integrated circuits will probably slow by the year 2020. He feels confident that a new paradigm will debut at that point to carry on the exponential growth predicted by his law of accelerating returns. Kurzweil describes four paradigms of computing that came before integrated circuits: electromechanical, relay, vacuum tube, and transistors. What technology will follow integrated circuits, to serve as the sixth paradigm, is unknown, but Kurzweil believes nanotubes are the most likely alternative among a number of possibilities:

nanotubes and nanotube circuitry, molecular computing, self-assembly in nanotube circuits, biological systems emulating circuit assembly, computing with DNA, spintronics (computing with the spin of electrons), computing with light, and quantum computing.

Since Kurzweil believes computational capacity will continue to grow exponentially long after Moore's Law ends, it will eventually rival the raw computing power of the human brain. Kurzweil looks at several different estimates of how much computational capacity is in the brain and settles on 10^16 calculations per second and 10^13 bits of memory. He writes that $1,000 will buy computer power equal to a single brain "by around 2020", while by 2045, the onset of the Singularity, he says the same amount of money will buy one billion times more power than all human brains combined today. Kurzweil admits the exponential trend in increased computing power will hit a limit eventually, but he calculates that limit to be trillions of times beyond what is necessary for the Singularity.
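Taken together, those two claims pin down an implied doubling rate for price-performance. A rough consistency check (one assumption of mine: about 10^10 humans in 2045; the 10^16 cps brain estimate is from the text):

```python
import math

brain_cps = 1e16                   # Kurzweil's estimate for one human brain
cps_2020 = brain_cps               # $1,000 buys one brain-equivalent ~2020
cps_2045 = 1e9 * 1e10 * brain_cps  # a billion times ~10^10 human brains

doublings = math.log2(cps_2045 / cps_2020)
print(f"{doublings:.0f} doublings in 25 years -> "
      f"one every {25 * 12 / doublings:.1f} months")
```

That works out to about 63 doublings in 25 years, one every 4.8 months or so: much faster than the classic Moore's Law cadence, consistent with Kurzweil's claim that the rate of improvement itself accelerates.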

Kurzweil notes that computational capacity alone will not create artificial intelligence. He asserts that the best way to build machine intelligence is to first understand human intelligence. The first step is to image the brain, to peer inside it. Kurzweil claims imaging technologies such as PET and fMRI are increasing exponentially in resolution while he predicts even greater detail will be obtained during the 2020s when it becomes possible to scan the brain from the inside using nanobots. Once the physical structure and connectivity information are known, Kurzweil says researchers will have to produce functional models of sub-cellular components and synapses all the way up to whole brain regions. The human brain is "a complex hierarchy of complex systems, but it does not represent a level of complexity beyond what we are already capable of handling".

Beyond reverse engineering the brain in order to understand and emulate it, Kurzweil introduces the idea of "uploading" a specific brain with every mental process intact, to be instantiated on a "suitably powerful computational substrate". He writes that general modeling requires 10^16 calculations per second and 10^13 bits of memory, but then explains uploading requires additional detail, perhaps as many as 10^19 cps and 10^18 bits. Kurzweil says the technology to do this will be available by 2040. Rather than an instantaneous scan and conversion to digital form, Kurzweil feels humans will most likely experience gradual conversion as portions of their brain are augmented with neural implants, increasing their proportion of non-biological intelligence slowly over time.
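The gap between the modeling and uploading estimates is three orders of magnitude of computing power. Under an assumed one-year doubling time for price-performance (my assumption for illustration, not a figure from the book), that gap spans roughly a decade, which is at least the right shape for the spread between Kurzweil's mid-2020s modeling milestone and his 2040 uploading date:

```python
import math

model_cps = 1e16   # functional whole-brain model (per the text)
upload_cps = 1e19  # upload with every mental process intact (per the text)

doublings = math.log2(upload_cps / model_cps)
print(f"{doublings:.1f} doublings -> ~{doublings:.0f} years at one doubling/year")
```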

Kurzweil believes there is "no objective test that can conclusively determine" the presence of consciousness. Therefore he says nonbiological intelligences will claim to have consciousness and "the full range of emotional and spiritual experiences that humans claim to have"; he feels such claims will generally be accepted.

Kurzweil says revolutions in genetics, nanotechnology and robotics will usher in the beginning of the Singularity. Kurzweil feels with sufficient genetic technology it should be possible to maintain the body indefinitely, reversing aging while curing cancer, heart disease and other illnesses. Much of this will be possible thanks to nanotechnology, the second revolution, which entails the molecule by molecule construction of tools which themselves can "rebuild the physical world". Finally, the revolution in robotics will really be the development of strong AI, defined as machines which have human-level intelligence or greater. This development will be the most important of the century, "comparable in importance to the development of biology itself".

Kurzweil concedes that every technology carries with it the risk of misuse or abuse, from viruses and nanobots to out-of-control AI machines. He believes the only countermeasure is to invest in defensive technologies, for example by allowing new genetics and medical treatments, monitoring for dangerous pathogens, and creating limited moratoriums on certain technologies. As for artificial intelligence Kurzweil feels the best defense is to increase the "values of liberty, tolerance, and respect for knowledge and diversity" in society, because "the nonbiological intelligence will be embedded in our society and will reflect our values".

Kurzweil touches on the history of the Singularity concept, tracing it back to John von Neumann in the 1950s and I. J. Good in the 1960s. He compares his Singularity to that of a mathematical or astrophysical singularity. While his idea of a Singularity is not actually infinite, he says it looks that way from any limited perspective.

During the Singularity, Kurzweil predicts that "human life will be irreversibly transformed" and that humans will transcend the "limitations of our biological bodies and brain". He looks beyond the Singularity to say that "the intelligence that will emerge will continue to represent the human civilization." Further, he feels that "future machines will be human, even if they are not biological".

Kurzweil claims that once nonbiological intelligence predominates, the nature of human life will be radically altered: there will be radical changes in how humans learn, work, play, and wage war. Kurzweil envisions nanobots which allow people to eat whatever they want while remaining thin and fit, provide copious energy, fight off infections or cancer, replace organs, and augment their brains. Eventually people's bodies will contain so much augmentation that they'll be able to alter their "physical manifestation at will".

Kurzweil says the law of accelerating returns suggests that once a civilization develops primitive mechanical technologies, it is only a few centuries before it achieves everything outlined in the book, at which point it will start expanding outward, saturating the universe with intelligence. Since people have found no evidence of other civilizations, Kurzweil believes humans are likely alone in the universe. Thus Kurzweil concludes it is humanity's destiny to do the saturating, enlisting all matter and energy in the process.

As for individual identities during these radical changes, Kurzweil suggests people think of themselves as an evolving pattern rather than a specific collection of molecules. Kurzweil says evolution moves towards "greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity, and greater levels of subtle attributes such as love". He says that these attributes, in the limit, are generally used to describe God. That means, he continues, that evolution is moving towards a conception of God and that the transition away from biological roots is in fact a spiritual undertaking.

Kurzweil does not include an actual written timeline of the past and future, as he did in The Age of Intelligent Machines and The Age of Spiritual Machines; however, he still makes many specific predictions. Kurzweil writes that by 2010 a supercomputer will have the computational capacity to emulate human intelligence and "by around 2020" this same capacity will be available "for one thousand dollars". After that milestone he expects human brain scanning to contribute to an effective model of human intelligence "by the mid-2020s". These two elements will culminate in computers that can pass the Turing test by 2029. By the early 2030s the amount of non-biological computation will exceed the "capacity of all living biological human intelligence". Finally the exponential growth in computing capacity will lead to the Singularity. Kurzweil spells out the date very clearly: "I set the date for the Singularity – representing a profound and disruptive transformation in human capability – as 2045".

A common criticism of the book relates to the "exponential growth fallacy". As an example: in 1969, man landed on the moon. Extrapolating exponential growth from there, one would expect huge lunar bases and manned missions to distant planets. Instead, exploration stalled or even regressed after that. Paul Davies writes "the key point about exponential growth is that it never lasts",[43] often due to resource constraints.

Theodore Modis says "nothing in nature follows a pure exponential" and suggests the logistic function is a better fit for "a real growth process". The logistic function looks like an exponential at first but then tapers off and flattens completely. For example, world population and U.S. oil production both appeared to be rising exponentially, but both have leveled off because the underlying processes were logistic. Kurzweil says "the knee in the curve" is the time when the exponential trend is going to explode, while Modis claims that if the process is logistic, then when you hit the "knee" the quantity you are measuring will only increase by a factor of 100 more.[44]
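Modis' objection is easy to see numerically: an exponential and a logistic with matched early behavior are nearly indistinguishable until the logistic nears its ceiling. The constants below are illustrative.

```python
import math

def exponential(t):
    return math.exp(0.5 * t)

def logistic(t):
    # Same starting value and early growth rate, but capped at 100.
    return 100.0 / (1.0 + 99.0 * math.exp(-0.5 * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}: exponential {exponential(t):10.1f}   logistic {logistic(t):6.1f}")
```

Whether a measured trend is the front half of an S-curve or a durable exponential is precisely the disagreement between Modis and Kurzweil.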

While some critics complain that the law of accelerating returns is not a law of nature,[43] others question the religious motivations or implications of Kurzweil's Singularity. The buildup towards the Singularity is compared with Judeo-Christian end-of-time scenarios. Alex Beam calls it "a Buck Rogers vision of the hypothetical Christian Rapture".[45] John Gray says "the Singularity echoes apocalyptic myths in which history is about to be interrupted by a world-transforming event".[46]

The radical nature of Kurzweil's predictions is often discussed. Anthony Doerr says that before you "dismiss it as techno-zeal" consider that "every day the line between what is human and what is not quite human blurs a bit more". He lists technology of the day, in 2006, like computers that land supersonic airplanes or in vitro fertility treatments, and asks whether brain implants that access the internet, or robots in our blood, are really that unbelievable.[47]

In regard to reverse engineering the brain, neuroscientist David J. Linden writes that "Kurzweil is conflating biological data collection with biological insight". He feels that data collection might be growing exponentially, but insight is increasing only linearly. For example, the speed and cost of sequencing genomes are also improving exponentially, but our understanding of genetics is growing very slowly. As for nanobots, Linden believes the spaces available in the brain for navigation are simply too small. He acknowledges that someday we will fully understand the brain, just not on Kurzweil's timetable.[48]

Paul Davies wrote in Nature that The Singularity is Near is a "breathless romp across the outer reaches of technological possibility" while warning that the "exhilarating speculation is great fun to read, but needs to be taken with a huge dose of salt."[43]

Anthony Doerr in The Boston Globe wrote "Kurzweil's book is surprisingly elaborate, smart, and persuasive. He writes clean methodical sentences, includes humorous dialogues with characters in the future and past, and uses graphs that are almost always accessible."[47] His colleague Alex Beam points out that "Singularitarians have been greeted with hooting skepticism".[45] Janet Maslin in The New York Times wrote "The Singularity is Near is startling in scope and bravado", but says "much of his thinking tends to be pie in the sky". She observes that he's more focused on optimistic outcomes than on the risks.[49]

In 2006, Barry Ptolemy and his production company Ptolemaic Productions licensed the rights to The Singularity Is Near from Kurzweil. Inspired by the book, Ptolemy directed and produced the film Transcendent Man, which went on to critical and commercial success in 2009,[50] bringing more attention to the book.

Kurzweil has also directed his own adaptation, called The Singularity is Near, which mixes documentary with a science-fiction story involving his robotic avatar Ramona's transformation into an artificial general intelligence. It was screened at the World Film Festival, the Woodstock Film Festival, the Warsaw International FilmFest, the San Antonio Film Festival in 2010 and the San Francisco Indie Film Festival in 2011. The movie was released generally on July 20, 2012.[51] It is available on DVD or digital download[52] and a trailer is available.[53]

The 2014 film Lucy is roughly based upon the predictions made by Kurzweil about what the year 2045 will look like, including the immortality of man.[54]

Read the original here:

The Singularity Is Near - Wikipedia


Technological singularity – Wikipedia

Posted: December 15, 2016 at 7:02 pm

The technological singularity (also, simply, the singularity)[1] is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.[2] According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a 'runaway reaction' of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence.

John von Neumann first used the term "singularity" in the context of technological progress causing accelerating change: "The accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, can not continue."[6] Subsequent authors have echoed this viewpoint.[2][3] I. J. Good's "intelligence explosion" model predicted that a future superintelligence would trigger a singularity.[4] Science fiction author Vernor Vinge said in his essay The Coming Technological Singularity that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.[4]

At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial general intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040.[5]

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good's scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[6]
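Good's scenario can be caricatured in a few lines of code. In this toy model, every constant is invented: each generation designs its successor, and design ability grows with current intelligence, so the improvements compound ever faster.

```python
# Toy "intelligence explosion": the per-generation improvement factor
# grows with the current intelligence level, so progress compounds.
intelligence = 1.0  # 1.0 = human baseline
generation = 0
while intelligence < 1e6 and generation < 50:
    improvement = 1.0 + 0.1 * intelligence ** 0.5
    intelligence *= improvement
    generation += 1
    if generation % 5 == 0 or intelligence >= 1e6:
        print(f"generation {generation:2d}: intelligence {intelligence:14.1f}")
```

Early generations crawl; once intelligence is far past baseline, each generation multiplies its ability many times over, until some physical limit (absent from this toy) finally intervenes.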

John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world.[4][7]

Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[8][9][10] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[4] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore's Law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[9][11]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose Moore's Law is often cited in support of the concept.[12][13][14]

The exponential growth in computing technology suggested by Moore's Law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's Law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[15] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[16]) increases exponentially, generalizing Moore's Law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[17] Between 1986 and 2007, machines' application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world's general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world's storage capacity per capita doubled every 40 months.[18]
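For comparison across those four series, a doubling time of m months corresponds to an annual growth factor of 2^(12/m):

```python
# Convert the doubling times quoted above into annual growth rates.
doubling_months = {
    "application-specific compute per capita": 14,
    "general-purpose compute per capita": 18,
    "telecom capacity per capita": 34,
    "storage capacity per capita": 40,
}
for name, months in doubling_months.items():
    factor = 2 ** (12 / months)
    print(f"{name}: x{factor:.2f} per year ({100 * (factor - 1):.0f}% annual growth)")
```

That is roughly 81%, 59%, 28% and 23% per year, respectively.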

Kurzweil reserves the term "singularity" for a rapid increase in intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine".[19] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence."[20]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress, Ulam tells of a conversation with the late John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[21]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "Law of Accelerating Returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history".[22] Kurzweil believes that the singularity will occur by approximately 2045.[23] His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's Wired magazine article "Why the future doesn't need us".[3][24]

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[25]

Steven Pinker stated in 2008:

(...) There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles – all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. (...)[12]

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally [...], no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. [...] [T]he machinery has no beliefs, desires, [or] motivations.[26]

Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future[27] postulates a "technology paradox" in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be "routine".[28]

Jared Diamond, in Collapse: How Societies Choose to Fail or Succeed, argues that cultures self-limit when they exceed the sustainable carrying capacity of their environment, and the consumption of strategic resources (frequently timber, soils or water) creates a deleterious positive feedback loop that leads eventually to social collapse and technological retrogression.

Theodore Modis[29][30] and Jonathan Huebner[31] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advancements in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[32] While Kurzweil used Modis' resources, and Modis' work was around accelerating change, Modis distanced himself from Kurzweil's thesis of a "technological singularity", claiming that it lacks scientific rigor.[30]

Others propose that other "singularities" can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[33]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore's law to 19th-century computers.[34]

In a 2007 paper, Schmidhuber stated that the frequency of subjectively "notable events" appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[35]

Paul Allen argues for the opposite of accelerating returns, the "complexity brake":[14] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns, but in fact, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[36] a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since.[31] The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse".

Jaron Lanier disputes the idea that the Singularity is inevitable. He states: "I do not think the technology is creating itself. It's not an autonomous process."[37] He goes on to assert: "The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity, and self-determination ... to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics."[37]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good.[38]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary "events" were picked arbitrarily.[39] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[40]

The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[41][42] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[43][44] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute.[41]

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description. In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes... With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction".

The article argues that from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life's evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, "the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5x10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1x10^19 bytes. The digital realm stored 500 times more information than this in 2014 (...see Figure)... The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3x10^37 base pairs, equivalent to 1.325x10^37 bytes of information. If growth in digital storage continues at its current rate of 30-38% compound annual growth per year,[18] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years".[45]
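The arithmetic in that quotation can be checked directly; the short script below, using only the numbers given in the quoted article, reproduces both the roughly 500-fold ratio and the century-scale horizon.

```python
import math

# Figures as quoted: 7.2e9 humans, 6.2e9 nucleotides each, 4 per byte.
human_genomes_bytes = 7.2e9 * 6.2e9 / 4
digital_2014_bytes = 5e21        # ~5 zettabytes stored in 2014
all_dna_bytes = 5.3e37 / 4       # all DNA in all cells on Earth

print(f"all human genomes: ~{human_genomes_bytes:.1e} bytes")
print(f"digital-to-genome ratio: ~{digital_2014_bytes / human_genomes_bytes:.0f}x")
for cagr in (0.30, 0.38):
    years = math.log(all_dna_bytes / digital_2014_bytes) / math.log(1 + cagr)
    print(f"at {cagr:.0%} annual growth: ~{years:.0f} years to rival all DNA on Earth")
```

The ratio comes out near 450x and the horizon between about 110 and 135 years depending on the growth rate used, in line with the article's "500 times" and "about 110 years".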

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[46]

Some machines have acquired various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[46]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[47]

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[48] Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy: after synthetic viruses with specific genetic information are created, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[49]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious".[50] Singularitarianism has also been likened to a religion by John Horgan.[51]

In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."[21]

In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence. In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time.[3][52]
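Solomonoff's "infinity point" rests on a convergent geometric series: infinitely many speed doublings fit inside a finite span. With the first doubling taking four years and each later one half as long:

```latex
\text{total time} = \sum_{k=0}^{\infty} 4 \left(\tfrac{1}{2}\right)^{k}
                  = \frac{4}{1 - \tfrac{1}{2}} = 8 \text{ years}
```

so within eight years the community's speed, and hence its capability, exceeds any finite bound.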

In 1983, Vinge greatly popularized Good's intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term "singularity" in a way that was specifically tied to the creation of intelligent machines, writing:[53][54]

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible.

Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era",[4] spread widely on the internet and helped to popularize the idea.[55] This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[4]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[24]

In 2005, Kurzweil published The Singularity is Near. Kurzweil's publicity campaign included an appearance on The Daily Show with Jon Stewart.[56]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting.[9][57] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.[9]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges."[58] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA's Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.[59][60][61]

President of the United States Barack Obama spoke about the singularity in his interview with Wired in 2016:[62]

One thing that we haven't talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren't spending a lot of time right now worrying about singularity – they are worrying about "Well, is my job going to be replaced by a machine?"

The singularity is referenced in innumerable science-fiction works. In Greg Bear's sci-fi novel Blood Music (1983), a singularity occurs in a matter of hours.[4] David Brin's Lungfish (1987) proposes that AI be given humanoid bodies and raised as our children and taught the same way we were.[63] In William Gibson's 1984 novel Neuromancer, artificial intelligences capable of improving their own programs are strictly regulated by special "Turing police" to ensure they never exceed a certain level of intelligence, and the plot centers on the efforts of one such AI to circumvent their control.[63][64] In Greg Benford's 1998 "Me/Days", it is legally required that an AI's memory be erased after every job.[63]

The entire plot of Wally Pfister's Transcendence centers on an unfolding singularity scenario.[65] The 2013 science fiction film Her follows a man's romantic relationship with a highly intelligent AI, who eventually learns how to improve herself and creates an intelligence explosion. The 1982 film Blade Runner and the 2015 film Ex Machina are two mildly dystopian visions of the impact of artificial general intelligence. Unlike Blade Runner, Her and Ex Machina both attempt to present "plausible" near-future scenarios that are intended to strike the audience as "not just possible, but highly probable".[66]

Follow this link:

Technological singularity - Wikipedia


Singularity University – Wikipedia

Posted: November 29, 2016 at 1:30 am

Singularity University (abbreviated SU) is a Silicon Valley think tank that offers educational programs and a business incubator.[2][3] According to its website, it focuses on scientific progress and "exponential" technologies.[4] It was founded in 2008 by Peter Diamandis and Ray Kurzweil at the NASA Research Park in California, United States.[5]

Singularity University initially offered an annual 10-week summer program and has since added conference series, classes, and a business incubator for startups and corporate teams.[6]

Instruction is offered in eleven areas.[7][8] Singularity University was created in 2009 based on Ray Kurzweil's theory of the "technological singularity". Kurzweil believes that emerging technologies like nanotechnology and biotechnology will massively increase human intelligence over the next two decades and fundamentally reshape the economy and society.[9] In 2012, Singularity University, then a non-profit, began the process of converting to a benefit corporation, combining non-profit and for-profit aspects.[10] In 2013, the new for-profit corporation incorporated as "Singularity Education Group" and acquired the descriptive "Singularity University" as its trade name.[11]

In 2015, Singularity University and Yunus Social Business (YSB) announced a partnership at the World Economic Forum to use "accelerating technologies" and social entrepreneurship for global development in developing areas of the world where YSB is active.[12][13]

Singularity University also partners with organizations to sponsor annual "Global Impact Competitions", based on a theme and geography.[14][15]

Singularity University is overseen by a Board of Trustees.[16] Rob Nail, one of the organization's Associate Founders, was named CEO of Singularity University in October 2011.[17] As of 2013, Nicholas Haan is Director of "Global Grand Challenges".

Corporate founding partners and sponsors include Google,[18] Nokia,[19] Autodesk,[20] IDEO, LinkedIn, ePlanet Capital,[21] the X Prize Foundation, the Kauffman Foundation and Genentech.[22]

Students at Singularity University's "Global Solutions Program" (GSP, formerly the "Graduate Studies Program") learn about new technologies, and work together over the summer to start companies.[23] In 2012, the Global Solutions Program class had 80 students, with an average age of 30.[24] In 2015, Google agreed to provide $1.5 million annually for two years to make the program free to participants.[25] The 80 students are selected from over 3,000 applicants each year.[23] A substantial portion of the GSP class comes from the winners of SU's sponsored "Global Impact Competitions".[25]

The Executive Program is targeted to corporate leaders, and focuses on how rapid changes in technology will impact businesses.[23]

In 2013, Singularity University announced a three-year partnership with Deloitte and XPRIZE called the "Innovation Partnership Program" (IPP), a multi-year series of events where Fortune 500 executives partner with startups.[26] It includes an array of workshops on crowdsourcing, the advancement of "exponential" technologies, and how to innovate through incentivized competitions. Executives from 30 large companies, including Google, Shell, Qualcomm, The Hershey Company and Sprint, met for the first four-day executive summit.[26]

Singularity University has an "Exponential Regional Partnership" with SingularityU The Netherlands. This partnership program serves to help prepare European society and European companies for exponential technologies and give them the tools to use these technologies to meet Global Grand Challenges. The Netherlands was chosen as a starting point for international expansion because of the social, creative and innovative environment with rapid adoption rates for new technologies.[27] Water, food, healthcare and mobility, traditional strengths of the Dutch economy, are the main focal points.

SingularityU The Netherlands has its own local faculty. This faculty consists of European scientists and domain experts who have been selected because SingularityU considers them to be at the top of their respective fields.

In 2016, SingularityU The Netherlands organized a Global Impact Competition to find the most innovative Dutch entrepreneurs with ideas that leverage exponential technologies to enhance the lives of refugees.[28] Danny Wagemans, a 21-year-old nanophysics student, won first prize: participation in the 10-week Global Solutions Program. He demonstrated how clean water and energy can be derived from urine by combining a microbial fuel cell and a graphene filter in a water bottle.[29]

An Innovation Hub that allows people to experience exponential technologies has been started in Eindhoven as part of the Exponential Regional Partnership. This Innovation Hub was officially opened by Queen Máxima of the Netherlands, in the presence of numerous representatives of the corporate community, government and innovators. Eindhoven was chosen for this hub as it is the heart of the Brainport region, one of Europe's most important tech clusters.[30]

Singularity University hosts annual conferences focused on "exponentially accelerating technologies", and their impact on fields such as finance, medicine and manufacturing.[31] The conferences are produced with Deloitte,[31] as well as CNBC for the "Exponential Finance" conference.[32]

Singularity Hub is a science and tech media website published by Singularity University.[33] Singularity Hub was founded in 2008 [33] with the mission of "providing news coverage of sci/tech breakthroughs that are rapidly changing human abilities, health, and society".[34] It was acquired by Singularity University in 2012, to make content produced by Singularity University more accessible.[34]

SU Labs is a seed accelerator run by Singularity University, targeting startups which aim to "change the lives of a billion people".[35]

The company "Made In Space", which has developed a 3D printer adapted to the constraints of space travel, was founded at Singularity University. The first prototype of Made in Space, the "Zero-G Printer", was developed with NASA and sent into space in September, 2014.[36]

In 2011, a Singularity University group launched Matternet, a startup that aims to harness drone technology to ship goods in developing countries that lack highway infrastructure. Other startups from SU are the peer-to-peer car-sharing service Getaround, and BioMine, which uses mining technologies to extract value from electronic waste.[7]

In 2013, Singularity University and the U.S. Fund for UNICEF announced a partnership to create technologies to improve the lives of vulnerable people in developing countries.[37][38]

Coordinates: 37°24′55″N 122°03′46″W (37.415229, -122.062650)

Read the original post:

Singularity University - Wikipedia


Singularity | Singularity

Posted: October 31, 2016 at 2:54 am

Singularity enables users to have full control of their environment. This means that a non-privileged user can swap out the operating system on the host for one they control. So if the host system is running RHEL6 but your application runs in Ubuntu, you can create an Ubuntu image, install your applications into that image, copy the image to another host, and run your application on that host in its native Ubuntu environment!
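As a concrete illustration of that workflow, here is a minimal sketch driven from Python. The `singularity exec` subcommand is part of the Singularity command line; the image filename is hypothetical, and this is an illustrative sketch rather than the project's own documentation.

```python
import subprocess

IMAGE = "ubuntu.img"  # hypothetical container image prepared elsewhere

def run(cmd):
    """Echo a command, then run it, raising if it fails."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# The container reports its own OS, not the host's: on a RHEL6 host this
# still prints the Ubuntu release information baked into the image.
run(["singularity", "exec", IMAGE, "cat", "/etc/os-release"])
```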


Singularity also allows you to leverage the resources of whatever host you are on. This includes HPC interconnects, resource managers, file systems, GPUs and/or accelerators, etc.



Go here to see the original:

Singularity | Singularity


What is Singularity (the)? – Definition from WhatIs.com

Posted: at 2:54 am

The Singularity is the hypothetical future creation of superintelligent machines. Superintelligence is defined as a technologically-created cognitive capacity far beyond that possible for humans. Should the Singularity occur, technology will advance beyond our ability to foresee or control its outcomes and the world will be transformed beyond recognition by the application of superintelligence to humans and/or human problems, including poverty, disease and mortality.

Revolutions in genetics, nanotechnology and robotics (GNR) in the first half of the 21st century are expected to lay the foundation for the Singularity. According to Singularity theory, superintelligence will be developed by self-directed computers and will increase exponentially rather than incrementally.

Lev Grossman explains the prospective exponential gains in capacity enabled by superintelligent machines in an article in Time:

Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks...

Proposed mechanisms for adding superintelligence to humans include brain-computer interfaces, biological alteration of the brain, artificial intelligence (AI) brain implants and genetic engineering. Post-singularity, humanity and the world would be quite different. A human could potentially scan his consciousness into a computer and live eternally in virtual reality or as a sentient robot. Futurists such as Ray Kurzweil (author of The Singularity is Near) have predicted that in a post-Singularity world, humans would typically live much of the time in virtual reality -- which would be virtually indistinguishable from normal reality. Kurzweil predicts, based on mathematical calculations of exponential technological development, that the Singularity will come to pass by 2045.

Most arguments against the possibility of the Singularity involve doubts that computers can ever become intelligent in the human sense. The human brain and cognitive processes may simply be more complex than a computer could be. Furthermore, because the human brain is analog, with theoretically infinite values for any process, some believe that it cannot ever be replicated in a digital format. Some theorists also point out that the Singularity may not even be desirable from a human perspective because there is no reason to assume that a superintelligence would see value in, for example, the continued existence or well-being of humans.

Science-fiction writer Vernor Vinge first used the term "the Singularity" in this context in the 1980s, when he used it in reference to the British mathematician I. J. Good's concept of an "intelligence explosion" brought about by the advent of superintelligent machines. The term is borrowed from physics; in that context a singularity is a point where the known physical laws cease to apply.

See also: Asimov's Three Laws of Robotics, supercomputer, cyborg, gray goo, IBM's Watson supercomputer, neural networks, smart robot

Video: Neil deGrasse Tyson vs. Ray Kurzweil on the Singularity

This was last updated in February 2016

See original here:

What is Singularity (the)? - Definition from WhatIs.com


Downloads – Singularity Viewer

Posted: August 25, 2016 at 4:31 pm

Please pay attention to the following vital information before using Singularity Viewer.

Singularity Viewer only supports SSE2-compliant CPUs. All computers manufactured in 2004 or later should have one.

Warning: RLVa is enabled by default, which permits your attachments to take more extensive control of the avatar than default behavior of other viewers. Foreign, rezzed in-world, non-worn objects can only take control of your avatar if actively permitted by corresponding scripted attachments you wear. Please refer to documentation of your RLV-enabled attachments for details, if you have any.

Singularity Viewer 1.8.7(6861) Setup

Compatible with 64-bit versions of Windows Vista, Windows 7, Windows 8 and newer. A known limitation is the lack of support for the QuickTime plugin, which means that certain types of parcel media will not play. Streaming music and shared media (MoaP) are not affected and are fully functional.

Compatible with OS X 10.6 and newer, Intel CPU.

Make sure you have 32-bit versions of gstreamer-plugins-base, gstreamer-plugins-ugly and libuuid1 installed. The package has been built on Debian Squeeze and should work on a variety of distributions.

For voice to work, minimal support for running 32-bit binaries is necessary. libasound_module_pcm_pulse.so may be needed. Possible package names: lib32asound2-plugins (squeeze), alsa-plugins-pulseaudio.i686 (fedora), libasound2-plugins:i386 (debian/ubuntu).

If you receive "The following media plugin has failed: media_plugin_webkit", you may need to install the package containing libpangox-1.0.so.0 for your distribution (it could be pangox-compat).

To add all the skins, extract this package into the viewer install directory: usually C:\Program Files\Singularity on Windows, /Applications/Singularity.app/Contents/Resources/ on Mac, and wherever you extracted the tarball on Linux. Just merge the extracted skins directory with the existing skins directory; there should be no conflicts.
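For the merge step itself, here is a small cross-platform sketch (my own convenience script, not part of the official package; both paths below are examples you would adjust):

    # Merge an extracted 'skins' directory into the viewer install directory.
    # Requires Python 3.8+ for the dirs_exist_ok flag; files with matching
    # names are overwritten, everything else is left in place.
    import shutil

    extracted_skins = r"C:\Downloads\skins_package\skins"   # where you unpacked (example)
    install_skins = r"C:\Program Files\Singularity\skins"   # Windows default install

    shutil.copytree(extracted_skins, install_skins, dirs_exist_ok=True)
    print("Skins merged into", install_skins)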

Read the original here:

Downloads - Singularity Viewer

Posted in Singularity | Comments Off on Downloads – Singularity Viewer

Amazon.com: Singularity [Online Game Code]: Video Games

Posted: at 4:31 pm

This review might read a little strange because I am going to list a lot of things wrong with the game, then tell you to buy it anyway. The long and short of it is that Raven Software, a group of developers who have been making FPS games for almost two decades, really liked Bioshock. A lot.

While Raven's previous titles from recent years were very old-school (Quake 4 still had floating weapon pick-ups), Singularity plays much more like a modern console FPS game. Singularity is slow-paced, has various sections that concentrate more on light puzzle-solving than shooting, and even has long stretches added purely for atmosphere. Speaking of atmosphere, the game has it in spades... great environments, cool effects, audio and written logs, films, music, the works. The game uses the Unreal Engine 3 as well, though it has brighter colors and more vibrant environments than many games on the same engine. Does it sound like Bioshock yet? Okay, how about this: you get special powers that you upgrade and add new abilities to by spending an in-game currency. Warm yet? How about: the game is mostly linear, but has some side paths you can travel for more pick-ups.

What I am driving home here is that Raven might as well have called this Bioshock 3. The UE3 engine, the vibrant colors, the pick-ups, the powers, the slower pace, the light puzzles... everything is influenced by Irrational Games' underwater masterpiece. This makes it a very easy game to review, though, because basically if you want another Bioshock, then get this game. It is polished and does the Bioshock style well, and the storyline and location... a Russian experimental weapons base you visit long after a disaster of epic proportions, ruined, dark and wet just like Bioshock's Rapture... are well presented.

See the article here:

Amazon.com: Singularity [Online Game Code]: Video Games

Posted in Singularity | Comments Off on Amazon.com: Singularity [Online Game Code]: Video Games

Singularity – Mass Effect Wiki – Wikia

Posted: at 4:31 pm

Mass Effect

This gravitational power sucks multiple enemies within a radius to a single area, leaving them floating helplessly and vulnerable to attack. It can also attract objects from the environment, such as crates or pieces of furniture; enemies will take damage if they collide with other solid objects in the Singularity field.

Talent Ranks

These classes have access to the Singularity talent:

Note: This power travels in the direction of the cross-hair, arcing towards the target. Upon impact, it will create the Singularity. Liara's Singularity travels in a straight line, instantly creating a singularity at the targeted location.

Rank 4

Choose to evolve the power into one of the following:

Create a sphere of dark energy that traps and dangles enemies caught in its field. Increase recharge speed by 25%. Increase Singularity's hold duration by 20%. Increase impact radius by 20%.

Duration: Increase Singularity's hold duration by 30%. Additional enemies can be lifted before Singularity fades.
Radius: Increase impact radius by 25%.
Lift Damage: Inflict 20 damage per second to lifted targets.
Recharge Speed: Increase recharge speed by 30%.
Expand: Expand the Singularity field by 35% for 10 seconds.
Detonate: Detonate Singularity when the field dies to inflict 300 damage across 5 meters.

Create a sphere of dark energy that traps and dangles enemies caught in its field. Increase recharge speed by 25%. Increase damage by 20%.

Duration: Increase Singularity's hold duration by 150%.
Radius: Increase impact radius by 35%.
Lift Damage: Inflict 50 damage per second to lifted targets.
Recharge Speed: Increase recharge speed by 35%.
Damage: Increase damage by 50%.
Detonate: Detonate Singularity when the field dies to inflict 500 damage across 7 meters.

Link:

Singularity - Mass Effect Wiki - Wikia

Posted in Singularity | Comments Off on Singularity – Mass Effect Wiki – Wikia

Singularity – RationalWiki

Posted: July 18, 2016 at 3:37 pm

There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.

A singularity is a sign that your model doesn't apply past a certain point, not infinity arriving in real life.

A singularity, as most commonly used, is a point at which expected rules break down. The term comes from mathematics, where a point at which a curve's slope becomes undefined or infinite is known as a singularity.
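As a standard textbook illustration (my example, not from the original article), the function f(x) = 1/x behaves this way at x = 0:

    % A classic mathematical singularity: f(x) = 1/x is undefined at x = 0,
    % and |f(x)| grows without bound as x approaches 0 from either side.
    \[
      f(x) = \frac{1}{x}, \qquad
      \lim_{x \to 0^{+}} f(x) = +\infty, \qquad
      \lim_{x \to 0^{-}} f(x) = -\infty .
    \]

The formula simply stops giving meaningful answers at that point, which is exactly the sense the futurists borrow.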

The term has been extended into other fields; the most notable use is in astrophysics, where a singularity is a point (usually, but perhaps not exclusively, at the center of a black hole) where the curvature of spacetime approaches infinity.

This article, however, is not about the mathematical or physics uses of the term, but rather the borrowing of it by various futurists. They define a technological singularity as the point beyond which we can know nothing about the world. So, of course, they then write at length on the world after that time.

It's intelligent design for the IQ 140 people. This proposition that we're heading to this point at which everything is going to be just unimaginably different - it's fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can't obscure that fact for me, no matter what numbers he marshals in favor of it. He's very good at having a lot of curves that point up to the right.

In transhumanist belief, the "technological singularity" refers to a hypothetical point beyond which human technology and civilization is no longer comprehensible to the current human mind. The theory of technological singularity states that at some point in time humans will invent a machine that, through the use of artificial intelligence, will be smarter than any human could ever be. This machine in turn will be capable of inventing new technologies that are even smarter. This event will trigger an exponential explosion of technological advances, whose outcome and effect on humankind are heavily debated by transhumanists and singularitarians.

Many proponents of the theory believe that the machines will eventually see no use for humans on Earth and simply wipe us out; their intelligence being far superior to ours, there would probably be nothing we could do about it. They also fear that the use of extremely intelligent machines to solve complex mathematical problems may lead to our extinction. The machine may theoretically respond to our question by turning all matter in our solar system or our galaxy into a giant calculator, thus destroying all of humankind.

Critics, however, believe that humans will never be able to invent a machine that will match human intelligence, let alone exceed it. They also attack the methodology used to "prove" the theory by suggesting that Moore's Law may be subject to the law of diminishing returns, or that other metrics used by proponents to measure progress are totally subjective and meaningless. Theorists like Theodore Modis argue that progress measured in metrics such as CPU clock speeds is decreasing, refuting Moore's Law.[3] (As of 2015, not only is Moore's Law beginning to stall: Dennard scaling is long dead, raw compute power per transistor is subject to diminishing returns as more and more transistors are used, Amdahl's Law and Wirth's Law must also be taken into account, and raw computing power simply doesn't scale linearly into real marginal utility. And even after all that, the fundamental limitations of conventional computing architecture still haven't been taken into account. Moore's Law suddenly doesn't look like the panacea to our problems now, does it?)
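Amdahl's Law, one of the limits named above, is easy to state: if a fraction p of a workload can be parallelized across n processing units, the overall speedup is S(n) = 1 / ((1 - p) + p/n). The sketch below (mine, for illustration; the 95% figure is an arbitrary assumption) shows how quickly the returns flatten:

    # Amdahl's Law: speedup from parallelizing a fraction p of a task over n units.
    # Even with unlimited hardware, speedup is capped at 1 / (1 - p).
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    if __name__ == "__main__":
        p = 0.95  # assume 95% of the work parallelizes
        for n in (2, 8, 64, 1024, 1_000_000):
            print(f"{n:>9} units: {amdahl_speedup(p, n):6.2f}x")
        # Output approaches the 1 / 0.05 = 20x ceiling; a million units barely
        # beat a thousand. This is the diminishing-returns effect cited above.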

Transhumanist thinkers see a chance of the technological singularity arriving on Earth within the twenty-first century, a concept that most[Who?] rationalists either consider a little too messianic in nature or ignore outright. Some of the wishful thinking may simply be the expression of a desire to avoid death, since the singularity is supposed to bring the technology to reverse human aging, or to upload human minds into computers. However, recent research, supported by singularitarian organizations including MIRI and the Future of Humanity Institute, does not support the hypothesis that near-term predictions of the singularity are motivated by a desire to avoid death, but instead provides some evidence that many optimistic predictions about the timing of a singularity are motivated by a desire to "gain credit for working on something that will be of relevance, but without any possibility that their prediction could be shown to be false within their current career".[4][5]

Don't bother quoting Ray Kurzweil to anyone who knows a damn thing about human cognition or, indeed, biology. He's a computer science genius who has difficulty in perceiving when he's well out of his area of expertise.[6]

Eliezer Yudkowsky identifies three major schools of thinking when it comes to the singularity.[7] While all share common ground in advancing intelligence and rapidly developing technology, they differ in how the singularity will occur and the evidence to support the position.

Under this school of thought, it is assumed that change and the development of technology and human (or AI-assisted) intelligence will accelerate at an exponential rate. So change a decade ago was much faster than change a century ago, which in turn was faster than change a millennium ago. While thinking in exponential terms can lead to predictions about the future and the developments that will occur, it also means that past rates of change are an unreliable guide for making these predictions, since the rate itself keeps changing.

The "event horizon" school posits that the post-singularity world would be unpredictable. Here, the creation of a super-human artificial intelligence will change the world so dramatically that it would bear no resemblance to the current world, or even the wildest science fiction. This school of thought sees the singularity most like a single point event rather than a process indeed, it is this thesis that spawned the term "singularity." However, this view of the singularity does treat transhuman intelligence as some kind of magic.

This school posits that the singularity is driven by a feedback cycle between intelligence-enhancing technology and intelligence itself. As Yudkowsky (who endorses this view) puts it: "What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they'd design the next generation of brain-computer interfaces." When this feedback loop of technology and intelligence begins to increase rapidly, the singularity is upon us.
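That feedback cycle is easy to caricature numerically. The toy model below is entirely illustrative (the growth exponent alpha and the constant k are free parameters, not empirical claims): each generation's improvement depends on current intelligence, and the exponent decides whether the process explodes or fizzles.

    # Toy recursive self-improvement: each generation, intelligence I grows by
    # k * I**alpha. alpha > 1 produces a runaway "intelligence explosion";
    # alpha < 1 fizzles into diminishing returns; alpha == 1 is steady
    # exponential growth.
    def run(alpha, k=0.1, generations=50, start=1.0, cap=1e12):
        intelligence = start
        for _ in range(generations):
            intelligence += k * intelligence ** alpha
            if intelligence > cap:
                return intelligence, True  # past the cap: "explosion"
        return intelligence, False

    if __name__ == "__main__":
        for alpha in (0.5, 1.0, 1.5):
            final, exploded = run(alpha)
            print(f"alpha={alpha}: final={final:.3g}, exploded={exploded}")

Which regime reality falls into, i.e. whether each gain in intelligence makes the next gain easier or harder, is precisely what the schools, and the critics below, disagree about.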

There is also a fourth singularity school which is much more popular than the other three: It's all a load of baloney![8] This position is not popular with high-tech billionaires.[9]

This is largely dependent on your definition of "singularity".

The intelligence explosion singularity is by far the most unlikely. According to present calculations, a hypothetical future supercomputer may well not be able to replicate a human brain in real time. We presently don't even understand how intelligence works, and there is no evidence that intelligence is self-iterative in this manner; indeed, it is not unlikely that improvements on intelligence become harder the smarter you get, meaning that each successive improvement is increasingly difficult to execute. How much smarter than a human being it is even possible for something to be is an open question. Energy requirements are another issue; humans can run off of Doritos and Mountain Dew, while supercomputers require vast amounts of energy to function. Unless such an intelligence can solve problems better than groups of humans, its greater intelligence may well not matter, as it may not be as efficient as groups of humans working together to solve problems.

Another major issue arises from the nature of intellectual development; if an artificial intelligence needs to be raised and trained, it may well take twenty years or more between generations of artificial intelligences to get further improvements. More intelligent animals seem to generally require longer to mature, which may put another limitation on any such "explosion".

Accelerating change is questionable; in real life, the rate of patents per capita actually peaked in the 20th century, with a minor decline since then, despite the fact that human beings have gotten more intelligent and gotten superior tools. As noted above, Moore's Law has been in decline, and outside the realm of computers, the rate of improvement in other things has not been exponential: airplanes and cars continue to improve, but they do not improve at the ridiculous rate of computers. It is likely that once computers hit the physical limits of transistor density, their rate of improvement will fall off dramatically; already, computers which are "good enough" continue to operate for many years, something which was unheard of in the 1990s, when old computers were rapidly and obviously made obsolete by new ones.

According to this point of view, the Singularity is a past event, and we live in a post-Singularity world.

The rate of advancement has actually been in decline in recent times, as patents per capita have gone down, and the rate of increase of technology has declined rather than risen, though the basal rate is higher than it was in centuries past. According to this point of view, the intelligence explosion and increasing rate of change already happened with computers, and now that everyone has handheld computing devices, the rate of increase is going to decline as we hit natural barriers in how much additional benefit we gain from additional computing power. The densification of transistors on microchips has slowed by about a third, and the absolute limit to transistors is approaching - a true, physical barrier which cannot be bypassed or broken, and which would require an entirely different means of computing to create a still denser microchip.

From the point of view of travel, humans have gone from walking to sailing to railroads to highways to airplanes, but communication has now reached the point where a lot of travel is obsolete - the Internet is omnipresent and allows us to effectively communicate with people on any corner of the planet without travelling at all. From this point of view, there is no further point of advancement, because we're already at the point where we can be anywhere on the planet instantly for many purposes, and with improvements in automation, the amount of physical travel necessary for the average human being has declined over recent years. Instant global communication and the ability to communicate and do calculations from anywhere are a natural physical barrier, beyond which further advancement is less meaningful, as it is mostly just making things more convenient - the cost is already extremely low.

The prevalence of computers and communications devices has completely changed the world, as has the presence of cheap, high-speed transportation technology. The world of the 21st century is almost unrecognizable to people from the founding of the United States in the latter half of the 18th century, or even to people from the height of the industrial era at the turn of the 20th century.

Extraterrestrial technological singularities might become evident from acts of stellar or cosmic engineering. One such possibility, for example, would be the construction of Dyson spheres, which would alter a star's electromagnetic spectrum in a way detectable from Earth. Both SETI and Fermilab have incorporated that possibility into their searches for alien life.[10][11]
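The spectral signature involved is back-of-envelope physics. By Wien's displacement law, a body at temperature T radiates most strongly near wavelength b/T with b = 2.898e-3 m·K, so a shell that absorbs starlight and re-radiates it as waste heat glows in the infrared instead of visible light. (The 300 K shell temperature below is my assumption for illustration.)

    # Wien's displacement law: peak emission wavelength = b / T.
    # A Sun-like star peaks in visible light; a hypothetical Dyson shell
    # re-radiating its waste heat at ~300 K peaks deep in the infrared,
    # which is the anomaly such searches look for.
    WIEN_B = 2.898e-3  # metre-kelvins

    def peak_wavelength_um(temp_kelvin):
        return WIEN_B / temp_kelvin * 1e6  # micrometres

    if __name__ == "__main__":
        print(f"Sun-like star (5800 K): {peak_wavelength_um(5800):.2f} um")
        print(f"Dyson shell (300 K):   {peak_wavelength_um(300):.2f} um")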

A different view of the concept of a singularity is explored in the science-fiction book Dragon's Egg by Robert Lull Forward, in which an alien civilization on the surface of a neutron star, observed by human space explorers, goes from Stone Age to technological singularity in the space of about an hour of human time, leaving behind a large quantity of encrypted data that is expected to take humanity over a million years to even develop the technology to decrypt.

No signs of extraterrestrial civilizations have been found as of 2016.

Read the rest here:

Singularity - RationalWiki

Posted in Singularity | Comments Off on Singularity – RationalWiki
