Daily Archives: June 12, 2016

Cloning – Learn Genetics

Posted: June 12, 2016 at 12:39 am


Learn the basics about cloning and see how it's done.


Try it yourself in the mouse cloning laboratory.


Explore the history of cloning technologies.


Test your cloning savvy with this interactive quiz.


Evaluate the reasons for using cloning technologies.


Here we help you separate the facts from the fiction.

Supported by a Science Education Partnership Award (SEPA) Grant No. R25RR023288 from the National Center for Research Resources, a component of the NIH. The contents provided here are solely the responsibility of the authors and do not necessarily represent the official views of NIH.

APA format: Genetic Science Learning Center (2014, June 22). Cloning. Learn.Genetics. Retrieved June 12, 2016, from http://learn.genetics.utah.edu/content/cloning/

MLA format: Genetic Science Learning Center. "Cloning." Learn.Genetics. 12 June 2016 <http://learn.genetics.utah.edu/content/cloning/>

Chicago format: Genetic Science Learning Center, "Cloning," Learn.Genetics, 22 June 2014, <http://learn.genetics.utah.edu/content/cloning/> (12 June 2016)

Read more here:

Cloning - Learn Genetics

Posted in Cloning | Comments Off on Cloning – Learn Genetics

EvolutionM.net – Mitsubishi Lancer Evolution | Reviews, News …

Posted: at 12:39 am

A friend of mine recently picked up a 2016 GT350R, and we met at our local Cars & Coffee to weigh his car. As it turns out, we were able to find all versions of the S550 present and get weights on them, sans the V6. The GT350R is gorgeous inside and out. I took […] More

Recently I've picked up some corner scales. My intention is to start to weigh, photograph, and write mini features on local cars. I'm just getting ramped up, and it's been pretty cold locally, so there isn't much data yet. I'll be posting results in this thread: **Corner weight database, click me**. I'm also using the […] More

Today marks the closing of a book, and that makes me feel a little sad. I'm driving to Hallmark Mitsubishi in Nashville to review a Final Edition Evolution X, which will likely be the last new Evo I'll ever get to drive. I've arranged this with my friend Evan, who is the sales manager here. […] More

Last spring, we changed the front page of evolutionm.net, and had a bit of an issue with data migration. During that time, a few articles were lost. To that end, I'm rewriting some content. There have been quite a few project cars on the site over the years, and as this is my fifth Evo, […] More

Create new posts and participate in discussions. It's free!

Read the original:

EvolutionM.net - Mitsubishi Lancer Evolution | Reviews, News ...

Posted in Evolution | Comments Off on EvolutionM.net – Mitsubishi Lancer Evolution | Reviews, News …

Darwinism – Wikipedia, the free encyclopedia

Posted: at 12:39 am

Darwinism is a theory of biological evolution developed by the English naturalist Charles Darwin (1809–1882) and others, stating that all species of organisms arise and develop through the natural selection of small, inherited variations that increase the individual's ability to compete, survive, and reproduce. Also called Darwinian theory, it originally included the broad concepts of transmutation of species or of evolution which gained general scientific acceptance after Darwin published On the Origin of Species in 1859, including concepts which predated Darwin's theories, but subsequently referred to specific concepts of natural selection, of the Weismann barrier or, in genetics, of the central dogma of molecular biology.[1] Though the term usually refers strictly to biological evolution, creationists have appropriated it to refer to the origin of life, and it has even been applied to concepts of cosmic evolution, both of which have no connection to Darwin's work. It is therefore considered the belief and acceptance of Darwin's and of his predecessors' work, in place of other theories, including divine design and extraterrestrial origins.[2][3]

English biologist Thomas Henry Huxley coined the term Darwinism in April 1860.[4] It was used to describe evolutionary concepts in general, including earlier concepts published by English philosopher Herbert Spencer. Many of the proponents of Darwinism at that time, including Huxley, had reservations about the significance of natural selection, and Darwin himself gave credence to what was later called Lamarckism. The strict neo-Darwinism of German evolutionary biologist August Weismann gained few supporters in the late 19th century. During the approximate period of the 1880s to about 1920, sometimes called "the eclipse of Darwinism," scientists proposed various alternative evolutionary mechanisms which eventually proved untenable. The development of the modern evolutionary synthesis from the 1930s to the 1950s, incorporating natural selection with population genetics and Mendelian genetics, revived Darwinism in an updated form.[5]

While the term Darwinism has remained in use amongst the public when referring to modern evolutionary theory, it has increasingly been argued by science writers such as Olivia Judson and Eugenie Scott that it is an inappropriate term for modern evolutionary theory.[6][7] For example, Darwin was unfamiliar with the work of the Moravian scientist and Augustinian friar Gregor Mendel,[8] and as a result had only a vague and inaccurate understanding of heredity. He naturally had no inkling of later theoretical developments and, like Mendel himself, knew nothing of genetic drift, for example.[9][10] In the United States, creationists often use the term "Darwinism" as a pejorative term in reference to beliefs such as scientific materialism, but in the United Kingdom the term has no negative connotations, being freely used as a shorthand for the body of theory dealing with evolution, and in particular, with evolution by natural selection.[6]

While the term Darwinism had been used previously to refer to the work of Erasmus Darwin in the late 18th century, the term as understood today was introduced when Charles Darwin's 1859 book On the Origin of Species was reviewed by Thomas Henry Huxley in the April 1860 issue of the Westminster Review.[12] Having hailed the book as "a veritable Whitworth gun in the armoury of liberalism" promoting scientific naturalism over theology, and praising the usefulness of Darwin's ideas while expressing professional reservations about Darwin's gradualism and doubting if it could be proved that natural selection could form new species,[13] Huxley compared Darwin's achievement to that of Nicolaus Copernicus in explaining planetary motion:

What if the orbit of Darwinism should be a little too circular? What if species should offer residual phenomena, here and there, not explicable by natural selection? Twenty years hence naturalists may be in a position to say whether this is, or is not, the case; but in either event they will owe the author of "The Origin of Species" an immense debt of gratitude.... And viewed as a whole, we do not believe that, since the publication of Von Baer's "Researches on Development," thirty years ago, any work has appeared calculated to exert so large an influence, not only on the future of Biology, but in extending the domination of Science over regions of thought into which she has, as yet, hardly penetrated.[4]

Another important evolutionary theorist of the same period was the Russian geographer and prominent anarchist Peter Kropotkin who, in his book Mutual Aid: A Factor of Evolution (1902), advocated a conception of Darwinism counter to that of Huxley. His conception was centred around what he saw as the widespread use of co-operation as a survival mechanism in human societies and animals. He used biological and sociological arguments in an attempt to show that the main factor in facilitating evolution is cooperation between individuals in free-associated societies and groups. This was in order to counteract the conception of fierce competition as the core of evolution, which provided a rationalisation for the dominant political, economic and social theories of the time, and the prevalent interpretations of Darwinism, such as those by Huxley, whom Kropotkin regarded as an opponent. Kropotkin's conception of Darwinism could be summed up by the following quote:

In the animal world we have seen that the vast majority of species live in societies, and that they find in association the best arms for the struggle for life: understood, of course, in its wide Darwinian sense: not as a struggle for the sheer means of existence, but as a struggle against all natural conditions unfavourable to the species. The animal species, in which individual struggle has been reduced to its narrowest limits, and the practice of mutual aid has attained the greatest development, are invariably the most numerous, the most prosperous, and the most open to further progress. The mutual protection which is obtained in this case, the possibility of attaining old age and of accumulating experience, the higher intellectual development, and the further growth of sociable habits, secure the maintenance of the species, its extension, and its further progressive evolution. The unsociable species, on the contrary, are doomed to decay.[14]

Peter Kropotkin, Mutual Aid: A Factor of Evolution (1902), Conclusion

"Darwinism" soon came to stand for an entire range of evolutionary (and often revolutionary) philosophies about both biology and society. One of the more prominent approaches, summed up in the 1864 phrase "survival of the fittest" by Herbert Spencer, later became emblematic of Darwinism even though Spencer's own understanding of evolution (as expressed in 1857) was more similar to that of Jean-Baptiste Lamarck than to that of Darwin, and predated the publication of Darwin's theory in 1859. What is now called "Social Darwinism" was, in its day, synonymous with "Darwinism": the application of Darwinian principles of "struggle" to society, usually in support of an anti-philanthropic political agenda. Another interpretation, one notably favoured by Darwin's half-cousin Francis Galton, was that "Darwinism" implied that because natural selection was apparently no longer working on "civilized" people, it was possible for "inferior" strains of people (who would normally be filtered out of the gene pool) to overwhelm the "superior" strains, and voluntary corrective measures would be desirable: the foundation of eugenics.

In Darwin's day there was no rigid definition of the term "Darwinism," and it was used by opponents and proponents of Darwin's biological theory alike to mean whatever they wanted it to in a larger context. The ideas had international influence, and Ernst Haeckel developed what was known as Darwinismus in Germany, although, like Spencer's "evolution," Haeckel's "Darwinism" had only a rough resemblance to the theory of Charles Darwin, and was not centered on natural selection.[15] In 1886, Alfred Russel Wallace went on a lecture tour across the United States, starting in New York and going via Boston, Washington, Kansas, Iowa and Nebraska to California, lecturing on what he called "Darwinism" without any problems.[16]

In his book Darwinism (1889), Wallace had used the term "pure Darwinism", which proposed a "greater efficacy" for natural selection.[17][18] George Romanes dubbed this view "Wallaceism", noting that in contrast to Darwin, this position was advocating a "pure theory of natural selection to the exclusion of any supplementary theory."[19][20] Taking influence from Darwin, Romanes was a proponent of both natural selection and the inheritance of acquired characteristics. The latter was denied by Wallace, who was a strict selectionist.[21] Romanes' definition of Darwinism conformed directly with Darwin's views and was contrasted with Wallace's definition of the term.[22]

The term Darwinism is often used in the United States by promoters of creationism, notably by leading members of the intelligent design movement, as an epithet to attack evolution as though it were an ideology (an "ism") of philosophical naturalism, or atheism.[23] For example, UC Berkeley law professor and author Phillip E. Johnson makes this accusation of atheism with reference to Charles Hodge's book What Is Darwinism? (1874).[24] However, unlike Johnson, Hodge confined the term to exclude those like American botanist Asa Gray who combined Christian faith with support for Darwin's natural selection theory, before answering the question posed in the book's title by concluding: "It is Atheism."[25][26] Creationists use the term Darwinism, often pejoratively, to imply that the theory has been held as true only by Darwin and a core group of his followers, whom they cast as dogmatic and inflexible in their belief.[27] In the 2008 documentary film Expelled: No Intelligence Allowed, which promotes intelligent design (ID), American writer and actor Ben Stein refers to scientists as Darwinists. Reviewing the film for Scientific American, John Rennie says "The term is a curious throwback, because in modern biology almost no one relies solely on Darwin's original ideas... Yet the choice of terminology isn't random: Ben Stein wants you to stop thinking of evolution as an actual science supported by verifiable facts and logical arguments and to start thinking of it as a dogmatic, atheistic ideology akin to Marxism." [28]

However, Darwinism is also used neutrally within the scientific community to distinguish the modern evolutionary synthesis, sometimes called "neo-Darwinism," from the theory as first proposed by Darwin. Darwinism is also used neutrally by historians to differentiate his theory from other evolutionary theories current around the same period. For example, Darwinism may be used to refer to Darwin's proposed mechanism of natural selection, in comparison to more recent mechanisms such as genetic drift and gene flow. It may also refer specifically to the role of Charles Darwin as opposed to others in the history of evolutionary thought, particularly contrasting Darwin's results with those of earlier theories such as Lamarckism or later ones such as the modern evolutionary synthesis.

In political discussions in the United States, the term is mostly used by its enemies. "It's a rhetorical device to make evolution seem like a kind of faith, like 'Maoism,'" says Harvard University biologist E. O. Wilson. He adds, "Scientists don't call it 'Darwinism'."[29] In the United Kingdom the term often retains its positive sense as a reference to natural selection, and for example British ethologist and evolutionary biologist Richard Dawkins wrote in his collection of essays A Devil's Chaplain, published in 2003, that as a scientist he is a Darwinist.[30]

In his 1995 book Darwinian Fairytales, Australian philosopher David Stove[31] used the term "Darwinism" in a different sense than the above examples. Describing himself as non-religious and as accepting the concept of natural selection as a well-established fact, Stove nonetheless attacked what he described as flawed concepts proposed by some "Ultra-Darwinists." Stove alleged that by using weak or false ad hoc reasoning, these Ultra-Darwinists used evolutionary concepts to offer explanations that were not valid (e.g., Stove suggested that sociobiological explanation of altruism as an evolutionary feature was presented in such a way that the argument was effectively immune to any criticism). Philosopher Simon Blackburn wrote a rejoinder to Stove,[32] though a subsequent essay by Stove's protégé James Franklin[33] suggested that Blackburn's response actually "confirms Stove's central thesis that Darwinism can 'explain' anything."

Read more:

Darwinism - Wikipedia, the free encyclopedia

Posted in Darwinism | Comments Off on Darwinism – Wikipedia, the free encyclopedia

Robotics – Gizmag

Posted: at 12:39 am

With the era of autonomous cars almost upon us, engineers at Stanford University are already working on something more difficult: robots that can share the pavement with pedestrians. Jackrabbot may look like a backyard BB-8 with WALL-E's head stuck on, but its function goes beyond cuteness. It's designed to interact with pedestrians and learn from them how to get around without bumping into people or annoying them. Read More

Computer systems have helped catalogue libraries for decades, but if some reckless reader has put a book back in the wrong spot, it's a daunting task for librarians to search the entire building for it, but not for robotic librarians. Researchers at A*STAR's Institute for Infocomm Research are designing robots that can self-navigate through libraries at night, scanning spines and shelves to report back on missing or out-of-place books. Read More

Double Robotics has launched the latest iteration of its video-equipped robots. The firm now caters for 360-degree video, with a self-balancing 360 Camera Dolly and an accompanying Universal 360 Camera Mount. Read More

At Computex today, Asus revealed the Zenbo home robot. Kind of like Echo meets Keecker with a bit of Pepper sprinkled in the mix, it's been created to offer busy modern family members a helping hand with everyday tasks like keeping the kids entertained, controlling connected smart devices and providing recipe inspiration for mealtimes. The company also sees it acting as a remote guardian for the elderly. Read More

For such a reviled creature, the cockroach has some pretty impressive abilities. It can slide through incredibly narrow gaps, has great acceleration and can cling to overhanging surfaces like a gecko. But something you won't see them doing is launching more than a meter into the air, at least not in the natural world. But researchers have developed a new springing mechanism for small robots that enables them to jump many times their own height at just the right time, a technology they have demonstrated in their so-called JumpRoACH leaping milli-scale robot. Read More

It's probably not something you'd say to a person writhing in agony on the floor, but physical pain can have its benefits. It is, after all, how kids learn to be wary of hot surfaces and carpenters learn to hit nails on the head. Researchers are now adapting this exercise in self-learning to an artificial nervous system for robots, a tool they believe will better equip these machines to avoid damage and preserve their and our well-being. Read More

Adidas has announced that it is ready to begin commercial production of footwear at a robot-staffed factory in Germany. The so-called "Speedfactory" in Ansbach will apparently allow the firm to produce more shoes, with greater precision and with new designs. Read More

Visitors to a Pizza Hut in Asia will soon be able to place an order, ask about nutritional info and pay for their meal without even speaking to a member of staff, or at least a human one. A robot that can interact with customers, like a glorified self-checkout, is to be piloted at the restaurant. Read More

Developed by researchers at Harvard's Wyss Institute, a new lightweight exosuit, which features a "soft" fabric-based design, could help patients with lower limb disabilities regain mobility. The institute has partnered with ReWalk Robotics, the biggest name in powered exoskeletons, for the ambitious project. Read More

When the insect-sized RoboBee first took flight in 2012, its developers were unable to keep it aloft for more than a few seconds at a time. These days, the tiny drone is so adept at flying that researchers are actively bringing it down to rest. In the latest exhibition of their flying microbot, Harvard researchers have demonstrated the RoboBee's newfound ability to land on surfaces during flight, a neat trick that allows it to save power and remain in action for longer periods of time. Read More


The rest is here:

Robotics - Gizmag

Posted in Robotics | Comments Off on Robotics – Gizmag

Mind uploading in fiction – Wikipedia, the free encyclopedia

Posted: at 12:39 am

Main article: Mind uploading

Mind uploading, whole brain emulation, or substrate-independent minds is the use of a computer or another substrate as an emulated human brain, and the view of thoughts and memories as software information states. The term mind transfer also refers to a hypothetical transfer of a mind from one biological brain to another. Uploaded minds and societies of minds, often in simulated realities, have been recurring themes in science fiction novels and films since the 1950s.

An early story featuring something like mind uploading is the novella Izzard and the Membrane by Walter M. Miller, Jr., first published in May 1951.[1] In this story, an American cyberneticist named Scott MacDonney is captured by Russians and made to work on an advanced computer, Izzard, which they plan to use to coordinate an attack on the United States. He has conversations with Izzard as he works on it, and when he asks it if it is self-aware, it says "answer indeterminate" and then asks "can human individual's self-awareness transor be mechanically duplicated?" MacDonney is unfamiliar with the concept of a self-awareness transor (it is later revealed that this information was loaded into Izzard by a mysterious entity who may or may not be God[2]), and Izzard defines it by saying "A self-awareness transor is the mathematical function which describes the specific consciousness pattern of one human individual."[3] It is later found that this mathematical function can indeed be duplicated, although not by a detailed scan of the individual's brain as in later notions of mind uploading; instead, MacDonney just has to describe the individual verbally in sufficient detail, and Izzard uses this information to locate the transor in the appropriate "mathematical region". In Izzard's words, "to duplicate consciousness of deceased, it will be necessary for you to furnish anthropometric and psychic characteristics of the individual. These characteristics will not determine transor, but will only give its general form. Knowing its form, will enable me to sweep my circuit pattern through its mathematical region until the proper transor is reached. At that point, the consciousness will appear among the circuits."[4] Using this method, MacDonney is able to recreate the mind of his dead wife in Izzard's memory, as well as create a virtual duplicate of himself, which seems to have a shared awareness with the biological MacDonney.

In The Altered Ego by Jerry Sohl (1954), a person's mind can be "recorded" and used to create a "restoration" in the event of their death. In a restoration, the person's biological body is repaired and brought back to life, and their memories are restored to the last time that they had their minds recorded (what the story calls a 'brain record'[5]), an early example of a story in which a person can create periodic backups of their own mind. The recording process is not described in great detail, but it is mentioned that the recording is used to create a duplicate or "dupe" which is stored in the "restoration bank",[6] and at one point a lecturer says that "The experience of the years, the neurograms, simple memory circuits (neurons, if you wish) stored among these nerve cells, are transferred to the dupe, a group of more than ten billion molecules in colloidal suspension. They are charged much as you would charge the plates of a battery, the small neuroelectrical impulses emanating from your brain during the recording session being duplicated on the molecular structure in the solution."[7] During restoration, they take the dupe and "infuse it into an empty brain",[7] and the plot turns on the fact that it is possible to install one person's dupe in the body of a completely different person.[8]

An early example featuring uploaded minds in robotic bodies can be found in Frederik Pohl's story "The Tunnel Under the World" from 1955.[9] In this story, the protagonist Guy Burckhardt continually wakes up on the same date from a dream of dying in an explosion. Burckhardt is already familiar with the idea of putting human minds in robotic bodies, since this is what is done with the robot workers at the nearby Contro Chemical factory. As someone has once explained it to him, "each machine was controlled by a sort of computer which reproduced, in its electronic snarl, the actual memory and mind of a human being ... It was only a matter, he said, of transferring a man's habit patterns from brain cells to vacuum-tube cells." Later in the story, Pohl gives some additional description of the procedure: "Take a master petroleum chemist, infinitely skilled in the separation of crude oil into its fractions. Strap him down, probe into his brain with searching electronic needles. The machine scans the patterns of the mind, translates what it sees into charts and sine waves. Impress these same waves on a robot computer and you have your chemist. Or a thousand copies of your chemist, if you wish, with all of his knowledge and skill, and no human limitations at all." After some investigation, Burckhardt learns that his entire town had been killed in a chemical explosion, and the brains of the dead townspeople had been scanned and placed into miniature robotic bodies in a miniature replica of the town (as a character explains to him, 'It's as easy to transfer a pattern from a dead brain as a living one'), so that a businessman named Mr. Dorchin could charge companies to use the townspeople as test subjects for new products and advertisements.

Something close to the notion of mind uploading is very briefly mentioned in Isaac Asimov's 1956 short story The Last Question: "One by one Man fused with AC, each physical body losing its mental identity in a manner that was somehow not a loss but a gain." A more detailed exploration of the idea (and one in which individual identity is preserved, unlike in Asimov's story) can be found in Arthur C. Clarke's novel The City and the Stars, also from 1956 (this novel was a revised and expanded version of Clarke's earlier story Against the Fall of Night, but the earlier version did not contain the elements relating to mind uploading). The story is set in a city named Diaspar one billion years in the future, where the minds of inhabitants are stored as patterns of information in the city's Central Computer in between a series of 1000-year lives in cloned bodies. Various commentators identify this story as one of the first (if not the first) to deal with mind uploading, human-machine synthesis, and computerized immortality.[10][11][12][13]

Another of the "firsts" is the novel Detta är verkligheten (This Is Reality), 1968, by the renowned philosopher and logician Bertil Mårtensson, a novel in which he describes people living in an uploaded state as a means to control overpopulation. The uploaded people believe that they are "alive", but in reality they are playing elaborate and advanced fantasy games. In a twist at the end, the author changes everything into one of the best "multiverse" ideas of science fiction.

In Robert Silverberg's To Live Again (1969), an entire worldwide economy is built up around the buying and selling of "souls" (personas that have been tape-recorded at six-month intervals), allowing well-heeled consumers the opportunity to spend tens of millions of dollars on a medical treatment that uploads the most recent recordings of archived personalities into the minds of the buyers. Federal law prevents people from buying a "personality recording" unless the possessor has first died; similarly, two or more buyers are not allowed to own a "share" of the persona. In this novel, the personality recording always goes to the highest bidder. However, when one attempts to buy (and therefore possess) too many personalities, there is the risk that one of the personas will wrest control of the body from the possessor.

In the 1982 novel Software, part of the Ware Tetralogy by Rudy Rucker, one of the main characters, Cobb Anderson, has his mind downloaded and his body replaced with an extremely human-like android body. The robots who persuade Anderson into doing this sell the process to him as a way to become immortal.

In William Gibson's award-winning Neuromancer (1984), which popularized the concept of "cyberspace", a hacking tool used by the main character is an artificial infomorph of a notorious cyber-criminal, Dixie Flatline. The infomorph only assists in exchange for the promise that he be deleted after the mission is complete.

The fiction of Greg Egan has explored many of the philosophical, ethical, legal, and identity aspects of mind transfer, as well as the financial and computing aspects (i.e. hardware, software, processing power) of maintaining "copies." In Egan's Permutation City (1994), Diaspora (1997) and Zendegi (2010), "copies" are made by computer simulation of scanned brain physiology. See also Egan's "jewelhead" stories, where the mind is transferred from the organic brain to a small, immortal backup computer at the base of the skull, the organic brain then being surgically removed.

The movie The Matrix is commonly mistaken for a mind uploading movie but, aside from suggestions in later films, it is only about virtual reality and simulated reality, since the main character Neo's physical brain is still required to host his mind. The mind (the information content of the brain) is not copied into an emulated brain in a computer. Neo's physical brain is connected to the Matrix via a brain-machine interface. Only the rest of the physical body is simulated. Neo is disconnected from and reconnected to this dreamworld.

James Cameron's 2009 movie Avatar is so far the most commercially successful work of fiction that features a form of mind uploading. Throughout most of the movie, the hero's mind has not actually been uploaded and transferred to another body; he is simply controlling the body from a distance, a form of telepresence. However, at the end of the movie the hero's mind is uploaded into Eywa, the mind of the planet, and then back into his Avatar body.

Mind transfer is a theme in many other works of science fiction in a wide range of media. Specific examples include the following:

Being a good soldier comes down to one thing. To one single question: What are you prepared to sacrifice? When they came to me with the nanosuit, I sacrificed Laurence Barnes, the man I was, to become Prophet. When my own flesh and blood held me back, I sacrificed that too. Replaced it, like a spare part. Victory costs. Every time, you pay a little more. I saw a glimpse of what's coming and there was nothing left of me to stop it. When the greatest combat machine fails... what do we do then? What do I do?!

See original here:

Mind uploading in fiction - Wikipedia, the free encyclopedia

Posted in Mind Uploading | Comments Off on Mind uploading in fiction – Wikipedia, the free encyclopedia

Mind uploading won't lead to immortality – Life 2.0 …

Posted: at 12:39 am

By Maciamo Hay, on 19 April 2014 (updated on 25 April 2014)

Uploading the content of one's mind, including one's personality, memories and emotions, into a computer may one day be possible, but it won't transfer our biological consciousness and won't make us immortal.

Uploading one's mind into a computer, a concept popularized by the 2014 movie Transcendence starring Johnny Depp, is likely to become at least partially possible, but won't lead to immortality. Major objections have been raised regarding the feasibility of mind uploading. Even if we could surpass every technical obstacle and successfully copy the totality of one's mind, emotions, memories, personality and intellect into a machine, that would be just that: a copy, which itself can be copied again and again on various computers.

Neuroscientists have not yet been able to explain what consciousness is, or how it works at a neurological level. Once they do, it might be possible to reproduce consciousness in artificial intelligence. If that proves feasible, then it should in theory be possible to replicate our consciousness on computers too. Or is that jumping to conclusions?

Once all the connections in the brain are mapped and we are able to reproduce all neural connections electronically, we will also be able to run a faithful simulation of our brain on a computer. However, even if that simulation happens to have a consciousness of its own, it will never be quite like our own biological consciousness. For example, without hormones we couldn't feel emotions like love, jealousy or attachment (see Could a machine or an AI ever feel human-like emotions?).

Some people think that mind uploading necessarily requires leaving one's biological body, but there is no consensus about that. Uploading means copying. When a file is uploaded to the Internet, it doesn't get deleted at the source; it's just a copy.

The best analogy to understand that is cloning. Identical twins are an example of human clones that already live among us. Identical twins share the same DNA, yet nobody would argue that they also share a single consciousness.

It will be easy to test that hypothesis once the technology becomes available. Unlike Johnny Depp's character in Transcendence, we wouldn't have to die to upload our minds to one or several computers. Doing so wouldn't deprive us of our biological consciousness; it would just be like having a mental clone of ourselves. We would never feel as though we were inside the computer, and who we are would remain unaffected.

If the conscious self doesn't leave the biological body (i.e. "die") when mind and consciousness are transferred, it would basically mean that the individual would feel present in two places at the same time: in the biological body and in the computer. That is problematic. It's hard to conceive how that could be possible, since the very essence of consciousness is a feeling of indivisible unity.

If we want to avoid this problem of dividing the sense of self, we must indeed find a way to transfer consciousness from the body to the computer. But this assumes that consciousness is merely data that can be transferred. We don't know that yet. It could be tied to our neurons, or to very specific atoms in some neurons. If that were the case, destroying the neurons would destroy the consciousness.

Even assuming that we found a way to transfer consciousness from the brain to a computer, how could we prevent that consciousness from being copied to other computers, recreating the philosophical problem of splitting the self? That would actually be much worse, since a computerized consciousness could be copied endlessly. How would you then feel a sense of unified consciousness?

Since mind uploading won't preserve our self-awareness, the feeling that we are ourselves and not someone else, it won't lead to immortality. We'll still be bound to our bodies, though life expectancy for transhumanists and cybernetic humans will be considerably extended.

Immortality is a confusing term, since it implies living forever, which is impossible: nothing is eternal in our universe, not even atoms or quarks. Living for billions of years, however improbable in itself, wouldn't even come close to immortality. It may seem like a very long time compared to our short existence, but compared to eternity (infinite time), it isn't much longer than 100 years.

Even machines aren't much longer-lived than we are. Modern computers actually tend to have much shorter life spans than humans: a 10-year-old computer is very old indeed, as well as slower and more prone to technical problems than a new one. So why would we think that transferring our mind to a computer would grant us greatly extended longevity?

Even if we could transfer all our mind's data and consciousness an unlimited number of times onto new machines, that won't prevent the machine currently hosting us from being destroyed by viruses, bugs, mechanical failures or outright physical destruction of the whole hardware, intentionally, accidentally or due to natural catastrophes.

In the meantime, science will slow down, stop and even reverse the aging process, enabling us to live healthily for a very long time by today's standards. This is known as negligible senescence. Nevertheless, cybernetic humans with robotic limbs and respirocytes will still die in accidents or wars. At best we could hope to live for several hundred or thousand years, assuming that nothing kills us first.

As a result, there won't be much difference between living inside a biological body and inside a machine. The risks will be comparable. Human longevity will in all likelihood increase dramatically, but there simply is no such thing as immortality.

Artificial intelligence could easily replicate most of the processes, thoughts, emotions, sensations and memories of the human brain, with some reservations about feelings and emotions that reside outside the brain, in the biological body. An AI might also have a consciousness of its own. Backing up the contents of one's mind will most probably be possible one day. However, there is no evidence that consciousness or self-awareness is merely information that can be transferred, since consciousness cannot be divided into two or many parts.

Consciousness is most likely tied to neurons in a certain part of the brain (which may well include the thalamus). These neurons are maintained throughout life, from birth to death, without being regenerated like other cells in the body, which explains the experienced feeling of continuity.

There is not the slightest scientific evidence of a duality between body and consciousness, or in other words that consciousness could be equated with an immaterial soul. In the absence of such duality, a person's original consciousness would cease to exist with the destruction of the neurons in his/her brain responsible for consciousness. Unless one believes in an immaterial, immortal soul, the death of one's brain automatically results in the extinction of consciousness. While a new consciousness could be imitated to perfection inside a machine, it would merely be a clone of the person's consciousness, not an actual transfer, meaning that that feeling of self would not be preserved.

Originally posted here:

Mind uploading won't lead to immortality - Life 2.0 ...

Posted in Mind Uploading | Comments Off on Mind uploading won't lead to immortality – Life 2.0 …

Artificial Intelligence | Neuro AI

Posted: at 12:38 am

The phrase "artificial intelligence" was first coined by John McCarthy four decades ago. One representative definition pivots on comparing intelligent machines with human beings; another is concerned with the performance of machines on tasks historically judged to lie within the domain of intelligence.

Yet none of these definitions has been universally accepted, probably because they refer to the word intelligence, which is an immeasurable quantity. A better, and probably more accurate, definition of artificial intelligence would be: an artificial system capable of planning and executing the right task at the right time rationally. Or, far more simply: a machine that can act rationally.

With all this, a common question arises:

Does rational thinking and acting include all characteristics of an intelligent system?

If so, how does it represent behavioral intelligence such as learning, perception and planning?

If we think about it a little, a system capable of reasoning would be a successful planner. Moreover, a system can act rationally only after acquiring knowledge from the real world, so the property of perception is a prerequisite for building up knowledge from the real world.

From this we may conclude that a machine that lacks perception cannot learn, and therefore cannot acquire knowledge.

To understand the practical meaning of artificial intelligence, we must look at some common problems. All problems dealt with by artificial intelligence solutions share the common notion of a state.

A state represents the status of a solution at a given step during the problem solving procedure. The solution of a problem is a collection of states. The problem solving procedure or algorithm applies an operator to a state to get the next state. Then, it applies another operator to the resulting state to derive a new state.

The process of applying operators to each state is continued until a desired goal is achieved.

Example : Consider a 4-puzzle problem, where in a 4-cell board there are 3 cells filled with digits and 1 blank cell. The initial state of the game represents a particular orientation of the digits in the cells and the final state to be achieved is another orientation supplied to the game player. The problem of the game is to reach from the given initial state to the goal (final) state, if possible, with a minimum of moves. Let the initial and the final state be as shown in figures 1(a) and (b) respectively.

We now define two operations, blank-up (BU) / blank-down (BD) and blank-left (BL) / blank-right (BR), and the state-space (tree) for the problem is presented below using these operators. The algorithm for the above kind of problems is straightforward. It consists of three steps, described by steps 1, 2(a) and 2(b) below.

Algorithm for solving state-space problems (the step bodies did not survive in this copy; the listing below is reconstructed from the surrounding description):

Begin
1. Place the starting state in the set of current states;
2. Repeat until the goal is reached:
(a) apply an applicable operator to the current state to generate the next state;
(b) if the next state is the goal, stop; otherwise treat it as the current state and continue;
End.

It is clear that the main trick in solving problems by the state-space approach is to determine the set of operators and to use it at appropriate states of the problem.
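The operator-and-state loop described above can be made concrete. The sketch below is not from the source: it assumes a tuple encoding of the 2x2 board (0 standing for the blank) and picks breadth-first generation as one possible control strategy, which also guarantees a minimum-move solution for the 4-puzzle.

```python
from collections import deque

# Hypothetical encoding: the 2x2 board is a tuple read row by row, 0 is the blank.
# The four operators move the blank: blank-up, blank-down, blank-left, blank-right.
MOVES = {"BU": -2, "BD": +2, "BL": -1, "BR": +1}

def successors(state):
    b = state.index(0)                   # position of the blank
    row, col = divmod(b, 2)
    for op, delta in MOVES.items():
        # reject operators that would move the blank off the 2x2 board
        if op == "BU" and row == 0: continue
        if op == "BD" and row == 1: continue
        if op == "BL" and col == 0: continue
        if op == "BR" and col == 1: continue
        t = b + delta
        nxt = list(state)
        nxt[b], nxt[t] = nxt[t], nxt[b]  # slide the neighbouring tile into the blank
        yield op, tuple(nxt)

def solve(start, goal):
    """Breadth-first state-space search: returns a shortest operator sequence."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for op, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [op]))
    return None  # goal not reachable from this starting state

print(solve((1, 2, 3, 0), (1, 2, 0, 3)))  # → ['BL']
```

Breadth-first generation tests every generated state against the goal, exactly as the generate-and-test strategy below describes; the `seen` set merely avoids revisiting states.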

Researchers in artificial intelligence have segregated AI problems from non-AI problems. Generally, problems for which straightforward mathematical/logical algorithms are not readily available, and which can be solved only by an intuitive approach, are called AI problems.

The 4-puzzle problem, for instance, is an ideal AI problem. There is no formal algorithm for its realization, i.e., given a starting and a goal state, one cannot say prior to execution which sequence of steps will lead from the starting state to the goal. Such problems are called ideal AI problems.

The well-known water-jug problem, the Traveling Salesperson Problem (TSP) and the n-Queens problem are typical examples of classical AI problems.

Among the non-classical AI problems, the diagnosis problems and the pattern classification problem need special mention. For solving an AI problem, one may employ both artificial intelligence and non-AI algorithms. An obvious question is: what is an AI algorithm?

Formally speaking, an artificial intelligence algorithm generally means a non-conventional intuitive approach for problem solving. The key to artificial intelligence approach is intelligent search and matching. In an intelligent search problem / sub-problem, given a goal (or starting) state, one has to reach that state from one or more known starting (or goal) states.

For example, consider the 4-puzzle problem, where the goal state is known and one has to identify the moves for reaching the goal from a pre-defined starting state. The fewer states one generates to reach the goal, the better the AI algorithm.

The question that then naturally arises is: how to control the generation of states?

This can be achieved by suitably designing control strategies, which would filter a few states only from a large number of legal states that could be generated from a given starting / intermediate state.

As an example, consider the problem of proving a trigonometric identity, something children are used to doing in their schooldays. What would they do at the beginning? They would start with one side of the identity and attempt to apply a number of formulas to find the possible resulting derivations.

But they won't really apply all the formulas. Rather, they identify the right candidate formula, the one that seems to bring the expression closer, in some sense, to the other side of the identity. Once the decision regarding the selection of the formula is made, they apply it to one side (say the L.H.S.) of the identity and derive a new state.

They then continue the process, generating new intermediate states until the R.H.S. (the goal) is reached. But do they always select the right candidate formula at a given state? From experience, we know the answer is: not always. What would we do if we found that, after generating a few states, the resulting expression seemed far away from the R.H.S. of the identity?

Perhaps we would prefer to move back to some old state that is more promising, i.e., closer to the R.H.S. of the identity. This line of thinking has been realized in many intelligent search problems in AI.

Some of these well-known search algorithms are:

(a) Generate and Test Approach: This approach concerns the generation of the state-space from a known starting state (root) of the problem and continues expanding the reasoning space until the goal node or terminal state is reached.

In fact, after each state is generated, the node is compared with the known goal state. When the goal is found, the algorithm terminates. In case multiple paths lead to the goal, the path with the smallest distance from the root is preferred. The basic strategy used in this search is only the generation of states and their testing against the goal; it does not allow filtering of states.

(b) Hill Climbing Approach: Under this approach, one first generates a starting state and measures the total cost for reaching the goal from it. Let this cost be f. While f stays within a predefined utility value and the goal is not reached, new nodes are generated as children of the current node. However, in case all neighborhood nodes (states) yield an identical value of f and the goal is not among them, the search algorithm is trapped at a hillock or local extremum.

One way to overcome this problem is to select a new starting state at random and then continue the above search process. While proving trigonometric identities, we often use hill climbing, perhaps unknowingly.
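The random-restart remedy can be sketched in a few lines. The objective function, the neighbourhood, and all numbers below are invented for illustration; the point is only the structure: climb until no neighbour improves, restart at random when trapped on a hillock, keep the best state found.

```python
import random

def hill_climb(f, neighbours, start, max_restarts=20):
    """Random-restart hill climbing: climb while some neighbour improves f;
    when trapped at a local maximum (a "hillock"), restart from a random
    state and keep the best state found so far."""
    rng = random.Random(0)                 # fixed seed so runs are repeatable
    best = current = start
    for _ in range(max_restarts):
        while True:
            nxt = max(neighbours(current), key=f)
            if f(nxt) <= f(current):       # no neighbour improves: local maximum
                break
            current = nxt
        if f(current) > f(best):
            best = current
        current = rng.randint(-100, 100)   # random restart
    return best

# Toy objective: local maximum at x = -50 (value 0), global maximum at x = 60 (value 400).
f = lambda x: -(x + 50) ** 2 if x < 0 else 400 - (x - 60) ** 2
print(hill_climb(f, lambda x: [x - 1, x + 1], start=-80))
```

A single climb from -80 would stop on the hillock at -50; the restarts give the search repeated chances to land in the basin of the global maximum at 60.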

(c) Heuristic Search: Classically, heuristics means "rule of thumb." In heuristic search, we generally use one or more heuristic functions to determine the better candidate states among a set of legal states that could be generated from a known state.

The heuristic function, in other words, measures the fitness of the candidate states. The better the selection of the states, the fewer will be the number of intermediate states for reaching the goal.

However, the most difficult task in heuristic search problems is the selection of the heuristic functions. One has to select them intuitively, so that in most cases they will hopefully prune the search space correctly.
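Greedy best-first search with a heuristic can be sketched on a toy problem: reach a goal number from a start number using two operators, always expanding the candidate state whose heuristic value looks closest to the goal. Everything here (the operators, the bound on the space, the distance-to-goal heuristic) is an illustrative assumption, not from the source.

```python
import heapq

def best_first(start, goal, operators, h):
    """Greedy best-first search: repeatedly expand the generated state
    whose heuristic value h(state) is smallest (most promising)."""
    frontier = [(h(start), start, [])]
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for name, op in operators.items():
            nxt = op(state)
            if nxt not in seen and nxt <= 10 * goal:   # crude bound keeps the space finite
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [name]))
    return None

ops = {"+1": lambda n: n + 1, "*2": lambda n: n * 2}
h = lambda n, goal=21: abs(goal - n)   # heuristic: numeric distance to the goal
print(best_first(2, 21, ops, h))       # → ['*2', '*2', '*2', '+1', '+1', '+1', '+1', '+1']
```

The heuristic prunes nothing by itself; it only orders expansion, so a well-chosen h visits far fewer intermediate states than blind generation would.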

(d) Means and Ends Analysis: This method of search attempts to reduce the gap between the current state and the goal state. One simple way to realize this method is to measure the distance between the current state and the goal, and then apply an operator to the current state so that the distance between the resulting state and the goal is reduced. In many mathematical theorem-proving processes, we use Means and Ends Analysis.

The subject of artificial intelligence spans a wide horizon. It deals with various kinds of knowledge representation schemes, different techniques of intelligent search, various methods for resolving uncertainty of data and knowledge, different schemes for automated machine learning and many others.

Among the application areas of AI we have expert systems, game playing, theorem proving, natural language processing, image recognition, robotics and many others. The subject of artificial intelligence has been enriched with knowledge from a wide range of disciplines: philosophy, psychology, cognitive science, computer science, mathematics and engineering. As the figure shows, these have been referred to as the parent disciplines of AI. An at-a-glance look at the figure also reveals the subject areas of AI and its application areas. [Fig.: AI, its parent disciplines and application areas.]

The subject of artificial intelligence originated with game-playing and theorem-proving programs and was gradually enriched with theories from a number of parent disciplines. As a young discipline of science, the significance of the topics covered under the subject changes considerably with time. At present, the topics we find significant and worthwhile for understanding the subject are outlined below. [Fig. A: Pronunciation learning of a child from his mother.]

Learning Systems: Among the subject areas covered under artificial intelligence, learning systems need special mention. The concept of learning is illustrated here with reference to the natural problem of a child learning pronunciation from his mother (vide Fig. A). The hearing system of the child receives the pronunciation of the character A, and his voice system attempts to imitate it. The difference between the mother's and the child's pronunciation, hereafter called the error signal, is received by the child's learning system through an auditory nerve, and an actuation signal is generated by the learning system through a motor nerve to adjust the child's pronunciation. The adaptation of the child's voice system continues until the amplitude of the error signal is insignificantly low. Each time the voice system passes through an adaptation cycle, the resulting tongue position of the child for speaking A is saved by the learning process.

The learning problem discussed above is an example of the well-known parametric learning, where the adaptive learning process adjusts the parameters of the child's voice system autonomously to keep its response close enough to the sample training pattern. Artificial neural networks, which represent the electrical analogue of biological nervous systems, are gaining importance for their increasing applications in supervised (parametric) learning problems.

Besides this type, the other common learning methods, which we use unknowingly, are inductive and analogy-based learning. In inductive learning, the learner makes generalizations from examples. For instance, noting that the cuckoo flies, the parrot flies and the sparrow flies, the learner generalizes that birds fly. In analogy-based learning, on the other hand, the learner, for example, learns the motion of electrons in an atom by analogy with his knowledge of planetary motion in the solar system.
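The error-driven adaptation cycle described for the child can be sketched as a one-parameter loop. The numeric scale, the step size and the identity "voice system" are illustrative assumptions; the structure (compute error signal, actuate, stop when the error is insignificantly low) follows the text.

```python
# A minimal sketch of parametric learning: a single parameter is nudged
# until the system's response matches a training sample, mirroring the
# child adjusting tongue position until the error signal is small.
target = 0.75      # the mother's pronunciation (training pattern), hypothetical scale
parameter = 0.0    # the child's initial tongue position
rate = 0.4         # adaptation step size

for cycle in range(50):
    response = parameter          # identity "voice system" for simplicity
    error = target - response     # the error signal
    if abs(error) < 1e-3:         # error insignificantly low: stop adapting
        break
    parameter += rate * error     # actuation: adjust toward the sample

print(round(parameter, 3))        # → 0.749
```

Each cycle shrinks the error by a constant factor, so the loop converges geometrically, which is the simplest analogue of the repeated adaptation cycles in the figure.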

Knowledge Representation and Reasoning: In a reasoning problem, one has to reach a pre-defined goal state from one or more given initial states. The smaller the number of transitions needed to reach the goal state, the higher the efficiency of the reasoning system. Increasing the efficiency of a reasoning system thus requires minimizing intermediate states, which in turn calls for an organized and complete knowledge base. A complete and organized storehouse of knowledge needs minimal search to identify the appropriate knowledge at a given problem state and thus yields the right next state in the problem-solving process. Organization of knowledge, therefore, is of paramount importance in knowledge engineering. A variety of knowledge representation techniques are in use in artificial intelligence: production rules, semantic nets, frames, fillers and slots, and predicate logic, to mention only a few. The selection of a particular representational scheme depends both on the nature of the application and the choice of the user.

Planning: Another significant area of artificial intelligence is planning. The problems of reasoning and planning share many common issues, but have a basic difference that originates from their definitions. The reasoning problem is mainly concerned with the testing of the satisfiability of a goal from a given set of data and knowledge. The planning problem, on the other hand, deals with the determination of the methodology by which a successful goal can be achieved from the known initial states. Automated planning finds extensive applications in robotics and navigational problems, some of which will be discussed shortly.

Knowledge Acquisition: Acquisition (elicitation) of knowledge is as hard for machines as it is for human beings. It includes generation of new pieces of knowledge from a given knowledge base, setting up dynamic data structures for existing knowledge, learning knowledge from the environment, and refinement of knowledge. Automated acquisition of knowledge by the machine learning approach is an active area of current research in artificial intelligence.

Intelligent Search: Search problems generally encountered in computer science are of a deterministic nature, i.e., the order of visiting the elements of the search space is known. For example, in depth-first and breadth-first search algorithms, one knows the sequence of visiting the nodes in a tree. However, the search problems we come across in AI are non-deterministic, and the order of visiting the elements in the search space is completely dependent on the data sets. The diversity of the intelligent search algorithms will be discussed in detail later.

Logic Programming: For more than a century, mathematicians and logicians have designed various tools to represent logical statements by symbolic operators. One outgrowth of such attempts is propositional logic, which deals with a set of binary statements (propositions) connected by Boolean operators. The logic of propositions, gradually enriched to handle more complex situations of the real world, is called predicate logic. One classical variety of predicate logic-based programs is the logic program. PROLOG, an abbreviation for PROgramming in LOGic, is a typical language that supports logic programs. Logic programming has recently been identified as one of the prime areas of research in AI. The ultimate aim of this research is to extend the PROLOG compiler to handle spatio-temporal models and support a parallel programming environment. Building architectures for PROLOG machines was a hot topic of the last decade.

Soft Computing: Soft computing, according to Prof. Zadeh, is "an emerging approach to computing, which parallels the remarkable ability of the human mind to reason and learn in an environment of uncertainty and imprecision." It is, in general, a collection of computing tools and techniques shared by closely related disciplines, including fuzzy logic, artificial neural nets, genetic algorithms, belief calculus, and some aspects of machine learning such as inductive logic programming. These tools are used independently as well as jointly, depending on the domain of application.

Management of Imprecision and Uncertainty: Data and knowledge bases in many typical AI problems, such as reasoning and planning, are often contaminated with various forms of incompleteness. The incompleteness of data, hereafter called imprecision, generally appears in the database from i) lack of appropriate data and ii) the poor authenticity level of the sources. The incompleteness of knowledge, often referred to as uncertainty, originates in the knowledge base from a lack of certainty in the pieces of knowledge. Reasoning in the presence of imprecision of data and uncertainty of knowledge is a complex problem.

Various tools and techniques have been devised for reasoning under incomplete data and knowledge. Some of these techniques employ i) stochastic, ii) fuzzy and iii) belief network models. In a stochastic reasoning model, the system can make a transition from one given state to a number of states, such that the sum of the probabilities of transition from the given state to the next states is strictly unity. In a fuzzy reasoning system, on the other hand, the sum of the membership values of transition from the given state to the next states may be greater than or equal to one. The belief network model updates the stochastic/fuzzy beliefs assigned to the facts embedded in the network until a condition of equilibrium is reached, following which there is no further change in beliefs. Recently, fuzzy tools and techniques have been applied in a specialized belief network, called a fuzzy Petri net, for handling both imprecision of data and uncertainty of knowledge by a unified approach.
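The distinction between the stochastic and fuzzy transition models can be shown with two tiny, made-up transition tables: in the stochastic model the outgoing probabilities from a state sum to exactly one, while in the fuzzy model the outgoing membership values may exceed one.

```python
# Hypothetical transition models for a single state S0 with successors S1, S2.
stochastic = {"S0": {"S1": 0.7, "S2": 0.3}}   # probabilities: must sum to 1
fuzzy      = {"S0": {"S1": 0.8, "S2": 0.6}}   # memberships: may sum to >= 1

def outgoing_sum(model, state):
    """Sum of the transition weights leaving the given state."""
    return sum(model[state].values())

print(f"{outgoing_sum(stochastic, 'S0'):.1f}")  # → 1.0
print(f"{outgoing_sum(fuzzy, 'S0'):.1f}")       # → 1.4
```

This is the whole formal difference the text draws: the fuzzy model drops the normalization constraint, allowing a state to belong partially to several successors at once.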

Almost every branch of science and engineering currently shares the tools and techniques available in the domain of artificial intelligence. However, for the convenience of the readers, we mention here a few typical applications where AI plays a significant and decisive role in engineering automation.

Expert Systems: In this example, we illustrate the reasoning process involved in an expert system for a weather forecasting problem, with special emphasis on its architecture. An expert system consists of a knowledge base, a database and an inference engine for interpreting the database using the knowledge supplied in the knowledge base. The reasoning process of a typical illustrative expert system is described in the figure, where PR i represents the i-th production rule. The inference engine attempts to match the antecedent clauses (IF parts) of the rules with the data stored in the database. When all the antecedent clauses of a rule are available in the database, the rule is fired, resulting in new inferences. The resulting inferences are added to the database, activating subsequent firing of other rules. In order to keep limited data in the database, the knowledge base includes a few rules containing an explicit consequent (THEN) clause to delete specific data from the database; on firing of such rules, the unwanted data clauses suggested by the rule are deleted from the database. Here PR1 fires because both of its antecedent clauses are present in the database. On firing of PR1, the consequent clause it-will-rain is added to the database for subsequent firing of PR2. [Fig.: Illustrative architecture of an expert system.]
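The match-fire cycle of the inference engine can be sketched as follows. The rule contents here are hypothetical: the text specifies only that firing PR1 adds it-will-rain to the database, which then allows PR2 to fire.

```python
# Hedged sketch of a forward-chaining inference engine for the weather example.
rules = [
    ("PR1", {"the-sky-is-cloudy", "the-air-is-humid"}, "it-will-rain"),
    ("PR2", {"it-will-rain"}, "carry-an-umbrella"),
]
database = {"the-sky-is-cloudy", "the-air-is-humid"}  # initial facts (assumed)

fired = []
changed = True
while changed:                        # keep matching until no rule can fire
    changed = False
    for name, antecedents, consequent in rules:
        # a rule fires when all its IF clauses are present in the database
        if name not in fired and antecedents <= database:
            database.add(consequent)  # the new inference joins the database
            fired.append(name)
            changed = True

print(fired)              # → ['PR1', 'PR2']
print(sorted(database))
```

PR2 cannot fire on the first pass; it becomes firable only after PR1's consequent is added to the database, which is exactly the chaining behavior the paragraph describes.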

Image Understanding and Computer Vision: A digital image can be regarded as a two-dimensional array of pixels containing gray levels corresponding to the intensity of the reflected illumination received by a video camera. To interpret a scene, its image must be passed through three basic processes: low-, medium- and high-level vision. [Fig.: Basic steps in scene interpretation.]

The purpose of low-level vision is to pre-process the image by filtering out noise. The medium-level vision system deals with enhancement of details and segmentation (i.e., partitioning the image into objects of interest). The high-level vision system includes three steps: recognition of the objects from the segmented image, labeling of the image, and interpretation of the scene. Most AI tools and techniques are required in high-level vision systems. Recognition of objects from their images can be carried out through a process of pattern classification, which at present is realized by supervised learning algorithms. The interpretation process, on the other hand, requires knowledge-based computation.

Speech and Natural Language Understanding: Understanding speech and understanding natural languages are basically two classical problems. In speech analysis, the main problem is to separate the syllables of a spoken word and determine features like amplitude and the fundamental and harmonic frequencies of each syllable. The words can then be identified from the extracted features by pattern classification techniques. Recently, artificial neural networks have been employed to classify words from their features.

The problem of understanding natural languages like English, on the other hand, includes syntactic and semantic interpretation of the words in a sentence and of the sentences in a paragraph. The syntactic steps are required to analyze sentences by their grammar and are similar to the steps of compilation. The semantic analysis, performed after the syntactic analysis, determines the meaning of a sentence from the association of its words, and that of a paragraph from the closeness of its sentences. A robot capable of understanding speech in a natural language would be of immense importance, for it could execute any task verbally communicated to it. The phonetic typewriter, which prints the words pronounced by a person, is another recent invention in which speech understanding is employed in a commercial application.

Scheduling: In a scheduling problem, one has to plan the time schedule of a set of events to improve the time efficiency of the solution. For instance, in a class-routine scheduling problem, teachers are allocated to different classrooms at different time slots, and we want most classrooms to be occupied most of the time. In a flowshop scheduling problem, a set of jobs, say J1 and J2, is to be allocated to a set of machines, say M1, M2 and M3. We assume that each job requires some operation to be done on all these machines in a fixed order, say M1, M2, M3. Now, what should the schedule of the jobs be, (J1, J2) or (J2, J1), so that the completion time of both jobs, called the make-span, is minimized? Let the processing times of jobs J1 and J2 on machines M1, M2 and M3 be (5, 8, 7) and (8, 2, 3) respectively. The Gantt charts in figures (a) and (b) describe the make-spans for the schedules J1-J2 and J2-J1 respectively. It is clear from these figures that the J1-J2 schedule requires a smaller make-span and is thus preferred. [Fig.: The Gantt charts for the flowshop scheduling problem with 2 jobs and 3 machines.]

The flowshop scheduling problem is NP-complete, and determining an optimal schedule (minimizing the make-span) thus requires time exponential in both the number of machines and the number of jobs. Finding a sub-optimal solution is therefore preferred for such scheduling problems. Recently, artificial neural nets and genetic algorithms have been employed to solve this problem. Heuristic search has also been used to handle it.
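The make-spans behind the Gantt charts can be checked directly. This sketch assumes the standard permutation-flowshop recurrence (a job starts on a machine only when both that machine is free and the job has finished on the previous machine); the processing times are the ones given in the text.

```python
def make_span(order, proc):
    """Completion time of the last job on the last machine for a given
    job order in a permutation flowshop."""
    machines = len(next(iter(proc.values())))
    finish = [0] * machines            # running completion time of each machine
    for job in order:
        prev = 0                       # completion of this job on the previous machine
        for m in range(machines):
            # start when both the machine is free and the previous operation is done
            prev = max(prev, finish[m]) + proc[job][m]
            finish[m] = prev
    return finish[-1]

# Processing times of J1 and J2 on machines M1, M2, M3 (from the text).
proc = {"J1": (5, 8, 7), "J2": (8, 2, 3)}
print(make_span(["J1", "J2"], proc))   # → 23
print(make_span(["J2", "J1"], proc))   # → 28
```

Running both orders confirms the conclusion drawn from the Gantt charts: schedule J1-J2 finishes at time 23 versus 28 for J2-J1, so J1-J2 is preferred.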

Intelligent Control: In process control, the controller is designed from the known models of the process and the required control objective. When the dynamics of the plant are not completely known, the existing techniques for controller design no longer remain valid, and rule-based control is appropriate in such situations. In a rule-based control system, the controller is realized by a set of production rules set intuitively by an expert control engineer. The antecedent (premise) part of the rules is matched against the dynamic response of the plant parameters; the rule whose antecedent part matches the plant response is selected and fired. When more than one rule is firable, the controller resolves the conflict by a set of strategies. On the other hand, there exist situations in which the antecedent part of no rule exactly matches the plant responses. Such situations are handled with fuzzy logic, which is capable of matching the antecedent parts of rules partially or approximately with the dynamic plant responses. Fuzzy control has been used successfully in many industrial plants; one typical application is power control in a nuclear reactor.

Besides design of the controller, the other issue in process control is to design a plant (process) estimator, which attempts to follow the response of the actual plant when both the plant and the estimator are jointly excited by a common input signal. Fuzzy and artificial neural network-based learning techniques have recently been identified as new tools for plant estimation.

See original here:

Artificial Intelligence | Neuro AI

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence | Neuro AI

What is Artificial Intelligence (AI)? – Definition from …

Posted: at 12:38 am

Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry.

Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits such as:

Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information about the world. To implement knowledge engineering, artificial intelligence must have access to objects, categories, properties and the relations between all of them. Instilling common sense, reasoning and problem-solving power in machines is a difficult and tedious task.
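To make the idea of giving a machine objects, categories, properties and relations concrete, here is a toy knowledge-base sketch; the facts and the query function are hypothetical, chosen only to show inheritance along category links:

```python
# Toy knowledge base: (object, attribute) -> value triples. The facts
# and the lookup strategy are invented for illustration only.

facts = {
    ("canary", "is_a"):  "bird",     # category membership
    ("bird",   "is_a"):  "animal",
    ("bird",   "can"):   "fly",      # property of the category
    ("canary", "color"): "yellow",   # property of the object itself
}

def lookup(obj, attribute):
    """Answer a property query, inheriting along is_a relations."""
    while obj is not None:
        if (obj, attribute) in facts:
            return facts[(obj, attribute)]
        obj = facts.get((obj, "is_a"))  # climb the category hierarchy
    return None

print(lookup("canary", "can"))  # inherited from the "bird" category
```

Even this tiny example hints at why knowledge engineering is laborious: every fact and relation a query might need has to be entered explicitly.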

Machine learning is another core part of AI. Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regression. Classification determines the category an object belongs to, while regression starts from a set of numerical input-output examples and discovers functions that generate suitable outputs from the corresponding inputs. The mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory.
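The contrast between the two supervised settings can be sketched directly; the toy data and function names below are made up for illustration:

```python
# Classification vs regression on toy data (all values invented).

def classify(x, examples):
    """1-nearest-neighbour classification: category of the closest example."""
    return min(examples, key=lambda e: abs(e[0] - x))[1]

def fit_line(xs, ys):
    """Least-squares fit y = a*x + b: regression discovers a function
    mapping numerical inputs to numerical outputs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Classification: which category does an object belong to?
labelled = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]
print(classify(7.0, labelled))       # nearest labelled example is (8.0, "large")

# Regression: recover y = 2x + 1 from input/output examples.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)
```

The classifier returns a discrete category; the regression returns continuous coefficients, so it can produce outputs for inputs it never saw.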

Machine perception deals with the capability to use sensory inputs to deduce different aspects of the world, with sub-problems such as facial, object and speech recognition; computer vision is the more specific ability to analyze visual inputs.

Robotics is also a major field related to AI. Robots require intelligence to handle tasks such as object manipulation and navigation, along with sub-problems of localization, motion planning and mapping.
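As a sketch of the motion-planning sub-problem mentioned above, here is breadth-first search over a small occupancy grid; the map, the API and the cost model (unit steps on four-connected cells) are assumptions for illustration:

```python
# Grid path planning by breadth-first search (invented map and API).

from collections import deque

def plan(grid, start, goal):
    """Return a shortest path of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # reconstruct by walking parents back
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and step not in came_from:
                came_from[step] = cell
                frontier.append(step)
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],   # 1 = obstacle
        [0, 0, 0]]
print(plan(grid, (0, 0), (2, 0)))  # route around the obstacle wall
```

Real robots layer localization and mapping underneath this step: the grid itself must first be estimated from sensor data before any path can be planned over it.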

Follow this link:

What is Artificial Intelligence (AI)? - Definition from ...

Posted in Artificial Intelligence | Comments Off on What is Artificial Intelligence (AI)? – Definition from …

Meme Central – Memes, Memetics, and Mind Virus Resource

Posted: at 12:38 am


Meme Central

Welcome to Meme Central, the center of the world of memetics. Memes are contagious ideas, all competing for a share of our mind in a kind of Darwinian selection. As memes evolve, they become better and better at distracting and diverting us from whatever we'd really like to be doing with our lives. They are a kind of Drug of the Mind. Confused? Blame it on memes.

Quick Tour:

Subscribe to my free newsletter, Meme Update.

Start reading my book Virus of the Mind: the New Science of the Meme for free.

Learn about Internet Mind Viruses and what you can do to stop them.

Find out about living life at Level 3 of consciousness

Read the list of frequently asked questions about memetics.

Memetics FAQ (Frequently Asked Questions)

Memetics Resources

Internet Mind Virus Antidote

Send this page to people who send you annoying chain letters, virus hoaxes, or bad jokes.

The Church of Virus

People who master memetics gain the ability to program their own minds, and the minds of others! What kind of religion would you create with this knowledge?

Hans-Cees Speel's Memetics Page

Dr. Speel is one of the first academic researchers to devote his work to memetics. His page has many interesting links.

C-Realm

See the work of KMO.

The Lucifer Principle

Howard Bloom is one of the world's most interesting people. He writes on everything from politics to memetics. Visit the site he has set up around his book The Lucifer Principle.

Peruse the Unofficial Richard Dawkins Page or

Read Richard Dawkins's essay "Viruses of the Mind"

When I met Prof. Dawkins, he politely greeted me with "Oh, you're the fellow who pinched my title!" It was not a week after the advance information for my book Virus of the Mind was sent to Books in Print that I walked into Barnes & Noble and saw his essay featured on the cover of Free Inquiry magazine. Well, I suppose the meme infected both of us at about the same time...

Memetics Publications on the Web

Browse through a collection of memetics-related writing available on the Web, including papers by Daniel Dennett, Keith Henson, and Liane Gabora.

Journal of Memetics

Have an academic bent? Then peruse the scholarly journal dedicated to memetics. The first issue includes papers by William Calvin, Liane Gabora, and other heavy hitters.

The Generosity Virus

John Stoner has created a designer virus to spread the meme of generosity. Here's what he says about the virus: "A little while ago, I made up these cards. They create a chain of generous acts, memetically. How do you use them? You do something nice for someone, and you do it anonymously. For example, you could pay the toll of the car behind you at a tollbooth. One thing I've done is go to this wonderful bakery near my home, and buy a treat for the next person who walks in the door after I leave. Be creative! And you pass on one of these cards.... check them out."

Eliezer S. Yudkowsky

Here's a smart young man that I'm a big fan of. He's written quite a bit about the future of humanity, especially the "singularity" predicted when artificial intelligence overtakes human intelligence. He's worth getting to know.

Susan Blackmore

Author of The Meme Machine, she has a nice site with more information on memetics.

Last Edited: March 19, 2008. © 1996-2008 Richard Brodie. All rights reserved. Background image © 1996 Lightbourne Images.

Read the original here:

Meme Central - Memes, Memetics, and Mind Virus Resource

Posted in Memetics | Comments Off on Meme Central – Memes, Memetics, and Mind Virus Resource

Seasteading – RationalWiki

Posted: at 12:38 am

Seasteading is the libertarian fantasy of attempting to establish a society on (or under) the sea. Given that a large swath of the oceans are international waters, outside the jurisdiction of any one country, some people see seasteading as the most viable possibility for creating new, autonomous states with their own pet political systems in place.

Given that international maritime law doesn't, as such, recognize ginormous boats or artificial islands as stateless enclaves or independent nations, diplomatic recognition, if the owners actually need it, is somewhat problematic.

Seasteading is inspired by real life examples of boat-based provision of services not legal in certain countries. Examples include casino boats (ships that, upon reaching international waters, open up their gambling facilities to passengers) and the organization Women on Waves, which provides abortion services in countries (such as Ireland, Poland, Portugal and Spain) where abortion is illegal or in which the rules are stricter than they would prefer. Another example is pirate radio stations, which got their name from the fact that many of them operated from boats in international waters.

Several seasteading projects have been started; only two have ever been completed (three if you count Sealand and its 'Prince'), and the vast majority have never even really begun. It is quite possible that herding libertarians is difficult.

Some cryonicists are seasteaders, which implies truly remarkably compartmentalised thinking about the value of large, stable social structures.

As they age, some libertarians are realising that replacing government may be more work than they can personally achieve as actualised individuals.[1] Reason, of course, tells them not to stop thinking about tomorrow.[2]

With the exception of Sealand, there have been three seasteading projects that could be considered "successful" in any sense of the word.

The longest-lived and most successful was the "Republic of Minerva," an artificial island in the South Pacific constructed by real estate millionaire Michael J. Oliver and his Phoenix Foundation using dredged sand to expand the tiny Minerva Reef. The intention was to establish an agrarian anarcho-capitalist utopia; presumably the libertarian supermen would evolve past the need to drink, as there was no source of fresh water on the island. Minerva formally declared independence in 1972 and attempted to establish diplomatic relations with the surrounding nations, though it was mostly ignored. The small settlement lasted for approximately five months, until the government of Tonga sent a military expedition (along with a convict work detail, a brass band, and HRM King Taufa'ahau Tupou himself) to claim the island by force (or rather, re-claim it; the original reef had been considered a culturally important Tongan fishing region). In 1982 a second group of libertarians tried to reclaim the atoll but were again forced off by the Tongan military. Since then, the project has collapsed, and the island has been mostly reclaimed by the sea.

Unabashed, Oliver tried to funnel funds into various separatist groups and revolutionaries in the Bahamas and Vanuatu, but was met with extremely little success. Today, the Phoenix Foundation still chugs on, eyeing tiny islands like the Isle of Man and the Azores and grumbling to themselves.

Rose Island (officially the "Respubliko de la Insulo de la Rozoj") was a 400-square-meter artificial platform in the Mediterranean founded by an Italian casino entrepreneur in 1968. It styled itself as a libertarian capitalist state with Esperanto as its official language, but was in fact little more than a tourist resort complex, and had virtually no space for permanent residents. The Italian government, seeing the project as nothing more than a ploy to avoid having to pay taxes on revenue from the resort, seized the platform with police a few weeks after it opened and destroyed it with explosives.[3]

Operation Atlantis was an American attempt by libertarian soap magnate Werner K. Stiefel to create an anarcho-capitalist utopia (noticing a trend here?) in the Bahamas by building a large ferro-cement ship, sailing it to its destination, anchoring it there and living on it. The boat was built, launched from New York in 1971, and (after capsizing once on the Hudson River and catching fire) taken to its final position in the Caribbean, where it was secured in place. Preparations were made for the residents to immigrate to their new floating city-state, but unfortunately for them it sank almost immediately.[4][5] After two more attempts and eventually pouring a lot of money into an island off the coast of Belize that he couldn't get autonomy for, the project collapsed.

Libertarians are hardly the only people to try and colonize the ocean. China, for instance, has used a version of seasteading in order to enforce its claims on the Spratly Islands, an archipelago in the South China Sea that's claimed in whole or in part by six nations (the PRC, the ROC, Vietnam, the Philippines, Malaysia, and Brunei). They've been hard at work using land reclamation to build artificial islands with airstrips, piers, harbors, and helipads, which they say are for military "and civilian" use.[6]

The video game Bioshock[7] features what is probably the best-known example of a seastead in popular culture.

Read the original:

Seasteading - RationalWiki

Posted in Seasteading | Comments Off on Seasteading – RationalWiki