Monthly Archives: June 2016

Transhumanism in fiction – Wikipedia, the free encyclopedia

Posted: June 26, 2016 at 10:51 am

Many of the tropes of science fiction can be viewed as similar to the goals of transhumanism. Science fiction literature contains many positive depictions of technologically enhanced human life, occasionally set in utopian (especially techno-utopian) societies. However, science fiction's depictions of technologically enhanced humans or other posthuman beings frequently come with a cautionary twist. The more pessimistic scenarios include many dystopian tales of human bioengineering gone wrong.

Examples of "transhumanist fiction" include novels by Linda Nagata, Greg Egan, Zoltan Istvan, and Hannu Rajaniemi. Transhuman novels are often philosophical in nature, exploring the impact such technologies might have on human life. Nagata's novels, for example, explore the relationship between the natural and artificial, and suggest that while transhuman modifications of nature may be beneficial, they may also be hazardous, so should not be lightly undertaken.[1] Egan's Diaspora explores the nature of ideas such as reproduction and questions if they make sense in a post-human context. Istvan's novel The Transhumanist Wager explores how far one person would go to achieve an indefinite lifespan via science and technology.[2] Rajaniemi's novel, while more action oriented, still explores themes such as death and finitude in post-human life.

Fictional depictions of transhumanist scenarios are also seen in other media, such as movies (Transcendence), television series (the Ancients of Stargate SG-1), manga and anime (Ghost in the Shell), role-playing games (Rifts and Eclipse Phase) and video games (Deus Ex or BioShock).

Transhumanism: The World's Most Dangerous Idea?

Posted: at 10:51 am

What idea, if embraced, would pose the greatest threat to the welfare of humanity? This was the question posed by the editors of Foreign Policy in the September/October issue to eight prominent policy intellectuals, among them Francis Fukuyama, professor of international political economy at the Johns Hopkins School of Advanced International Studies and a member of the President's Council on Bioethics.

And Fukuyama's answer? Transhumanism, "a strange liberation movement" whose "crusaders aim much higher than civil rights campaigners, feminists, or gay-rights advocates." This movement, he says, wants "nothing less than to liberate the human race from its biological constraints."

More precisely, transhumanists advocate increased funding for research to radically extend healthy lifespan and favor the development of medical and technological means to improve memory, concentration, and other human capacities. Transhumanists propose that everybody should have the option to use such means to enhance various dimensions of their cognitive, emotional, and physical well-being. Not only is this a natural extension of the traditional aims of medicine and technology, but it is also a great humanitarian opportunity to genuinely improve the human condition.

According to transhumanists, however, the choice whether to avail oneself of such enhancement options should generally reside with the individual. Transhumanists are concerned that the prestige of the President's Council on Bioethics is being used to push a limiting bioconservative agenda that is directly hostile to the goal of allowing people to improve their lives by enhancing their biological capacities.

So why does Fukuyama nominate this transhumanist ideal, of working towards making enhancement options universally available, as the most dangerous idea in the world? His animus against the transhumanist position is so strong that he even wishes for the death of his adversaries: transhumanists, he writes, are "just about the last group that I'd like to see live forever." Why exactly is it so disturbing for Fukuyama to contemplate the suggestion that people might use technology to become smarter, or to live longer and healthier lives?

Fierce resistance has often accompanied technological or medical breakthroughs that force us to reconsider some aspects of our worldview. Just as anesthesia, antibiotics, and global communication networks transformed our sense of the human condition in fundamental ways, so too we can anticipate that our capacities, hopes, and problems will change if the more speculative technologies that transhumanists discuss come to fruition. But apart from vague feelings of disquiet, which we may all share to varying degrees, what specific argument does Fukuyama advance that would justify foregoing the many benefits of allowing people to improve their basic capacities?

Fukuyama's objection is that the defense of equal legal and political rights is incompatible with embracing human enhancement: "Underlying this idea of the equality of rights is the belief that we all possess a human essence that dwarfs manifest differences in skin color, beauty, and even intelligence. This essence, and the view that individuals therefore have inherent value, is at the heart of political liberalism. But modifying that essence is the core of the transhumanist project."

His argument thus depends on three assumptions: (1) there is a unique human essence; (2) only those individuals who have this mysterious essence can have intrinsic value and deserve equal rights; and (3) the enhancements that transhumanists advocate would eliminate this essence. From this, he infers that the transhumanist project would destroy the basis of equal rights.

The concept of such a human essence is, of course, deeply problematic. Evolutionary biologists note that the human gene pool is in constant flux and talk of our genes as giving rise to an extended phenotype that includes not only our bodies but also our artifacts and institutions. Ethologists have over the past couple of decades revealed just how similar we are to our great primate relatives. A thick concept of human essence has arguably become an anachronism. But we can set these difficulties aside and focus on the other two premises of Fukuyamas argument.

The claim that only individuals who possess the human essence could have intrinsic value is mistaken. Only the most callous would deny that the welfare of some non-human animals matters at least to some degree. If a visitor from outer space arrived on our doorstep, and she had consciousness and moral agency just like we humans do, surely we would not deny her moral status or intrinsic value just because she lacked some undefined human essence. Similarly, if some persons were to modify their own biology in a way that alters whatever Fukuyama judges to be their essence, would we really want to deprive them of their moral standing and legal rights? Excluding people from the moral circle merely because they have a different essence from the rest of us is akin to excluding people on the basis of their gender or the color of their skin.

Moral progress in the last two millennia has consisted largely in our gradually learning to overcome our tendency to make moral discriminations on such fundamentally irrelevant grounds. We should bear this hard-earned lesson in mind when we approach the prospect of technologically modified people. Liberal democracies speak to human equality not in the literal sense that all humans are equal in their various capacities, but that they are equal under the law. There is no reason why humans with altered or augmented capacities should not likewise be equal under the law, nor is there any ground for assuming that the existence of such people must undermine centuries of legal, political, and moral refinement.

The only defensible way of basing moral status on human essence is by giving essence a very broad definition, say as possessing the capacity for moral agency. But if we use such an interpretation, then Fukuyama's third premise fails. The enhancements that transhumanists advocate (longer healthy lifespan, better memory, more control over emotions, etc.) would not deprive people of the capacity for moral agency. If anything, these enhancements would safeguard and expand the reach of moral agency.

Fukuyama's argument against transhumanism is therefore flawed. Nevertheless, he is right to draw attention to the social and political implications of the increasing use of technology to transform human capacities. We will indeed need to worry about the possibility of stigmatization and discrimination, either against or on behalf of technologically enhanced individuals. Social justice is also at stake and we need to ensure that enhancement options are made available as widely and as affordably as possible. This is a primary reason why transhumanist movements have emerged. On a grassroots level, transhumanists are already working to promote the ideas of morphological, cognitive, and procreative freedoms with wide access to enhancement options. Despite the occasional rhetorical overreaches by some of its supporters, transhumanism has a positive and inclusive vision for how we can ethically embrace new technological possibilities to lead lives that are better than well.

The only real danger posed by transhumanism, it seems, is that people on both the left and the right may find it much more attractive than the reactionary bioconservatism proffered by Fukuyama and some of the other members of the President's Council.

[For a more developed response, see "In Defense of Posthuman Dignity," Bioethics, 2005, Vol. 19, No. 3, pp. 202-214.]

Immortality, Transhumanism, and Ray Kurzweil's Singularity

Posted: at 10:51 am

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vernor Vinge, "The Coming Technological Singularity," 1993

Futurist and inventor Ray Kurzweil has a plan: he wants to never die.

In order to achieve this goal, he currently takes over 150 supplements per day, eats a calorie-restricted diet (a technique shown to prolong lifespan in some animal studies), drinks ionized water (a type of alkalinized water that supposedly protects against free radicals in the body), and exercises daily, all to promote the healthy functioning of his body; and at 60 years old, he reportedly has the physiology of a man 20 years younger.

But the human body, no matter how well you take care of it, is susceptible to illness, disease, and senescence, the process of cellular change in the body that results in that little thing we all do called aging. (This cellular process is why humans are physiologically unable to live past the age of around 125 years.) Kurzweil is well aware of this, but has a solution: he is just trying to live long enough in his human body until technology reaches the point where man can meld with machine, and he can survive as a cyborg with robotically enhanced features; survive, that is, until the day when he can eventually upload his consciousness onto a hard drive, enabling him to live forever as bits of information stored indefinitely; immortal, in a sense, as long as he has a copy of himself in case the computer fails.

What happens if these technological abilities don't come soon enough? Kurzweil has a back-up plan. If, for some reason, this mind-machine blend doesn't occur in his biological lifetime, Kurzweil is signed up at the Alcor Life Extension Foundation to be cryonically frozen and kept in Scottsdale, Arizona, amongst the approximately 900 others (including famous baseball player Ted Williams) who are currently stored there. There at Alcor, he will wait until the day when scientists discover the ability to reanimate life back into him; and not too long, he expects, as Kurzweil believes this day will come in about 50 years.

Watch a video on Alcor and Cryonics here:

Ray Kurzweil is a fascinating and controversial figure, both famous and infamous for his technological predictions. He is a respected scientist and inventor, known for his accurate predictions of a number of technological events, and recently started Singularity University here in Silicon Valley, an interdisciplinary program (funded in part by Google) that aims to "assemble, educate and inspire a cadre of leaders" around issues of accelerating technologies.

Ray Kurzweil

Kurzweil's best-known predictions are encapsulated in an event he forecasts called The Singularity: a period, predicted within the next few decades, when artificial intelligence will exceed human intelligence, and technologies like genetic engineering, nanotechnology, and computer technology will radically transform human life, enabling mind, body, and machine to become one.

He is also a pioneer of a movement called transhumanism, which is defined by the belief that technology will ultimately replace biology and rid human beings of all the things that, well, make us human, like disease, aging, and, you guessed it, death. Why be human when you can be something better? When artificial intelligence and nanotechnology come around in the Singularity, Kurzweil thinks, being biologically human will become obsolete. With cyborg features and enhanced cognitive capacities, we will have fewer deficiencies and more capabilities; we will possess the ability to become more like machines, and we'll be better for it.

Watch a preview for a film about Kurzweil entitled Transcendent Man:

Kurzweil outlines his vision of our technological future in his article "Reinventing Humanity: The Future of Machine-Human Intelligence" for The Futurist magazine, which raises some juicy points to consider from the perspective of ethics and technology. He explains The Singularity in his own words:

We stand on the threshold of the most profound and transformative event in the history of humanity, the singularity.

What is the Singularity? From my perspective, the Singularity is a future period during which the pace of technological change will be so fast and far-reaching that human existence on this planet will be irreversibly altered. We will combine our brain power (the knowledge, skills, and personality quirks that make us human) with our computer power in order to think, reason, communicate, and create in ways we can scarcely even contemplate today.

This merger of man and machine, coupled with the sudden explosion in machine intelligence and rapid innovation in the fields of gene research as well as nanotechnology, will result in a world where there is no distinction between the biological and the mechanical, or between physical and virtual reality. These technological revolutions will allow us to transcend our frail bodies with all their limitations. Illness, as we know it, will be eradicated. Through the use of nanotechnology, we will be able to manufacture almost any physical product upon demand, world hunger and poverty will be solved, and pollution will vanish. Human existence will undergo a quantum leap in evolution. We will be able to live as long as we choose. The coming into being of such a world is, in essence, the Singularity.

The details of the coming Singularity, Kurzweil outlines, will occur in three areas: the genetic revolution, the nanotech revolution, and strong AI, which means, essentially, machines that are smarter than humans.

The first he describes is the nanotechnology revolution, which refers to a type of technology that manipulates matter on an atomic and molecular scale, potentially allowing us to reassemble matter in a variety of ways. Kurzweil believes nanotechnology will give us the capability to create atomic-scale robots that can clean our blood cells and eradicate disease; he also thinks nanotechnology will allow us to create essentially anything by assembling it through nanobots (for example, he thinks that nanotechnology will enable us to e-mail physical things like clothing, much like we can currently e-mail audio files). He explains:

The nanotechnology revolution will enable us to redesign and rebuild, molecule by molecule, our bodies and brains and the world with which we interact, going far beyond the limitations of biology.

In the future, nanoscale devices will run hundreds of tests simultaneously on tiny samples of a given substance. These devices will allow extensive tests to be conducted on nearly invisible samples of blood.

In the area of treatment, a particularly exciting application of this technology is the harnessing of nanoparticles to deliver medication to specific sites in the body. Nanoparticles can guide drugs into cell walls and through the blood-brain barrier. Nanoscale packages can be designed to hold drugs, protect them through the gastrointestinal tract, ferry them to specific locations, and then release them in sophisticated ways that can be influenced and controlled, wirelessly, from outside the body.

With regard to AI, Kurzweil envisions what will eventually become a post-human future, where we upload our consciousness to computers and live forever as stored information:

The implementation of artificial intelligence in our biological systems will mark an evolutionary leap forward for humanity, but it also implies we will indeed become more machine than human. Billions of nanobots will travel through the bloodstream in our bodies and brains. In our bodies, they will destroy pathogens, correct DNA errors, eliminate toxins, and perform many other tasks to enhance our physical well-being. As a result, we will be able to live indefinitely without aging.

Despite the wonderful future potential of medicine, real human longevity will only be attained when we move away from our biological bodies entirely. As we move toward a software-based existence, we will gain the means of backing ourselves up (storing the key patterns underlying our knowledge, skills, and personality in a digital setting) thereby enabling a virtual immortality. Thanks to nanotechnology, we will have bodies that we can not just modify but change into new forms at will. We will be able to quickly change our bodies in full-immersion virtual-reality environments incorporating all of the senses during the 2020s and in real reality in the 2040s.

Now, the idea of becoming nanobot-driven robots is hard to wrap one's head around, particularly living in a time when people struggle to get their Bluetooth devices to work correctly. But even though to most people these predictions seem very extreme, Kurzweil explains why he thinks these changes are coming fast, even if we can't conceive of them now. He explains that, in the vein of Moore's law (which describes how the density of transistors on computer chips has doubled roughly every two years since the chip's invention), technology develops exponentially and thus the rate of change is rapidly increasing in the modern day:

How is it possible we could be so close to this enormous change and not see it? The answer is the quickening nature of technological innovation. In thinking about the future, few people take into consideration the fact that human scientific progress is exponential...

In other words, the twentieth century was gradually speeding up to today's rate of progress; its achievements, therefore, were equivalent to about 20 years of progress at the rate of 2000. We'll make another 20 years of progress in just 14 years (by 2014), and then do the same again in only seven years. To express this another way, we won't experience 100 years of technological advance in the twenty-first century; we will witness on the order of 20,000 years of progress (again, when measured by today's progress rate), or progress on a level of about 1,000 times greater than what was achieved in the twentieth century.
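As a rough illustration of the doubling arithmetic behind this claim, here is a minimal sketch (my own toy calculation, not Kurzweil's actual model) that assumes the rate of progress, measured in "year-2000-equivalent" years of advance per calendar year, starts at 1.0 in 2000 and doubles every ten years; the doubling period and starting rate are illustrative assumptions, not figures from the article:

```python
# Toy illustration of exponentially accelerating progress.
# Assumption (hypothetical): progress rate starts at 1.0 "year-2000-equivalent"
# years of advance per calendar year and doubles every DOUBLING_PERIOD_YEARS.

DOUBLING_PERIOD_YEARS = 10  # assumed doubling time, for illustration only
START_RATE = 1.0            # one equivalent year of progress per calendar year


def progress_in_century(doubling_period: float = DOUBLING_PERIOD_YEARS,
                        start_rate: float = START_RATE) -> float:
    """Sum year-by-year progress over 2000-2099 under the doubling assumption."""
    total = 0.0
    for year in range(100):
        rate = start_rate * 2 ** (year / doubling_period)
        total += rate
    return total


if __name__ == "__main__":
    total = progress_in_century()
    print("Calendar years elapsed: 100")
    print(f"Year-2000-equivalent years of progress: {total:,.0f}")
    # With a 10-year doubling period this sums to roughly 14,000 equivalent
    # years, the same order of magnitude as the ~20,000 figure quoted above.
```

The total is sensitive to the assumed doubling period (shorter periods push it well past 20,000), but the point of the sketch is only that steady doubling over a century yields totals in the tens of thousands of equivalent years rather than 100.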

Reflections

There are so many questions to ask, it's hard to know where to start. Considering The Singularity, many questions arise (the first, which you're probably thinking, is "Is this really possible?!"). But with that question put temporarily aside, some questions seem to be: what are the promise and perils of nanotechnology, and how can we approach them responsibly? What types of genetic engineering, if any, should we pursue, and what types should we avoid? If we really could live forever, should we, particularly if it meant living no longer as humans, but as machines? And what happens to who we are as human beings (our beliefs, our religions and faiths, our thoughts about our purpose) if we pursue this type of future?

Each of these topics is rife with ethical and existential questions, and discussion of many of them requires scientific knowledge that extends beyond my ability to represent them here. But contemplating these questions broadly, even without extensive knowledge of their specifics, brings into focus some fundamental questions about the principles of human experience, and about the broad issue of our technological future and how to approach it. The more we envision a technologically saturated future, I think, the more our human values are called upon to be revealed as we react, respond, flinch, or embrace the pictures of our future reflected in these predictions. They ask us to consider: what do we value about being human? What do we want to hold on to about being human, and what do we want to replace, augment, and transform with technology? Is living as stored information really any life at all?

In addition to these questions, exploring these futuristic issues calls us to consider some of our fundamental principles about technology. A basic yet extremely complex question arises: should all technology be pursued? In other words, should we ever restrict technological innovation, and say that some technologies, because of their risks to humanity or to certain human values, simply shouldn't be developed?

Reflections on this question bring up the topic of techno-optimism and techno-pessimism, which I wrote about briefly here.

Kurzweil, it seems to go without saying, is a full-fledged techno-optimist, interested in giving technology free rein, even if that means leaving everything that is recognizably human behind. He concedes that we need to be responsible about our use of nanotechnology (a technology which some fear could bring about the end of the world; see the grey goo theory), but for the most part he is a proponent of full-fledged technological expansion. Reflection is important, but in his view no amount of it should limit these technologies:

"We don't have to look past today to see the intertwined promise and peril of technological advancement," he says. "Imagine describing the dangers (atomic and hydrogen bombs for one thing) that exist today to people who lived a couple of hundred years ago. They would think it mad to take such risks. But how many people in 2006 would really want to go back to the short, brutish, disease-filled, poverty-stricken, disaster-prone lives that 99% of the human race struggled through two centuries ago?"

We may romanticize the past, but up until fairly recently most of humanity lived extremely fragile lives in which one all-too-common misfortune could spell disaster. Two hundred years ago, life expectancy for females in the record-holding country (Sweden) was roughly 35 years, very brief compared with the longest life expectancy today, almost 85 years for Japanese women. Life expectancy for males was roughly 33 years, compared with the current 79 years. Half a day was often required to prepare an evening meal, and hard labor characterized most human activity. There were no social safety nets. Substantial portions of our species still live in this precarious way, which is at least one reason to continue technological progress and the economic improvement that accompanies it. Only technology, with its ability to provide orders of magnitude of advances in capability and affordability, has the scale to confront problems such as poverty, disease, pollution, and the other overriding concerns of society today. The benefits of applying ourselves to these challenges cannot be overstated.

But another, more technologically conservative view is important to consider, one characterized by thinkers who question whether these technologies should be proliferated, or even pursued at all.

William Joy, co-founder of Sun Microsystems, famously countered Kurzweil's predictions in his article "Why the Future Doesn't Need Us." He opens the article by discussing his meeting with Kurzweil:

From the moment I became involved in the creation of new technologies, their ethical dimensions have concerned me, but it was only in the autumn of 1998 that I became anxiously aware of how great are the dangers facing us in the 21st century. I can date the onset of my unease to the day I met Ray Kurzweil, the deservedly famous inventor of the first reading machine for the blind and many other amazing things.

I had always felt sentient robots were in the realm of science fiction. But now, from someone I respected, I was hearing a strong argument that they were a near-term possibility. I was taken aback, especially given Ray's proven ability to imagine and create the future. I already knew that new technologies like genetic engineering and nanotechnology were giving us the power to remake the world, but a realistic and imminent scenario for intelligent robots surprised me.

Joy then discusses how these technologies (namely nanotechnology and artificial intelligence) pose a new, unparalleled threat to humanity, and argues that as a result we shouldn't pursue them; in fact, we should purposefully restrict them, on the principle that the harm and threat they pose to humanity outweigh whatever benefit they could bring.

Accustomed to living with almost routine scientific breakthroughs, we have yet to come to terms with the fact that the most compelling 21st-century technologies (robotics, genetic engineering, and nanotechnology) pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: they can self-replicate. A bomb is blown up only once, but one bot can become many, and quickly get out of control.

Failing to understand the consequences of our inventions while we are in the rapture of discovery and innovation seems to be a common fault of scientists and technologists; we have long been driven by the overarching desire to know that is the nature of science's quest, not stopping to notice that the progress to newer and more powerful technologies can take on a life of its own.

We are being propelled into this new century with no plan, no control, no brakes. Have we already gone too far down the path to alter course? I don't believe so, but we aren't trying yet, and the last chance to assert control (the fail-safe point) is rapidly approaching. We have our first pet robots, as well as commercially available genetic engineering techniques, and our nanoscale techniques are advancing rapidly. While the development of these technologies proceeds through a number of steps, it isn't necessarily the case, as happened in the Manhattan Project and the Trinity test, that the last step in proving a technology is large and hard. The breakthrough to wild self-replication in robotics, genetic engineering, or nanotechnology could come suddenly, reprising the surprise we felt when we learned of the cloning of a mammal.

He closes his essay saying:

Thoreau also said that we will be "rich in proportion to the number of things which we can afford to let alone." We each seek to be happy, but it would seem worthwhile to question whether we need to take such a high risk of total destruction to gain yet more knowledge and yet more things; common sense says that there is a limit to our material needs and that certain knowledge is too dangerous and is best forgone.

Neither should we pursue near immortality without considering the costs... A technological approach to Eternity (near immortality through robotics) may not be the most desirable utopia, and its pursuit brings clear dangers. Maybe we should rethink our utopian choices.

Another view that counters Kurzweil's is presented by Richard Eckersley, one focused a bit less on the scientific dangers and more on the threat to human values:

Why pursue this (Kurzweil's) future? The future world that Ray Kurzweil describes bears almost no relationship to human well-being that I am aware of. In essence, human health and happiness comes from being connected and engaged, from being suspended in a web of relationships and interests (personal, social and spiritual) that give meaning to our lives. The intimacy and support provided by close personal relationships seem to matter most; isolation exacts the highest price. The need to belong is more important than the need to be rich. Meaning matters more than money and what it buys.

We are left with the matter of destiny: it is our preordained fate, Kurzweil suggests, to advance technologically until the entire universe is at our fingertips. The question then becomes, preordained by whom or what? Biological evolution has not set this course for us. Is technology itself the planner? Perhaps it will eventually be, but not yet.

We are left to conclude that we will do this because it is we who have decided it is our destiny.

Joy and Eckersley powerfully warn against our pursuit of a Kurzweil-type future. So we may have the technical ability to achieve machine-like capacities; does that mean we should? On this view, this technological future, though perhaps possible, is not therefore preferable. The technologies that Kurzweil speaks of are dangerous, presenting a new type of threat that we have not faced before as humans, and the risks of pursuing them far outweigh the benefits.

If we are to continue down Kurzweil's path, we may be able to pursue remarkable things conceived of so far mostly in science fiction, a future where we are no longer humans at all, but artifacts of our own technological creations. But if we are to heed Joy's and Eckersley's views, we would practice saying "enough is enough"; we would say we have sufficient technology to live reasonably happy lives, and that by encouraging the development of these new technologies we might be unleashing a Pandora's box of entities that could put humanity in ruins forever. We would say, "Yes, there is tremendous promise in these technologies; but there is, even more so, a tremendous risk." We need to hold fast to the human values of restraint and temperance, lest we find ourselves equipped with the capacity to alter ourselves and the world, and yet unable to handle or control that immense power.

So the camps seem to be these: Kurzweil believes technology reduces suffering, and that we should pursue it for that reason to any end, even until we are no longer human but become technology ourselves. (Indeed, he feels we have a moral imperative to pursue these technologies for this reason.) Joy believes there are too many dangers in this type of future. And Eckersley asks, why would we want this future, anyway? I am left thinking about a number of things:

First, I am intrigued by Kurzweil's unwavering love for technology, because it seems to me that technology has both its strengths and its weaknesses, and that such faith in a technological system greatly overinflates the capacities of technology to cure all of the world's problems while overlooking its very real drawbacks. I wonder about putting so much faith in technology to solve all our ills and replace all our deficiencies. Is it really such a healing, improving force? Would it really be possible to achieve this technological utopia without some potentially disastrous consequences?

I also can't help but wonder what role technology, as its own force, plays in this debate. People often worry about rebellious robots or artificially intelligent beings taking over; but is technology already, in a sense, guiding us, in control of us, instead of us controlling it? It seems harder and harder to resist the grip of technology, even as we face a future that, as Joy says, no longer needs us. Isn't there something a bit strange about humans contemplating, and preferring, a post-human future? Does it indicate, in some sense, that technology has already overtaken man, and is gearing us down a path until it fully reigns supreme?

I am also left wondering, in part for the aforementioned reason, whether it is possible to forego the development of certain technologies, as Joy suggests, given our current track record and inclinations towards the use of technology. It always seems with technology that if we have the capacity to do something, then we inevitably will. Is it possible to stop the development of technology, especially if that means also giving up some of its potential benefits? And if we aren't drawing the line at genetic engineering, nanotechnology, and artificial intelligence, does that mean we will never actually draw a line? What does it say about human nature that we forever seek this sort of technological progress, even when it robs us of what we currently conceive of as making us human? Are there core values to being human that will persevere, or are we really just a fleeting blip in the evolutionary climb towards becoming transhumans?

Concluding Thoughts

The ideas Kurzweil puts forth as his vision of our future really force one to consider what things about being human seem worth holding onto (if any). And even if his predictions don't materialize in the way or the time frame he anticipates, it does seem undeniable that we are at a critical turning point in our species' history. Indeed, the decisions we choose to make now regarding these fundamentally reshaping technologies will affect generations to come in a profound way, generations whose lives will be radically different based on what roads we choose to go down with genetic engineering, artificial intelligence, and nanotechnology.

But making these choices is not strictly a technical task, concerned merely with what we are able, technologically speaking, to accomplish; rather, it really requires us to decide our core beliefs about what makes a good life; to consider what is worth risking about being human beings, not only to alleviate suffering but also to engage in these self-enhancing technologies that will supposedly make us stronger, smarter, and less destructible; and to grapple with these fundamental questions of life and death that are not technological issues but rather metaphysical ones. Indeed, it's no small philosophical feat to reshape and change the human genome; it's no small feat to create artificial beings smarter than human beings; and it's no small feat to eradicate what has, since the birth of mankind, defined our human experience: the fleeting nature of life, and the inevitability of death. Taking this power and control into our own hands requires not just the capability to achieve extended life from a technical standpoint, but a completely redefined scope of who we are, what we want, and what our purpose is on this planet.

There are questions, of course, about the moral decision of living forever. What would we do about overpopulation: would we stop procreating completely? Does a person living now have more of a right to be alive than a person who hasn't been born yet? Where would we derive purpose in life if there were no end point? These would all be real questions to consider in this type of scenario, and they are questions that would require real reflection. With a reshaped experience of what it means to be human, we would be required to make decisions about our lives that we've never even had to consider making before.

But if Kurzweil is correct, then never have we had such power over our own destinies. In Kurzweil's world, there is no higher power or God divining our life course, nor is there an afterlife or Heaven worth gaining entrance to. The biological and technical underpinnings of life are, in his view, manipulable at our will; we can defy what some might call our God-given biology and become our own makers. We can even make our own rules. And along with that power would come the responsibility to answer some very weighty philosophical questions, for nothing else would be determining those answers for us.

My question is, do we really want that responsibility? Are we really equipped to handle that type of power? And furthermore, in getting caught up in all the ways these technologies could enhance our lives, in getting caught up in the idea that all technological innovation is definitively progress, are we less and less able to step back and ask the philosophical and ethical questions about whether this is really what a good life looks like?

Questions:

When you envision our technological future, do you share Kurzweil's dreams? Joy's fears? Eckersley's questions about our human values being lost?

Should we place limits on certain technologies, given the dangers they present? Are there any types of technologies we simply shouldn't pursue?

What does Extropianism mean? – Definitions.net

Posted: at 10:51 am

Extropianism

Extropianism, also referred to as the philosophy of Extropy, is an evolving framework of values and standards for continuously improving the human condition. Extropians believe that advances in science and technology will someday let people live indefinitely. An extropian may wish to contribute to this goal, e.g. by doing research and development or volunteering to test new technology. Extropianism describes a pragmatic consilience of transhumanist thought guided by a proactionary approach to human evolution and progress. Originating in a set of principles developed by Dr. Max More, The Principles of Extropy, extropian thinking places strong emphasis on rational thinking and practical optimism. According to More, these principles "do not specify particular beliefs, technologies, or policies". Extropians share an optimistic view of the future, expecting considerable advances in computational power, life extension, nanotechnology and the like. Many extropians foresee the eventual realization of unlimited maximum life spans, and the recovery, thanks to future advances in biomedical technology or mind uploading, of those whose bodies/brains have been preserved by means of cryonics.

Myths of Individualism | Libertarianism.org

Posted: at 10:51 am

Sep 6, 2011

Palmer takes on the misconceptions of individualism common to communitarian critics of liberty.

It has recently been asserted that libertarians, or classical liberals, actually think that "individual agents are fully formed and their value preferences are in place prior to and outside of any society." They "ignore robust social scientific evidence about the ill effects of isolation," and, yet more shocking, they "actively oppose the notion of shared values or the idea of the common good." I am quoting from the 1995 presidential address of Professor Amitai Etzioni to the American Sociological Association (American Sociological Review, February 1996). As a frequent talk show guest and as editor of the journal The Responsive Community, Etzioni has come to some public prominence as a publicist for a political movement known as communitarianism.

Etzioni is hardly alone in making such charges. They come from both left and right. From the left, Washington Post columnist E. J. Dionne Jr. argued in his book Why Americans Hate Politics that the growing popularity of the libertarian cause suggested that many Americans "had even given up on the possibility of a common good," and, in a recent essay in the Washington Post Magazine, that the libertarian emphasis on the freewheeling individual seems to assume that "individuals come into the world as fully formed adults who should be held responsible for their actions from the moment of birth." From the right, the late Russell Kirk, in a vitriolic article titled "Libertarians: The Chirping Sectaries," claimed that the "perennial libertarian, like Satan, can bear no authority, temporal or spiritual" and that the libertarian "does not venerate ancient beliefs and customs, or the natural world, or his country, or the immortal spark in his fellow men."

More politely, Sen. Dan Coats (R-Ind.) and David Brooks of the Weekly Standard have excoriated libertarians for allegedly ignoring the value of community. Defending his proposal for more federal programs to rebuild community, Coats wrote that his bill "is self-consciously conservative, not purely libertarian. It recognizes, not only individual rights, but the contribution of groups rebuilding the social and moral infrastructure of their neighborhoods." The implication is that individual rights are somehow incompatible with participation in groups or neighborhoods.

Such charges, which are coming with increasing frequency from those opposed to classical liberal ideals, are never substantiated by quotations from classical liberals; nor is any evidence offered that those who favor individual liberty and limited constitutional government actually think as charged by Etzioni and his echoes. Absurd charges often made and not rebutted can come to be accepted as truths, so it is imperative that Etzioni and other communitarian critics of individual liberty be called to account for their distortions.

Let us examine the straw man of atomistic individualism that Etzioni, Dionne, Kirk, and others have set up. The philosophical roots of the charge have been set forth by communitarian critics of classical liberal individualism, such as the philosopher Charles Taylor and the political scientist Michael Sandel. For example, Taylor claims that, because libertarians believe in individual rights and abstract principles of justice, they believe in "the self-sufficiency of man alone, or, if you prefer, of the individual." That is an updated version of an old attack on classical liberal individualism, according to which classical liberals posited "abstract individuals" as the basis for their views about justice.

Those claims are nonsense. No one believes that there are actually abstract individuals, for all individuals are necessarily concrete. Nor are there any truly self-sufficient individuals, as any reader of The Wealth of Nations would realize. Rather, classical liberals and libertarians argue that the system of justice should abstract from the concrete characteristics of individuals. Thus, when an individual comes before a court, her height, color, wealth, social standing, and religion are normally irrelevant to questions of justice. That is what equality before the law means; it does not mean that no one actually has a particular height, skin color, or religious belief. Abstraction is a mental process we use when trying to discern what is essential or relevant to a problem; it does not require a belief in abstract entities.

It is precisely because neither individuals nor small groups can be fully self-sufficient that cooperation is necessary to human survival and flourishing. And because that cooperation takes place among countless individuals unknown to each other, the rules governing that interaction are abstract in nature. Abstract rules, which establish in advance what we may expect of one another, make cooperation possible on a wide scale.

No reasonable person could possibly believe that individuals are fully formed outside society, in isolation, if you will. That would mean that no one could have had any parents, cousins, friends, personal heroes, or even neighbors. Obviously, all of us have been influenced by those around us. What libertarians assert is simply that differences among normal adults do not imply different fundamental rights.

Libertarianism is not at base a metaphysical theory about the primacy of the individual over the abstract, much less an absurd theory about abstract individuals. Nor is it an anomic rejection of traditions, as Kirk and some conservatives have charged. Rather, it is a political theory that emerged in response to the growth of unlimited state power; libertarianism draws its strength from a powerful fusion of a normative theory about the moral and political sources and limits of obligations and a positive theory explaining the sources of order. Each person has the right to be free, and free persons can produce order spontaneously, without a commanding power over them.

What of Dionne's patently absurd characterization of libertarianism, that "individuals come into the world as fully formed adults who should be held responsible for their actions from the moment of birth"? Libertarians recognize the difference between adults and children, as well as differences between normal adults and adults who are insane or mentally hindered or retarded. Guardians are necessary for children and abnormal adults, because they cannot make responsible choices for themselves. But there is no obvious reason for holding that some normal adults are entitled to make choices for other normal adults, as paternalists of both left and right believe. Libertarians argue that no normal adult has the right to impose choices on other normal adults, except in abnormal circumstances, such as when one person finds another unconscious and administers medical assistance or calls an ambulance.

What distinguishes libertarianism from other views of political morality is principally its theory of enforceable obligations. Some obligations, such as the obligation to write a thank-you note to one's host after a dinner party, are not normally enforceable by force. Others, such as the obligation not to punch a disagreeable critic in the nose or to pay for a pair of shoes before walking out of the store in them, are. Obligations may be universal or particular. Individuals, whoever and wherever they may be (i.e., in abstraction from particular circumstances), have an enforceable obligation to all other persons: not to harm them in their lives, liberties, health, or possessions. In John Locke's terms, "Being all equal and independent, no one ought to harm another in his life, health, liberty, or possessions." All individuals have the right that others not harm them in their enjoyment of those goods. The rights and the obligations are correlative and, being both universal and negative in character, are capable under normal circumstances of being enjoyed by all simultaneously. It is the universality of the human right not to be killed, injured, or robbed that is at the base of the libertarian view, and one need not posit an abstract individual to assert the universality of that right. It is his veneration, not his contempt, for the immortal spark in his fellow men that leads the libertarian to defend individual rights.

Those obligations are universal, but what about particular obligations? As I write this, I am sitting in a coffee house and have just ordered another coffee. I have freely undertaken the particular obligation to pay for the coffee: I have transferred a property right to a certain amount of my money to the owner of the coffee shop, and she has transferred the property right to the cup of coffee to me. Libertarians typically argue that particular obligations, at least under normal circumstances, must be created by consent; they cannot be unilaterally imposed by others. Equality of rights means that some people cannot simply impose obligations on others, for the moral agency and rights of those others would then be violated. Communitarians, on the other hand, argue that we are all born with many particular obligations, such as to give to this body of persons (called a state or, more nebulously, a nation, community, or folk) so much money, so much obedience, or even one's life. And they argue that those particular obligations can be coercively enforced. In fact, according to communitarians such as Taylor and Sandel, I am actually constituted as a person, not only by the facts of my upbringing and my experiences, but by a set of very particular unchosen obligations.

To repeat, communitarians maintain that we are constituted as persons by our particular obligations, and therefore those obligations cannot be a matter of choice. Yet that is a mere assertion and cannot substitute for an argument that one is obligated to others; it is no justification for coercion. One might well ask: if an individual is born with the obligation to obey, who is born with the right to command? If one wants a coherent theory of obligations, there must be someone, whether an individual or a group, with the right to the fulfillment of the obligation. If I am constituted as a person by my obligation to obey, who is constituted as a person by the right to obedience? Such a theory of obligation may have been coherent in an age of God-kings, but it seems rather out of place in the modern world. To sum up, no reasonable person believes in the existence of abstract individuals, and the true dispute between libertarians and communitarians is not about individualism as such but about the source of particular obligations, whether imposed or freely assumed.

A theory of obligation focusing on individuals does not mean that there is no such thing as society or that we cannot speak meaningfully of groups. The fact that there are trees does not mean that we cannot speak of forests, after all. Society is not merely a collection of individuals, nor is it some bigger or better thing separate from them. Just as a building is not a pile of bricks but the bricks and the relationships among them, society is not a person, with its own rights, but many individuals and the complex set of relationships among them.

A moment's reflection makes it clear that claims that libertarians reject shared values and the common good are incoherent. If libertarians share the value of liberty (at a minimum), then they cannot actively oppose the notion of shared values, and if libertarians believe that we will all be better off if we enjoy freedom, then they have not given up on the possibility of a common good, for a central part of their efforts is to assert what the common good is! In response to Kirk's claim that libertarians reject tradition, let me point out that libertarians defend a tradition of liberty that is the fruit of thousands of years of human history. In addition, pure traditionalism is incoherent, for traditions may clash, and then one has no guide to right action. Generally, the statement that libertarians reject tradition is both tasteless and absurd. Libertarians follow religious traditions, family traditions, ethnic traditions, and social traditions such as courtesy and even respect for others, which is evidently not a tradition Kirk thought it necessary to maintain.

The libertarian case for individual liberty, which has been so distorted by communitarian critics, is simple and reasonable. It is obvious that different individuals require different things to live good, healthy, and virtuous lives. Despite their common nature, people are materially and numerically individuated, and we have needs that differ. So, how far does our common good extend?

Karl Marx, an early and especially brilliant and biting communitarian critic of libertarianism, asserted that civil society is based on a "decomposition of man" such that man's essence is no longer in community but in difference; under socialism, in contrast, man would realize his nature as a "species being." Accordingly, socialists believe that collective provision of everything is appropriate; in a truly socialized state, we would all enjoy the same common good and conflict simply would not occur. Communitarians are typically much more cautious, but despite a lot of talk they rarely tell us much about what our common good might be. The communitarian philosopher Alasdair MacIntyre, for instance, in his influential book After Virtue, insists for 219 pages that there is a good life for man that must be pursued in common and then rather lamely concludes that "the good life for man is the life spent in seeking for the good life for man."

A familiar claim is that providing retirement security through the state is an element of the common good, for it brings all of us together. But who is included in "all of us"? Actuarial data show that African-American males who have paid the same taxes into the Social Security system as have Caucasian males over their working lives stand to get back about half as much. Further, more black than white males will die before they receive a single penny, meaning all of their money has gone to benefit others and none of their investments are available to their families. In other words, they are being robbed for the benefit of nonblack retirees. Are African-American males part of the "all of us" who are enjoying a common good, or are they victims of the "common good" of others? (As readers of this magazine should know, all would be better off under a privatized system, which leads libertarians to assert the common good of freedom to choose among retirement systems.) All too often, claims about the common good serve as covers for quite selfish attempts to secure private goods; as the classical liberal Austrian novelist Robert Musil noted in his great work The Man without Qualities, "Nowadays only criminals dare to harm others without philosophy."

Libertarians recognize the inevitable pluralism of the modern world and for that reason assert that individual liberty is at least part of the common good. They also understand the absolute necessity of cooperation for the attainment of one's ends; a solitary individual could never actually be self-sufficient, which is precisely why we must have rules (governing property and contracts, for example) to make peaceful cooperation possible, and we institute government to enforce those rules. The common good is a system of justice that allows all to live together in harmony and peace; a common good more extensive than that tends to be, not a common good for "all of us," but a common good for some of us at the expense of others of us. (There is another sense, understood by every parent, to the term self-sufficiency. Parents normally desire that their children acquire the virtue of pulling their own weight and not subsisting as scroungers, layabouts, moochers, or parasites. That is a necessary condition of self-respect; Taylor and other critics of libertarianism often confuse the virtue of self-sufficiency with the impossible condition of never relying on or cooperating with others.)

The issue of the common good is related to the beliefs of communitarians regarding the personality or the separate existence of groups. Both are part and parcel of a fundamentally unscientific and irrational view of politics that tends to personalize institutions and groups, such as the state or nation or society. Instead of enriching political science and avoiding the alleged naiveté of libertarian individualism, as communitarians claim, however, the personification thesis obscures matters and prevents us from asking the interesting questions with which scientific inquiry begins. No one ever put the matter quite as well as the classical liberal historian Parker T. Moon of Columbia University in his study of 19th-century European imperialism, Imperialism and World Politics:

Language often obscures truth. More than is ordinarily realized, our eyes are blinded to the facts of international relations by tricks of the tongue. When one uses the simple monosyllable "France" one thinks of France as a unit, an entity. When, to avoid awkward repetition, we use a personal pronoun in referring to a country (when, for example, we say "France sent her troops to conquer Tunis") we impute not only unity but personality to the country. The very words conceal the facts and make international relations a glamorous drama in which personalized nations are the actors, and all too easily we forget the flesh-and-blood men and women who are the true actors. How different it would be if we had no such word as "France," and had to say instead: thirty-eight million men, women and children of very diversified interests and beliefs, inhabiting 218,000 square miles of territory! Then we should more accurately describe the Tunis expedition in some such way as this: "A few of these thirty-eight million persons sent thirty thousand others to conquer Tunis." This way of putting the fact immediately suggests a question, or rather a series of questions. Who are the few? Why did they send the thirty thousand to Tunis? And why did these obey?

Group personification obscures, rather than illuminates, important political questions. Those questions, centering mostly around the explanation of complex political phenomena and moral responsibility, simply cannot be addressed within the confines of group personification, which drapes a cloak of mysticism around the actions of policymakers, thus allowing some to use philosophy (and mystical philosophy, at that) to harm others.

Libertarians are separated from communitarians by differences on important issues, notably whether coercion is necessary to maintain community, solidarity, friendship, love, and the other things that make life worth living and that can be enjoyed only in common with others. Those differences cannot be swept away a priori; their resolution is not furthered by shameless distortion, absurd characterizations, or petty name-calling.

"Myths of Individualism" originally appeared in the September/October 1996 issue of Cato Policy Report.

More:

Myths of Individualism | Libertarianism.org

Posted in Libertarianism | Comments Off on Myths of Individualism | Libertarianism.org

Ethical Egoism and Biblical Self-Interest | Papers at …

Posted: at 10:51 am


J.P. Moreland, Westminster Theological Journal 59 (1997), pp. 257-68. Nov 30, 1997.

The Old and New Testaments contain a number of passages that in some way or another associate moral obligation with self-interest in the form of seeking rewards and avoiding punishment. Thus, Exod 20:12 says "Honor your father and your mother, that your days may be prolonged in the land which the Lord your God gives you." Jesus tells us to "seek first His kingdom, and His righteousness; and all these things shall be added to you" (Matt 6:33). On another occasion he warns his listeners that at the end of the age "the angels shall come forth, and take out the wicked from among the righteous, and will cast them into the furnace of fire; there shall be weeping and gnashing of teeth" (Matt 13:49-50). Paul states his ambition to be pleasing to the Lord "for we must all appear before the judgment-seat of Christ, that each one may be recompensed for his deeds" (II Cor 5:10). The fact that rewards and punishments are associated with self-interest and moral or religious obligation is clear throughout the scriptures. What is not so clear is just how to understand these passages from the point of view of normative moral theory. More specifically, do texts of this sort imply that ethical egoism (to be defined below) is the correct normative ethical theory derived from the Bible? In over a decade of teaching ethics, I regularly have students, when first exposed to ethical egoism, draw the conclusion that this ethical system is, indeed, the best way to capture biblical ethics. And while popular works on the spiritual life are not sophisticated enough to be clear on the matter, a number of them, especially those that promote a prosperity gospel, would seem to be expressions of ethical egoism.

The identification of ethical egoism with biblical ethics is not confined to popular venues. Secular philosopher John Hospers argues that when believers justify being moral on the basis of a doctrine of eternal rewards and punishments, this is simply "an appeal to self-interest ... [N]othing could be a clearer appeal to naked and unbridled power than this."1 The vast majority of Christian philosophers and theologians have seen some combination of deontological and virtue ethics to be the best way to capture the letter and spirit of biblical ethics. Still, the problem of egoism has been noted by some and embraced by others. Years ago, Paul Ramsey raised the problem of ethical egoism when he queried,

But what of salvation? Is not salvation the end for which Christians quest? What of rewards in the kingdom of heaven? What of man's everlasting and supernatural good, the soul's life with God in the hereafter, man's chief end, glorifying God and enjoying him forever. Is not salvation itself a supreme value which Christians seek with earnest passion, each first of all for himself?2

Theologian Edward John Carnell (inaccurately in my view) has been understood as having promoted some form of ethical egoism.3

In recent years, Christian philosopher and theologian Philip R. West has argued that deontological ethics do not capture biblical morality and that ethical egoism is the correct normative theory in this regard. Says West, "They [the OT writers] apparently believed not only that actual divine punishment is enough to establish the obligation to obey divine commands, but like Paul, that the absence of actual divine punishment erodes the obligatory status of these commands."4 Elsewhere, West defends the thesis that some agent A has a moral obligation to do P if and only if doing P will maximize A's own self-interest. He argues that since scripture grounds our obligations in self-interest (rewards/punishments), this amounts to ethical egoism.

What should we make of this claim? Is ethical egoism the correct normative theory from a biblical point of view? My purpose in what follows is to show why ethical egoism is a defective normative ethical theory and, given this conclusion, to offer ways to understand biblical self-interest that do not entail the truth of ethical egoism. In what follows, I will, first, clarify the precise nature of ethical egoism; second, summarize the main types of arguments for and against ethical egoism in the literature and conclude that ethical egoism is inadequate; third, offer a set of distinctions for understanding biblical self-interest while avoiding ethical egoism.

The most plausible form of ethical egoism, embraced by philosophers such as Ayn Rand and John Hospers, is called universal or impersonal rule-egoism (hereafter, simply ethical egoism). Since Hospers is the most prominent philosopher to advocate ethical egoism, his definition is the most pertinent: each person has a moral duty to follow those moral rules that will be in the agent's maximal self-interest over the long haul.5 For the ethical egoist, one has a duty to follow correct moral rules. And the factor that makes a rule a correct one is that, if followed, it will be in the agent's own best interests in the long run. Each person ought to advance his own self-interests, and that is the sole foundation of morality.

Ethical egoism is sometimes confused and identified with various distinct issues. First, there is individual or personal ethical egoism, which says everyone has a duty to act so as to serve my self-interests. Here, everyone is morally obligated to serve the speaker's long-term best interests. Second, there is psychological egoism, roughly, the idea that each person can only do that act which the person takes to maximize his or her own self-interest. Psychological egoism is a descriptive thesis about motivation to the effect that we can only act on motives that are in our own self-interests. As we shall see shortly, psychological egoism is sometimes used as part of an argument for ethical egoism, but the two are distinct theses. (These theses are restated schematically below, after the remaining distinctions.)

Third, ethical egoism is not the same thing as egotism, an irritating character trait of always trying to be the center of attention. Fourth, it is not the same as what is sometimes called being a wanton. A wanton has no sense of duty at all, but only acts to satisfy his or her own desires. The only conflict the wanton knows is that between two or more desires he cannot simultaneously satisfy (e.g. to eat more and lose weight). The wanton knows nothing about duty. Arguably, animals are wantons. Fifth, ethical egoism is not to be confused with being an egoist, i.e. being someone who believes that the sole worth of an act is its fairly immediate benefits to the individual himself. With this understanding of ethical egoism as a backdrop, let us look at the arguments for and against ethical egoism that have been preeminent in the literature. A detailed treatment of these arguments is not possible here, but by looking briefly at the main considerations usually brought to bear on ethical egoism, a feel for its strengths and weaknesses as a normative ethical theory emerges.
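Before turning to those arguments, it may help to restate the three central theses schematically. The notation below is added here for clarity and is not Moreland's own; A ranges over agents, R over moral rules, and S names the speaker.

```latex
% Schematic restatement of the three theses (added for clarity; not the paper's own notation).
\begin{align*}
\text{Universal rule-egoism:}\quad & \forall A\,\forall R\ \big[\, A \text{ ought to follow rule } R
  \iff \text{following } R \text{ maximizes } A\text{'s long-run self-interest} \,\big] \\
\text{Individual ethical egoism:}\quad & \forall A\ \big[\, A \text{ ought to act so as to serve } S\text{'s self-interest} \,\big],
  \quad S = \text{the speaker} \\
\text{Psychological egoism:}\quad & \forall A\ \big[\, A \text{ can act only on motives that } A \text{ takes to serve } A\text{'s own self-interest} \,\big]
\end{align*}
```

The first is a normative claim about what makes moral rules correct, the second indexes everyone's duties to a single person, and the third is a descriptive claim about motivation rather than a claim about duty at all.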

Among the arguments for ethical egoism, two have distinguished themselves, at least in textbook treatments of the position.6 First, it is argued that ethical egoism follows from psychological egoism in this way: psychological egoism is true, and this implies that we always act egoistically and cannot help doing so. This is a fact about human motivation and action. Further, ought implies can. If I ought to do x, if I have a duty to do x, then I must be able to do x. If I cannot do something, then I have no duty or responsibility to do it. Applied to egoism, this means that since I can act egoistically, then I have a duty to do so, and since I cannot act non-egoistically, then I have no duty to do so. Thus, ethical egoism is the correct picture of moral obligation in keeping with what we know about human motivation.

Does this argument work? Most philosophers have not thought so. First, the principle of psychological egoism, viz. that we always act to maximize our own self-interest, is ambiguous. So stated, the principle fails to make a distinction between the result of an act vs. the intent of an act. If it is understood in the former way it is irrelevant, and if taken in the latter way it is false. If the statement merely asserts that, as a matter of fact, the result of our actions is the maximization of self-interest, then this does not imply ethical egoism. Ethical egoism is the view that the thing which morally justifies an act is the agent's intent to maximize his own self-interests. So the mere psychological fact (if it is a fact) that people only do those acts that result in their own satisfaction proves nothing.

On the other hand, if the statement claims that we always act solely with the intent to satisfy our own desires, then this claim is simply false. Every day we are aware of doing acts with the sole intent of helping someone else, of doing something just because we think it is the right thing to do, and of expressing virtuous, other-centered behavior. As Christian philosopher Joseph Butler (1692-1752) argued:

Mankind has various instincts and principles of action as brute creatures have; some leading most directly and immediately to the good of the community, and some most directly to private good ... [I]t is not a true representation of mankind, to affirm that they are wholly governed by self-love, the love of power and sensual appetites ... it is manifest fact, that the same persons, the generality, are frequently influenced by friendship, compassion, gratitude; and liking of what is fair and just, takes its turn amongst the other motives of action.7

Furthermore, it is not even true that we always try to do what we want or what we think is in our self-interests. We sometimes experience akrasia (weakness of will) when we fail to do or even try to do what we want (see Rom 7:15-25). And we sometimes do (or try to do) our duty even when we don't want to do it. These points appear to be facts about human action.

Second, this argument for ethical egoism suffers from what has been called the paradox of hedonism. Often, the best way to maximize self-interest, say, to get happiness and the satisfaction of desire, is not to aim at it. Happiness is not usually achieved as an intended goal; rather, it is a by-product of a life well lived and of doing what is right. If people always act in order to gain happiness, then it will remain forever elusive. Thus, psychological egoism contains a paradox when viewed as a model of human intention and action.

Finally, as a model of human action, psychological egoism rules out the possibility of libertarian freedom of the will. Briefly put, if libertarian freedom of the will is the correct view of human action, then two implications follow: 1) no set of internal states (e.g. desires, beliefs, emotions) is sufficient to produce behavior, and 2) the agent himself must spontaneously exercise his causal powers and act for the sake of reasons which function as teleological goals. For libertarians, a free act is never determined by any particular reason, including desire. Thus, on this view, psychological egoism is false if taken as a total account of human action because it implies that a person must always act for a self-interested reason. Libertarian freedom is controversial and not everyone accepts this model of human action. But the point is that for those who do, it counts as a counter-argument to psychological egoism.

A second argument for ethical egoism is called the "closet utilitarian" position. Some point out that if everyone acted in keeping with ethical egoism, the result would be the maximization of happiness for the greatest number of people. If acted upon, ethical egoism, as a matter of fact, leads to the betterment of humanity. There are two main problems with this argument. First, it amounts to a utilitarian justification of ethical egoism. Utilitarianism is a normative ethical theory to the effect that a moral action or rule is correct if and only if performing that act or following that rule maximizes the greatest amount of non-moral good vs. bad for the greatest number compared to alternative acts or rules open to the agent. While both are consequentialist in orientation, utilitarianism and ethical egoism are nevertheless rival normative theories. It is inconsistent, therefore, for someone to use a rival theory, in this case utilitarianism, as the moral justification for ethical egoism. If one is an ethical egoist, why should he or she care about the greatest good for the greatest number for its own sake, and not merely because such caring would itself lead to greater satisfaction of one's individual desires? Second, the claim seems to be factually false. Is it really the case that if everyone acted according to ethical egoism, it would maximize everyone's happiness? Surely not. Sometimes self-sacrifice is needed to maximize happiness for the greatest number, and this argument for ethical egoism cannot allow for such self-denial.

There are other arguments for ethical egoism, but these two have been the most central for those who advocate this normative theory. As we have seen, both arguments fail. By contrast, the main arguments against ethical egoism seem to be strong enough to justify rejecting the system as an adequate normative theory.

Among the arguments against ethical egoism, three are most prominent. First is the publicity objection. Moral principles must serve as action guides that inform moral situations. Most moral situations involve more than one person and, in this sense, are public situations. Thus, moral action guides must be teachable to others so they can be publicly used principles that help us in our interpersonal moral interactions. However, according to ethical egoism itself, there is a possible world where it is immoral for me to teach others to embrace ethical egoism because that could easily turn out not to be in my own self-interest. It may be better for me if others always acted altruistically. Thus, it could be immoral for one to go public and teach ethical egoism to others and, if so, this would violate one of the necessary conditions for a moral theory, namely, that it be teachable to others.

Philosopher Fred Feldman has offered a rejoinder to this argument.8 He claims that we have no good reason to believe that a moral doctrine needs to be consistently promulgatable. Why, he asks, should we have to be able to teach a moral doctrine to others? Someone could consistently hold to the following moral notion P as a part of his overall moral system: it is never right to promulgate anything. Unfortunately, this response fails because it does not capture the public nature of moral principles (or normative ethical theories) insofar as they serve as action guides to adjudicate interpersonal moral conflict. How could the principle "it is never right to promulgate anything" serve as an action guide sufficient to deal with the various aspects of duty, virtue, and rights that constitute much of the point of action guides in the first place?

Moreover, this response fails to take into account the universalizability of moral rules. If I should never promulgate anything, then this implies that I should not teach something to someone else. But there does not seem to be a clear moral difference in this case between others and myself. To be consistent, then, I should not proclaim this moral principle to myself. Perhaps I should try to hide from myself the fact that I accept this rule. This implies, among other things, that if I hold to P as a moral principle that should be universalized, then, applying P to myself, I would no longer have moral grounds for continuing to embrace P on the basis of reasons known to me or for making P known to myself. I should do my best to forget P or talk myself out of believing P. On the other hand, if I do not think P should be universalized, then in what sense is P a moral principle (since universalizability is most likely a necessary condition for a principle counting as moral)?

A second argument against ethical egoism is called the paradox of egoism. Some things, e.g. altruism, deep love, genuine friendship, are inconsistent with ethical egoism. Why? Because these features of a virtuous, moral life require us not to seek our own interests but, rather, those of others. Moreover, ethical egoism would seem to imply that helping others at one's own expense (and other acts of self-sacrifice) is wrong if it is not in my long-term self-interest to do so. Thus, egoism would seem to rule out important, central features of the moral life. The main point of a normative moral theory is to explain and not to eliminate what we already know to be central facets of morality prior to ethical theorizing. Furthermore, in order to reach the goal of egoism (e.g., personal happiness), one must give up egoism and be altruistic in love, friendship, and other ways. Thus, egoism is paradoxical in its own right and it eliminates key aspects of the moral life.

Some respond by claiming that altruism is fully consistent with ethical egoism. Hospers argues that, according to ethical egoism, we ought to do acts that benefit others because that is in our own self-interests.9 In a similar vein, Fred Feldman asserts that egoism allows us to perform altruistic actions provided that such actions are ultimately in our own self-interests.10 But this response fails to distinguish pseudo-altruism from genuine altruism.

Genuine altruism requires that an altruistic act have, as its sole, or at least main intent, the benefit of the other. An act whose sole or ultimate intent is self-interest but which, nevertheless, does result in the benefit of others is not genuine altruism. If you found out that someone loved you or acted altruistically toward you solely or ultimately with the intent of benefiting himself, then you would not count that as genuine love or altruism even if the act happened to benefit you in some way. Thus, egoistic altruism is a contradiction in terms. Ethical egoism is consistent with pseudo-altruism but not with genuine altruism.

Finally, a third objection claims that ethical egoism leads to inconsistent outcomes. A moral theory must allow for moral rules that are public and universalizable. But ethical egoism could lead to situations where this is not the case. How? Consider two persons A and B in a situation where they have a conflict of interest. For example, suppose there was only one kidney available for transplant, that A and B both need it, and that each will die without the transplant. According to universal ethical egoism, A ought to act in his own self-interests and prescribe that his desires come out on top. A has a duty to secure the kidney and thwart B's attempts to do the same. This would seem to imply that A should prescribe that B has a duty to act in A's self-interest. Of course B, according to universal ethical egoism, has from his perspective a duty to act in his own self-interest. But now a contradiction arises because ethical egoism implies that B has a duty both to give the kidney to A and to obtain it himself.

Jesse Kalin has responded to this argument by claiming that, as an ethical egoist, A should not hold that B should act in A's self-interest, but in B's own self-interest.11 This would seem to solve the problem of contradictory duty above by rejecting individual ethical egoism (everyone should act in my self-interest) in favor of the universal version (everyone should act in his own self-interest). But this way of stating ethical egoism does not seem to capture the egoistic spirit of the theory, because it leaves open the question as to why egoist A would need to hold that B should act in B's interests and not in A's. In other words, it may not be in A's own self-interests to hold to universal, as opposed to individual, ethical egoism.

Moreover, there is still a problem for this formulation of ethical egoism, which can be brought out as follows: A holds that B has a duty to obtain the kidney for himself, have his interests come out on top, and, thus, harm A. But in this case, ethical egoism still seems to imply an inconsistent posture on A's (and B's) part, namely, that A thinks that B has a duty to get the kidney and harm A but that A has a duty to thwart B. Any moral theory that implies that someone has a moral duty to keep others from doing their moral duty is surely in trouble, so the objection goes. And it is hard to see how an ethical egoist A could claim that someone else had a duty to harm A himself.

Not everyone accepts this argument. Following Kalin, Louis Pojman claims that we often find it to be the case that we have a duty to thwart what is the duty of others; e.g., in a war one soldier has a duty to thwart another's efforts to do his duty to win. In a case like this, soldiers on different sides do not believe that the other side has adequate moral grounds for being at war.12 If we separate beliefs about ethical situations from desires, so the response goes, then one person can believe the other has a duty to win the war or get the kidney, but the person can also desire these objectives for himself and act on those desires. In general, the belief that B ought to do x does not imply that A wants B to do x.

What should we make of this response? First, the soldier example fails because it does not distinguish between subjective and objective duty. A subjective duty is one someone has when he has done his best to discover what is and is not the right thing to do. If someone sincerely and conscientiously tries to ascertain what is right, and acts on this, then he has fulfilled his subjective duty and, in a sense, is praiseworthy. But people can be sincerely wrong and fail to live up to their objective duty (the truly correct thing to do from God's perspective, the overriding moral obligation when all things, including prima facie duties, have been taken into account) even if they have tried to do their best. Admittedly, it is not always easy to determine what the correct objective duty is in a given case. But this is merely an epistemological point and, while valid, it does not negate the legitimacy of the distinction between subjective and objective duty.

Applied to the question at hand, soldier A could only claim that soldier B has both a prima facie duty and a subjective duty to obey his country. But A could also believe that B has an objective duty to do so only if B's country is, in fact, conducting a morally justified war. Now either A or B is on the right side of the war, even though it may be hard to tell which side is correct. Thus, A and B could believe that only one of them actually has an objective duty to fight and thwart the other. So the war example does not give a genuine case where A believes B has an (objective) duty to fight and that he has to thwart B.

Second, what should we make of the claim that we should separate our beliefs about another's duty from the desire to see that duty done? For one thing, a main point of a moral theory is to describe what a virtuous person is and how we can become such persons. Now, one aspect of a virtuous person is that there is a harmony and unity between desire and duty. A virtuous person desires that the objective moral right be done. Such a person is committed to the good and the right. With this in mind, it becomes clear that ethical egoism, if consistently practiced, could produce fragmented, non-virtuous individuals who believe one thing about duty (e.g., A believes B ought to do x) but who desire something else altogether (e.g., A does not desire B to do x).

However, if we grant that the ethical egoist's distinction between beliefs and desires is legitimate from a moral point of view, then this distinction does resolve the claim that ethical egoism leads to a conflict of desire (e.g., A desires the kidney and that B obtain the kidney), since it implies that A believes that B has a duty to obtain the kidney but only desires that he himself have it. Nevertheless, this misses the real point of the objection to ethical egoism, which is not that ethical egoism straightforwardly leads to a conflict of desire. Rather, the objection shows that ethical egoism leads to an unresolvable conflict of moral beliefs and moral duty. If A and B are ethical egoists, then A believes that it is wrong for B to have the kidney but also that it is B's duty to try to obtain it. But how can A consistently believe that B has a duty to do something wrong? And how can A have an objective duty to thwart B's objective duty?

It would seem, then, that ethical egoism should be rejected as a normative ethical theory. Yet legitimate self-interest is part of biblical teaching, e.g. in the passages relating moral obligation to rewards and punishments. If we should not understand these texts as implicitly affirming ethical egoism, how should we understand the self-interest they apparently advocate? I do not think that exegesis alone can solve this problem because the context and grammar of the passages are usually not precise enough to settle the philosophical issue before us. However, if we assume with the majority of thinkers that deontological and virtue ethics, and not ethical egoism, are the correct normative theories implied by Scripture, then we have a set of distinctions that provide a number of legitimate ways of understanding biblical self-interest.

To begin with, we need to distinguish between self-benefit as a by-product of an act vs. self-interest as the sole intent of an act. Scriptural passages that appeal to self-interest may simply be pointing out that if you intentionally do the right thing, then a good by-product of this will be rewards of various kinds. It could be argued against philosopher Philip R. West (mentioned earlier) that these passages do not clearly use self-interest as the sole legitimate intent of a moral action.

This observation relates to a second distinction between a motive and a reason. Put roughly, a motive is some state within a person that influences and moves him to action. By contrast, a reason is something that serves to rationally justify some belief that one has or some action one does; a reason for believing or doing x is an attempt to cite something that makes it likely that x is true or that x should be done. In this context, just because something, say self-interest, serves as a motive for an action, it does not follow that it also serves as the reason which justifies the action in the first place. Self-interest may be a legitimate motive for moral action, but, it could be argued, God's commands, the objective moral law, etc. could be rationally cited as the things that make an act our duty in the first place. The Bible may be citing self-interest as a motive for action and not as the reason for what makes the act our duty.

Moreover, even if Scripture is teaching that self-interest is a reason for doing some duty, it may be offering self-interest as a prudential and not a moral reason for doing the duty. In other words, the Bible may be saying that it is wise, reasonable, and a matter of good judgment to avoid hell and seek rewards without claiming that these considerations are moral reasons for acting according to self-interest.13 In sum, it could be argued that Scripture can be understood as advocating self-interest as a by-product and not an intent for action, as a motive and not a reason, or as a prudential and not a moral reason. If this is so, then these scriptural ideas do not entail ethical egoism.

Second, even if scripture teaches that self-interest contributes to making something my moral duty, ethical egoism still does not follow. For one thing, ethical egoism teaches that an act is moral if and only if it maximizes my own self-interests. Ethical egoism teaches that self-interest is both necessary and sufficient for something to be my duty. However, it could be argued that egoistic factors, while not alone morally relevant to an act (other things like self-sacrifice or obeying God for its own sake are relevant as well), nevertheless, are at least one feature often important for assessing the moral worth of an act. Moral duty is not exhausted by self-interest as ethical egoism implies, but self-interest can be a legitimate factor in moral deliberation and scripture may be expressing this point.

Additionally, it is likely that the precise nature of self-interest contained in scripture is different in two ways from that which forms part of ethical egoism. For one thing, according to ethical egoism, the thing that makes an act right is that it is in my self-interest. The important value-making property here is the fact that something promotes the first person interests of the actor. Here, the moral agent attends to himself precisely as being identical to himself and to no one else.

By contrast, the scriptural emphasis on self-interest most likely grounds the appropriateness of such interest, not in the mere fact that these interests are mine, but in the fact that I am a creature of intrinsic value made in God's image and, as such, ought to care about what happens to me. Here I seek my own welfare not because it is my own, but because of what I am, viz. a creature with high intrinsic value. Consider a possible world where human persons have no value whatever (or where human counterparts with no intrinsic value exist). In that world, ethical egoism would still legislate self-interest, but the second view under consideration (that self-interest follows from the fact that I am a creature of value) would not, because the necessary condition for self-interest (being a creature of intrinsic value) does not obtain in that world.

There is a second way that the nature of self-interest in Scripture and in ethical egoism differs. As C. S. Lewis and C. Stephen Evans have argued, there are different kinds of rewards, and some are proper because they have a natural, intrinsic connection with the things we do to earn them and because they are expressions of what God made us to be by nature.14 In such cases, these rewards provide a reason to do an activity which does not despoil the character of the activity itself. Money is not a natural reward for love (one is mercenary to marry for money) because money is foreign to the desires that ought to accompany love. By contrast, victory is a natural reward for battle. It is a proper reward because it is not tacked onto the activity for which the reward is given; rather, victory is the consummation of and intrinsically related to the activity itself.

According to Lewis, the desire for heaven and rewards is a natural desire expressing what we, by nature, are. We were made to desire honor before God, to be in his presence, and to hunger to enjoy the rewards he will offer us, and these things are the natural consummations of our activity on earth. Thus, the appropriateness of seeking heaven and rewards derives from the fact that these results are genuine expressions of our natures and are the natural consummation of our activities for God. By contrast, according to ethical egoism, the value of results has nothing to do with our natures or with natural consummations of activities. Rather, the worth of those outcomes is solely a function of the fact that they benefit the agent himself.

In sum, self-interest is part of biblical teaching, especially in association with rewards and punishments. But ethical egoism neither captures adequately the nature of biblical self-interest nor is it the best normative ethical theory in its own right. As Christians, we should include self-interest as an important part of our moral and religious lives but without advocating ethical egoism in the process.

With degrees in philosophy, theology, and chemistry, Dr. Moreland brings erudition, passion, and his distinctive ebullience to the end of loving God with all of one's mind. Moreland received his B.S. in Chemistry (with honors) from the University of Missouri, his M.A. in Philosophy (with highest honors) from the University of California, Riverside, his Th.M. in Theology (with honors) from Dallas Theological Seminary and his Ph.D. in Philosophy from the University of Southern California. Dr. Moreland has taught theology and philosophy at several schools throughout the U.S. He is currently Distinguished Professor of Philosophy at Biola University's Talbot School of Theology.

Read more:

Ethical Egoism and Biblical Self-Interest | Papers at ...

Posted in Ethical Egoism | Comments Off on Ethical Egoism and Biblical Self-Interest | Papers at …

Singularity Q&A | KurzweilAI

Posted: at 10:50 am

Originally published in 2005 with the launch of The Singularity Is Near.

Questions and Answers

So what is the Singularity?

Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge. Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity, full-immersion virtual reality incorporating all of the senses (like The Matrix), experience beaming (like Being John Malkovich), and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned.

And that's the Singularity?

No, that's just the precursor. Nonbiological intelligence will have access to its own design and will be able to improve itself in an increasingly rapid redesign cycle. We'll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it. That will mark the Singularity.

When will that occur?

I set the date for the Singularity (representing a profound and disruptive transformation in human capability) as 2045. The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.

Why is this called the Singularity?

The term "Singularity" in my book is comparable to the use of this term by the physics community. Just as we find it hard to see beyond the event horizon of a black hole, we also find it difficult to see beyond the event horizon of the historical Singularity. How can we, with our limited biological brains, imagine what our future civilization, with its intelligence multiplied trillions-fold, will be capable of thinking and doing? Nevertheless, just as we can draw conclusions about the nature of black holes through our conceptual thinking, despite never having actually been inside one, our thinking today is powerful enough to have meaningful insights into the implications of the Singularity. That's what I've tried to do in this book.

Okay, let's break this down. It seems a key part of your thesis is that we will be able to capture the intelligence of our brains in a machine.

Indeed.

So how are we going to achieve that?

We can break this down further into hardware and software requirements. In the book, I show how we need about 10 quadrillion (10^16) calculations per second (cps) to provide a functional equivalent to all the regions of the brain. Some estimates are lower than this by a factor of 100. Supercomputers are already at 100 trillion (10^14) cps, and will hit 10^16 cps around the end of this decade. Several supercomputers with 1 quadrillion cps are already on the drawing board, with two Japanese efforts targeting 10 quadrillion cps around the end of the decade. By 2020, 10 quadrillion cps will be available for around $1,000. Achieving the hardware requirement was controversial when my last book on this topic, The Age of Spiritual Machines, came out in 1999, but is now pretty much a mainstream view among informed observers. Now the controversy is focused on the algorithms.
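For readers who want to play with this kind of timeline reasoning, the short sketch below is an illustration added here, not part of the original interview: it computes how long a capability takes to grow by a given factor under an assumed doubling time. The 10^14 and 10^16 cps figures come from the passage; the doubling times are placeholder assumptions.

```python
import math

def years_to_reach(target, current, doubling_years=1.0):
    """Years for `current` to grow to `target`, assuming steady exponential doubling."""
    return doubling_years * math.log2(target / current)

# Going from 10^14 cps to 10^16 cps is a factor of 100, i.e. log2(100) ~= 6.6 doublings.
print(years_to_reach(1e16, 1e14, doubling_years=1.0))  # ~6.6 years at one doubling per year
print(years_to_reach(1e16, 1e14, doubling_years=0.5))  # ~3.3 years at two doublings per year
```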

And how will we recreate the algorithms of human intelligence?

To understand the principles of human intelligence we need to reverse-engineer the human brain. Here, progress is far greater than most people realize. The spatial and temporal (time) resolution of brain scanning is also progressing at an exponential rate, roughly doubling each year, like most everything else having to do with information. Scanning tools can now see individual interneuronal connections and watch them fire in real time. Already, we have mathematical models and simulations of a couple dozen regions of the brain, including the cerebellum, which comprises more than half the neurons in the brain. IBM is now creating a simulation of about 10,000 cortical neurons, including tens of millions of connections. The first version will simulate the electrical activity, and a future version will also simulate the relevant chemical activity. By the mid-2020s, it's conservative to conclude that we will have effective models for all of the brain.

So at that point we'll just copy a human brain into a supercomputer?

I would rather put it this way: At that point, we'll have a full understanding of the methods of the human brain. One benefit will be a deep understanding of ourselves, but the key implication is that it will expand the toolkit of techniques we can apply to create artificial intelligence. We will then be able to create nonbiological systems that match human intelligence in the ways that humans are now superior, for example, our pattern-recognition abilities. These superintelligent computers will be able to do things we are not able to do, such as share knowledge and skills at electronic speeds.

By 2030, a thousand dollars of computation will be about a thousand times more powerful than a human brain. Keep in mind also that computers will not be organized as discrete objects as they are today. There will be a web of computing deeply integrated into the environment, our bodies and brains.

You mentioned the AI tool kit. Hasn't AI failed to live up to its expectations?

There was a boom and bust cycle in AI during the 1980s, similar to what we saw recently in e-commerce and telecommunications. Such boom-bust cycles are often harbingers of true revolutions; recall the railroad boom and bust in the 19th century. But just as the Internet bust was not the end of the Internet, the so-called AI Winter was not the end of the story for AI either. There are hundreds of applications of narrow AI (machine intelligence that equals or exceeds human intelligence for specific tasks) now permeating our modern infrastructure. Every time you send an email or make a cell phone call, intelligent algorithms route the information. AI programs diagnose electrocardiograms with an accuracy rivaling doctors, evaluate medical images, fly and land airplanes, guide intelligent autonomous weapons, make automated investment decisions for over a trillion dollars of funds, and guide industrial processes. These were all research projects a couple of decades ago. If all the intelligent software in the world were to suddenly stop functioning, modern civilization would grind to a halt. Of course, our AI programs are not intelligent enough to organize such a conspiracy, at least not yet.

Why don't more people see these profound changes ahead?

Hopefully after they read my new book, they will. But the primary failure is the inability of many observers to think in exponential terms. Most long-range forecasts of what is technically feasible in future time periods dramatically underestimate the power of future developments because they are based on what I call the "intuitive linear" view of history rather than the "historical exponential" view. My models show that we are doubling the paradigm-shift rate every decade. Thus the 20th century was gradually speeding up to the rate of progress at the end of the century; its achievements, therefore, were equivalent to about twenty years of progress at the rate in 2000. We'll make another twenty years of progress in just fourteen years (by 2014), and then do the same again in only seven years. To express this another way, we won't experience one hundred years of technological advance in the 21st century; we will witness on the order of 20,000 years of progress (again, when measured by the rate of progress in 2000), or about 1,000 times greater than what was achieved in the 20th century.

The exponential growth of information technologies is even greater: we're doubling the power of information technologies, as measured by price-performance, bandwidth, capacity and many other types of measures, about every year. That's a factor of a thousand in ten years, a million in twenty years, and a billion in thirty years. This goes far beyond Moore's law (the shrinking of transistors on an integrated circuit, allowing us to double the price-performance of electronics each year). Electronics is just one example of many. As another example, it took us 14 years to sequence HIV; we recently sequenced SARS in only 31 days.
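As a back-of-envelope check on the arithmetic in the last two answers, the snippet below is added for illustration; it is not from the interview and assumes clean, uninterrupted doubling. It reproduces the factor-of-a-thousand/million/billion figures and the rough "twenty years" and "20,000 years" of progress measured against the year-2000 rate.

```python
# Claim: annual doubling gives ~1,000x in 10 years, ~1,000,000x in 20, ~1,000,000,000x in 30.
for years in (10, 20, 30):
    print(f"{years} years of annual doubling -> factor of {2 ** years:,}")

# Claim: with the paradigm-shift rate doubling every decade (and progress measured in
# "years at the year-2000 rate"), the 20th century amounts to roughly 20 such years
# and the 21st century to roughly 20,000.
twentieth = sum(10 * 2 ** -d for d in range(10))       # each earlier decade ran about half as fast
twenty_first = sum(10 * 2 ** d for d in range(1, 11))  # each coming decade runs about twice as fast
print(round(twentieth, 1), round(twenty_first))        # ~20.0 and ~20,460
```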

So this acceleration of information technologies applies to biology as well?

Absolutely. It's not just computer devices like cell phones and digital cameras that are accelerating in capability. Ultimately, everything of importance will be comprised essentially of information technology. With the advent of nanotechnology-based manufacturing in the 2020s, we'll be able to use inexpensive table-top devices to manufacture on-demand just about anything from very inexpensive raw materials, using information processes that will rearrange matter and energy at the molecular level.

We'll meet our energy needs using nanotechnology-based solar panels that will capture the energy in 0.03 percent of the sunlight that falls on the Earth, which is all we need to meet our projected energy needs in 2030. We'll store the energy in highly distributed fuel cells.

I want to come back to both biology and nanotechnology, but how can you be so sure of these developments? Isn't technical progress on specific projects essentially unpredictable?

Predicting specific projects is indeed not feasible. But the result of the overall complex, chaotic evolutionary process of technological progress is predictable.

People intuitively assume that the current rate of progress will continue for future periods. Even for those who have been around long enough to experience how the pace of change increases over time, unexamined intuition leaves one with the impression that change occurs at the same rate that we have experienced most recently. From the mathematician's perspective, the reason for this is that an exponential curve looks like a straight line when examined for only a brief duration. As a result, even sophisticated commentators, when considering the future, typically use the current pace of change to determine their expectations in extrapolating progress over the next ten years or one hundred years. This is why I describe this way of looking at the future as the "intuitive linear" view. But a serious assessment of the history of technology reveals that technological change is exponential. Exponential growth is a feature of any evolutionary process, of which technology is a primary example.
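To make the "looks like a straight line up close" point concrete, here is a small illustration added here; the doubling-every-year baseline is an assumption for the example, not a figure from the text. It compares an exponential with its straight-line extrapolation from today.

```python
import math

def exponential(t, doubling_time=1.0):
    """Capability after t years if it doubles every `doubling_time` years (baseline 1.0)."""
    return 2 ** (t / doubling_time)

def linear_extrapolation(t, doubling_time=1.0):
    """Straight-line guess with the same starting value and the same initial slope."""
    return 1.0 + (math.log(2) / doubling_time) * t

for t in (0.1, 1, 10, 100):
    print(f"t = {t:>5} yr: exponential {exponential(t):.3g}, linear guess {linear_extrapolation(t):.3g}")
# Over a tenth of a year the two curves are nearly indistinguishable; over a century the
# linear guess (~70) trails the exponential (~1.3e30) by about 28 orders of magnitude.
```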

As I show in the book, this has also been true of biological evolution. Indeed, technological evolution emerges from biological evolution. You can examine the data in different ways, on different timescales, and for a wide variety of technologies, ranging from electronic to biological, as well as for their implications, ranging from the amount of human knowledge to the size of the economy, and you get the same exponential (not linear) progression. I have over forty graphs in the book from a broad variety of fields that show the exponential nature of progress in information-based measures. For the price-performance of computing, this goes back over a century, well before Gordon Moore was even born.

Aren't there a lot of predictions of the future from the past that look a little ridiculous now?

Yes, any number of bad predictions from other futurists in earlier eras can be cited to support the notion that we cannot make reliable predictions. In general, these prognosticators were not using a methodology based on a sound theory of technology evolution. I say this not just looking backwards now. I've been making accurate forward-looking predictions for over twenty years based on these models.

But how can it be the case that we can reliably predict the overall progression of these technologies if we cannot even predict the outcome of a single project?

Predicting which company or product will succeed is indeed very difficult, if not impossible. The same difficulty occurs in predicting which technical design or standard will prevail. For example, how will the wireless-communication protocols Wimax, CDMA, and 3G fare over the next several years? However, as I argue extensively in the book, we find remarkably precise and predictable exponential trends when assessing the overall effectiveness (as measured in a variety of ways) of information technologies. And as I mentioned above, information technology will ultimately underlie everything of value.

But how can that be?

We see examples in other areas of science of very smooth and reliable outcomes resulting from the interaction of a great many unpredictable events. Consider that predicting the path of a single molecule in a gas is essentially impossible, but predicting the properties of the entire gas (comprised of a great many chaotically interacting molecules) can be done very reliably through the laws of thermodynamics. Analogously, it is not possible to reliably predict the results of a specific project or company, but the overall capabilities of information technology, comprised of many chaotic activities, can nonetheless be dependably anticipated through what I call the law of accelerating returns.

What will the impact of these developments be?

Radical life extension, for one.

Sounds interesting, how does that work?

In the book, I talk about three great overlapping revolutions that go by the letters GNR, which stands for genetics, nanotechnology, and robotics. Each will provide a dramatic increase to human longevity, among other profound impacts. We're in the early stages of the genetics (also called biotechnology) revolution right now. Biotechnology is providing the means to actually change your genes: not just designer babies but designer baby boomers. We'll also be able to rejuvenate all of your body's tissues and organs by transforming your skin cells into youthful versions of every other cell type. Already, new drug development is precisely targeting key steps in the process of atherosclerosis (the cause of heart disease), cancerous tumor formation, and the metabolic processes underlying each major disease and aging process. The biotechnology revolution is already in its early stages and will reach its peak in the second decade of this century, at which point we'll be able to overcome most major diseases and dramatically slow down the aging process.

That will bring us to the nanotechnology revolution, which will achieve maturity in the 2020s. With nanotechnology, we will be able to go beyond the limits of biology, and replace your current human body version 1.0 with a dramatically upgraded version 2.0, providing radical life extension.

And how does that work?

The killer app of nanotechnology is nanobots, which are blood-cell sized robots that can travel in the bloodstream destroying pathogens, removing debris, correcting DNA errors, and reversing aging processes.

Human body version 2.0?

We're already in the early stages of augmenting and replacing each of our organs, even portions of our brains with neural implants, the most recent versions of which allow patients to download new software to their neural implants from outside their bodies. In the book, I describe how each of our organs will ultimately be replaced. For example, nanobots could deliver to our bloodstream an optimal set of all the nutrients, hormones, and other substances we need, as well as remove toxins and waste products. The gastrointestinal tract could be reserved for culinary pleasures rather than the tedious biological function of providing nutrients. After all, we've already in some ways separated the communication and pleasurable aspects of sex from its biological function.

And the third revolution?

The robotics revolution, which really refers to strong AI, that is, artificial intelligence at the human level, which we talked about earlier. We'll have both the hardware and software to recreate human intelligence by the end of the 2020s. We'll be able to improve these methods and harness the speed, memory capabilities, and knowledge-sharing ability of machines.

We'll ultimately be able to scan all the salient details of our brains from inside, using billions of nanobots in the capillaries. We can then back up the information. Using nanotechnology-based manufacturing, we could recreate your brain, or better yet reinstantiate it in a more capable computing substrate.

Which means?

Our biological brains use chemical signaling, which transmits information at only a few hundred feet per second. Electronics is already millions of times faster than this. In the book, I show how one cubic inch of nanotube circuitry would be about one hundred million times more powerful than the human brain. So we'll have more powerful means of instantiating our intelligence than the extremely slow speeds of our interneuronal connections.

So we'll just replace our biological brains with circuitry?

I see this starting with nanobots in our bodies and brains. The nanobots will keep us healthy, provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the Internet, and otherwise greatly expand human intelligence. But keep in mind that nonbiological intelligence is doubling in capability each year, whereas our biological intelligence is essentially fixed in capacity. As we get to the 2030s, the nonbiological portion of our intelligence will predominate.

The closest life extension technology, however, is biotechnology, isn't that right?

There's certainly overlap in the G, N and R revolutions, but that's essentially correct.

So tell me more about how genetics or biotechnology works.

As we are learning about the information processes underlying biology, we are devising ways of mastering them to overcome disease and aging and extend human potential. One powerful approach is to start with biology's information backbone: the genome. With gene technologies, we're now on the verge of being able to control how genes express themselves. We now have a powerful new tool called RNA interference (RNAi), which is capable of turning specific genes off. It blocks the messenger RNA of specific genes, preventing them from creating proteins. Since viral diseases, cancer, and many other diseases use gene expression at some crucial point in their life cycle, this promises to be a breakthrough technology. One gene we'd like to turn off is the fat insulin receptor gene, which tells the fat cells to hold on to every calorie. When that gene was blocked in mice, those mice ate a lot but remained thin and healthy, and actually lived 20 percent longer.

New means of adding new genes, called gene therapy, are also emerging that have overcome earlier problems with achieving precise placement of the new genetic information. One company I'm involved with, United Therapeutics, cured pulmonary hypertension in animals using a new form of gene therapy, and it has now been approved for human trials.

So we're going to essentially reprogram our DNA.

That's a good way to put it, but that's only one broad approach. Another important line of attack is to regrow our own cells, tissues, and even whole organs, and introduce them into our bodies without surgery. One major benefit of this therapeutic cloning technique is that we will be able to create these new tissues and organs from versions of our cells that have also been made younger (the emerging field of rejuvenation medicine). For example, we will be able to create new heart cells from your skin cells and introduce them into your system through the bloodstream. Over time, your heart cells get replaced with these new cells, and the result is a rejuvenated young heart with your own DNA.

Drug discovery was once a matter of finding substances that produced some beneficial effect without excessive side effects. This process was similar to early humans' tool discovery, which was limited to simply finding rocks and natural implements that could be used for helpful purposes. Today, we are learning the precise biochemical pathways that underlie both disease and aging processes, and are able to design drugs to carry out precise missions at the molecular level. The scope and scale of these efforts is vast.

But perfecting our biology will only get us so far. The reality is that biology will never be able to match what we will be capable of engineering, now that we are gaining a deep understanding of biology's principles of operation.

Isn't nature optimal?

Not at all. Our interneuronal connections compute at about 200 transactions per second, at least a million times slower than electronics. As another example, a nanotechnology theorist, Rob Freitas, has a conceptual design for nanobots that replace our red blood cells. A conservative analysis shows that if you replaced 10 percent of your red blood cells with Freitas' respirocytes, you could sit at the bottom of a pool for four hours without taking a breath.

If people stop dying, isn't that going to lead to overpopulation?

A common mistake that people make when considering the future is to envision a major change to today's world, such as radical life extension, as if nothing else were going to change. The GNR revolutions will result in other transformations that address this issue. For example, nanotechnology will enable us to create virtually any physical product from information and very inexpensive raw materials, leading to radical wealth creation. We'll have the means to meet the material needs of any conceivable size population of biological humans. Nanotechnology will also provide the means of cleaning up environmental damage from earlier stages of industrialization.

So we'll overcome disease, pollution, and poverty. Sounds like a utopian vision.

It's true that the dramatic scale of the technologies of the next couple of decades will enable human civilization to overcome problems that we have struggled with for eons. But these developments are not without their dangers. Technology is a double-edged sword; we don't have to look past the 20th century to see the intertwined promise and peril of technology.

What sort of perils?

G, N, and R each have their downsides. The existential threat from genetic technologies is already here: the same technology that will soon make major strides against cancer, heart disease, and other diseases could also be employed by a bioterrorist to create a bioengineered biological virus that combines ease of transmission, deadliness, and stealthiness, that is, a long incubation period. The tools and knowledge to do this are far more widespread than the tools and knowledge to create an atomic bomb, and the impact could be far worse.

So maybe we shouldn't go down this road.

It's a little late for that. But the idea of relinquishing new technologies such as biotechnology and nanotechnology is already being advocated. I argue in the book that this would be the wrong strategy. Besides depriving human society of the profound benefits of these technologies, such a strategy would actually make the dangers worse by driving development underground, where responsible scientists would not have easy access to the tools needed to defend us.

So how do we protect ourselves?

I discuss strategies for protecting against dangers from abuse or accidental misuse of these very powerful technologies in chapter 8. The overall message is that we need to give a higher priority to preparing protective strategies and systems. We need to put a few more stones on the defense side of the scale. I've given testimony to Congress on a specific proposal for a Manhattan-style project to create a rapid response system that could protect society from a new virulent biological virus. One strategy would be to use RNAi, which has been shown to be effective against viral diseases. We would set up a system that could quickly sequence a new virus, prepare an RNA interference medication, and rapidly gear up production. We have the knowledge to create such a system, but we have not done so. We need to have something like this in place before it's needed.
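
As a rough illustration of one early step in such a pipeline, the sketch below (not Kurzweil's actual proposal; the function names, thresholds, and example sequence are invented) scans a newly sequenced viral RNA fragment for 21-nucleotide siRNA candidate target sites and filters them by GC content, one common heuristic in real siRNA design:

def gc_fraction(seq: str) -> float:
    """Fraction of G/C bases in a sequence."""
    return sum(base in "GC" for base in seq) / len(seq)

def candidate_sirna_targets(genome: str, length: int = 21,
                            gc_min: float = 0.30, gc_max: float = 0.52):
    """Slide a window over the genome and keep sites whose GC content
    falls in a range generally considered favorable for RNAi silencing.
    (Window length and GC bounds are illustrative assumptions.)"""
    genome = genome.upper()
    for i in range(len(genome) - length + 1):
        site = genome[i:i + length]
        if set(site) <= set("ACGU") and gc_min <= gc_fraction(site) <= gc_max:
            yield i, site

if __name__ == "__main__":
    # Tiny made-up RNA fragment standing in for a freshly sequenced viral genome.
    fragment = "AUGGCUACGUUAGCAUCGAUCGGAUACGCUAGCUAAGCGUAUACG"
    for pos, site in candidate_sirna_targets(fragment):
        print(f"position {pos:3d}: {site}  (GC = {gc_fraction(site):.2f})")

A real rapid-response system would follow candidate selection with off-target screening, synthesis, formulation, and manufacturing scale-up, which is exactly the part Kurzweil argues needs to be prepared in advance.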

Ultimately, however, nanotechnology will provide a completely effective defense against biological viruses.

But doesn't nanotechnology have its own self-replicating danger?

Yes, but that potential won't exist for a couple more decades. The existential threat from engineered biological viruses exists right now.

Okay, but how will we defend against self-replicating nanotechnology?

There are already proposals for ethical standards for nanotechnology that are based on the Asilomar conference standards, which have worked well thus far in biotechnology. These standards will be effective against unintentional dangers. For example, we do not need to provide self-replication to accomplish nanotechnology manufacturing.

But what about intentional abuse, as in terrorism?

We'll need to create a nanotechnology immune system: good nanobots that can protect us from the bad ones.

Blue goo to protect us from the gray goo!

Yes, well put. And ultimately we'll need the nanobots comprising the immune system to be self-replicating. I've debated this particular point with a number of other theorists, but I show in the book why the nanobot immune system we put in place will need the ability to self-replicate. That's basically the same lesson that biological evolution learned.

Ultimately, however, strong AI will provide a completely effective defense against self-replicating nanotechnology.

Okay, what's going to protect us against a pathological AI?

Yes, well, that would have to be a yet more intelligent AI.

This is starting to sound like that story about the universe being on the back of a turtle, and that turtle standing on the back of another turtle, and so on all the way down. So what if this more intelligent AI is unfriendly? Another even smarter AI?

History teaches us that the more intelligent civilization, the one with the most advanced technology, prevails. But I do have an overall strategy for dealing with unfriendly AI, which I discuss in chapter 8.

Okay, so I'll have to read the book for that one. But aren't there limits to exponential growth? You know the story about rabbits in Australia; they didn't keep growing exponentially forever.

There are limits to the exponential growth inherent in each paradigm. Moore's law was not the first paradigm to bring exponential growth to computing, but rather the fifth. In the 1950s they were shrinking vacuum tubes to keep the exponential growth going, and then that paradigm hit a wall. But the exponential growth of computing didn't stop. It kept going, with the new paradigm of transistors taking over. Each time we can see the end of the road for a paradigm, it creates research pressure to create the next one. That's happening now with Moore's law, even though we are still about fifteen years away from the end of our ability to shrink transistors on a flat integrated circuit. We're making dramatic progress in creating the sixth paradigm, which is three-dimensional molecular computing.
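
A toy model of this point (all numbers are invented for illustration, not taken from the book): each paradigm's price-performance follows an S-curve that eventually saturates, but when successive S-curves are stacked, the overall envelope keeps growing roughly exponentially.

import math

def paradigm_scurve(t, midpoint, ceiling, rate):
    """Logistic S-curve: a paradigm's price-performance rises rapidly,
    then flattens as it approaches its physical ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def total_price_performance(year):
    """Sum of successive paradigms (hypothetical midpoints, ceilings, and
    rates): each new S-curve takes off roughly where the previous one
    saturates, so the combined curve keeps climbing."""
    paradigms = [
        # (midpoint year, ceiling, growth rate) -- all illustrative
        (1945, 1e3,  0.5),   # electromechanical / relays
        (1955, 1e5,  0.5),   # vacuum tubes
        (1965, 1e7,  0.5),   # discrete transistors
        (1985, 1e10, 0.4),   # integrated circuits (Moore's law era)
        (2030, 1e16, 0.4),   # three-dimensional molecular computing (speculative)
    ]
    return sum(paradigm_scurve(year, m, c, r) for m, c, r in paradigms)

if __name__ == "__main__":
    for year in range(1950, 2051, 10):
        print(year, f"{total_price_performance(year):.3e}")

The individual curves hit walls, as the vacuum-tube paradigm did, but the envelope does not, which is the pattern the interview describes.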

But isn't there an overall limit to our ability to expand the power of computation?

Yes, I discuss these limits in the book. The ultimate two-pound computer could provide 10^42 cps, which will be about 10 quadrillion (10^16) times more powerful than all human brains put together today. And that's if we restrict the computer to staying at a cold temperature. If we allow it to get hot, we could improve that by a factor of another 100 million. And, of course, we'll be devoting more than two pounds of matter to computing. Ultimately, we'll use a significant portion of the matter and energy in our vicinity. So, yes, there are limits, but they're not very limiting.
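
Where the "10 quadrillion times" figure comes from, assuming roughly 10^16 cps per human brain and roughly 10^10 living humans (estimates Kurzweil uses elsewhere, not stated in this exchange):

\[
\frac{10^{42}\ \text{cps}}{10^{10}\ \text{brains} \times 10^{16}\ \text{cps per brain}} = \frac{10^{42}}{10^{26}} = 10^{16}
\]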

And when we saturate the ability of the matter and energy in our solar system to support intelligent processes, what happens then?

Then we'll expand to the rest of the Universe.

Which will take a long time, I presume.

Well, that depends on whether we can use wormholes to get to other places in the Universe quickly, or otherwise circumvent the speed of light. If wormholes are feasible, and analyses show they are consistent with general relativity, we could saturate the universe with our intelligence within a couple of centuries. I discuss the prospects for this in chapter 6. But regardless of speculation on wormholes, we'll get to the limits of computing in our solar system within this century. At that point, we'll have expanded the powers of our intelligence by trillions of trillions.

Getting back to life extension, isn't it natural to age, to die?

Other natural things include malaria, Ebola, appendicitis, and tsunamis. Many natural things are worth changing. Aging may be natural, but I don't see anything positive in losing my mental agility, sensory acuity, physical limberness, sexual desire, or any other human ability.

In my view, death is a tragedy. It's a tremendous loss of personality, skills, knowledge, relationships. We've rationalized it as a good thing because that's really been the only alternative we've had. But disease, aging, and death are problems we are now in a position to overcome.

Wait, you said that the golden era of biotechnology was still a decade away. We don't have radical life extension today, do we?

Go here to read the rest:
Singularity Q&A | KurzweilAI

Posted in The Singularity | Comments Off on Singularity Q&A | KurzweilAI

Euthanasia Debate | Debate.org

Posted: June 25, 2016 at 11:01 am

Euthanasia Debate

Euthanasia is defined as the practice of ending a life prematurely in order to end pain and suffering. The process is also sometimes called mercy killing. Euthanasia can fall into several categories. Voluntary euthanasia is carried out with the permission of the person whose life is taken. Involuntary euthanasia is carried out without permission, such as in the case of a criminal execution. The moral and social questions surrounding these practices are among the most active fields of research in bioethics today. Many court cases, such as Gonzales v. Oregon and Baxter v. Montana, also surround this issue.

Voluntary euthanasia is typically performed when a person is suffering from a terminal illness and is in great pain. When the patient performs this procedure with the help of a doctor, the term assisted suicide is often used. This practice is legal in Belgium, the Netherlands, and Luxembourg. It is also legal in the states of Oregon, Washington, and Montana. Passive euthanasia is carried out by terminating a medication that is keeping a patient alive or by not performing a life-saving procedure. Active euthanasia involves the administration of a lethal drug or otherwise actively ending the life. These two types of procedures carry different moral and social issues.

There is a lot of controversy surrounding the issue of euthanasia and whether or not it should be legal. From a legal standpoint, the Encyclopedia of American Law categorizes mercy killing as a class of criminal homicide. Judicially, however, not all homicide is illegal: killing is seen as excusable when used as a criminal punishment, but inexcusable when carried out for any other reason. In most nations, euthanasia is considered criminal homicide; in the jurisdictions mentioned above, however, it is placed on the excusable side, alongside criminal punishment.

Arguments regarding the euthanasia debate often depend on the method used to take the life of the patient. The Oregon Death with Dignity Act made it legal for residents to request a lethal dose of medication from a doctor. This is seen in other jurisdictions as a criminal form of homicide. However, passive euthanasia through denial of drugs or procedures is considered legal in almost all jurisdictions. Those who argue for euthanasia feel that there is no moral difference between the two; those who are against it disagree.

Many arguments also hinge on religious beliefs. Many Christians believe that taking a life, for any reason, is interfering with God's plan and is comparable to murder. The most conservative of Christians are against even passive euthanasia. Some religious people do take the other side of the argument and believe that the drugs to end suffering early are God-given and should be used.

One of the main groups of people involved in the euthanasia debate is physicians. One survey in the United States recorded the opinions of over 10,000 medical doctors and found that sixteen percent would consider stopping a life-maintaining therapy at the recommendation of family or the patient, while fifty-five percent would never do so. The study also found that 46 percent of doctors believe that physician-assisted suicide should be allowed in some cases.

The controversy surrounding euthanasia involves many aspects of religion and the medical and social sciences. As this is one of the most studied fields of bioethics, one can rest assured that more studies will be performed to learn more about this issue and how best to address it.

Continued here:

Euthanasia Debate | Debate.org

Posted in Euthanasia | Comments Off on Euthanasia Debate | Debate.org

Euthanasia – New World Encyclopedia

Posted: at 11:01 am

Euthanasia (from Greek: εὐ, eu, "good," and θάνατος, thanatos, "death") is the practice of terminating the life of a human being or animal with an incurable disease, intolerable suffering, or a possibly undignified death in a painless or minimally painful way, for the purpose of limiting suffering. It is a form of homicide; the question is whether it should be considered justifiable or criminal.

Euthanasia refers both to the situation in which a substance is administered to a person with the intent to kill that person and, with basically the same intent, to the removal of someone from life support. There may be a legal divide between making someone die and letting someone die. In some instances, the first is (in some societies) defined as murder, while the other is simply allowing nature to take its course. Consequently, laws around the world vary greatly with regard to euthanasia and are constantly subject to change as cultural values shift and better palliative care or treatments become available. Thus, while euthanasia is legal in some nations, in others it is criminalized.

Of related note is the fact that suicide, or attempted suicide, is no longer a criminal offense in most states. This demonstrates that there is a consensus among the states in favor of self-determination; however, the majority of states hold that assisting in suicide is illegal and punishable even when there is written consent from the individual. The problem with written consent is that it is still not sufficient to show self-determination, as it could be coerced; if active euthanasia were to become legal, a process would have to be in place to assure that the patient's consent is fully voluntary.

Euthanasia has been used with several meanings:

The term euthanasia is used only in senses (6) and (7) in this article. When other people debate about euthanasia, they could well be using it in senses (1) through (5), or with some other definition. To make this distinction clearer, two other definitions of euthanasia follow:

There can be passive, non-aggressive, and aggressive euthanasia.

James Rachels has challenged both the use and moral significance of that distinction for several reasons:

To begin with a familiar type of situation, a patient who is dying of incurable cancer of the throat is in terrible pain, which can no longer be satisfactorily alleviated. He is certain to die within a few days, even if present treatment is continued, but he does not want to go on living for those days since the pain is unbearable. So he asks the doctor for an end to it, and his family joins in this request. Suppose the doctor agrees to withhold treatment. The justification for his doing so is that the patient is in terrible agony, and since he is going to die anyway, it would be wrong to prolong his suffering needlessly. But now notice this. If one simply withholds treatment, it may take the patient longer to die, and so he may suffer more than he would if more direct action were taken and a lethal injection given. This fact provides strong reason for thinking that, once the initial decision not to prolong his agony has been made, active euthanasia is actually preferable to passive euthanasia, rather than the reverse (Rachels 1975 and 1986).

There is also involuntary, non-voluntary, and voluntary euthanasia.

Mercy killing refers to killing someone to put them out of their suffering. The killer may or may not have the informed consent of the person killed. We shall use the term mercy killing only when there is no consent. Legally, mercy killing without consent is usually treated as murder.

Murder is intentionally killing someone in an unlawful way. There are two kinds of murder:

In most parts of the world, types (1) and (2) murder are treated identically. In other parts, type (1) murder is excusable under certain special circumstances, in which case it ceases to be considered murder. Murder is, by definition, unlawful. It is a legal term, not a moral one. Whether euthanasia is murder or not is a simple question for lawyers: "Will you go to jail for doing it or won't you?"

Whether euthanasia should be considered murder or not is a matter for legislators. Whether euthanasia is good or bad is a deep question for the individual citizen. A right-to-die proponent and a pro-life proponent could both agree that "euthanasia is murder," meaning one will go to jail if caught doing it, but the right-to-die proponent would add, "but under certain circumstances it should not be, just as it is not considered murder now in the Netherlands."

The term "euthanasia" comes from the Greek words eu and thanatos, which combined mean "good death." Hippocrates mentions euthanasia in the Hippocratic Oath, which was written between 400 and 300 B.C.E. The original Oath states: "To please no one will I prescribe a deadly drug nor give advice which may cause his death."

Despite this, the ancient Greeks and Romans generally did not believe that life needed to be preserved at any cost and were, in consequence, tolerant of suicide in cases where no relief could be offered to the dying or, in the case of the Stoics and Epicureans, where a person no longer cared for his life.

The English Common Law, from the 1300s until today, has also disapproved of both suicide and assisting suicide. It distinguished a suicide, who was by definition of unsound mind, from a felo-de-se or "evildoer against himself," who had coolly decided to end it all and thereby perpetrated an infamous crime. Such a person forfeited his entire estate to the crown. Furthermore, his corpse was subjected to public indignities, such as being dragged through the streets and hung from the gallows, and was finally consigned to "ignominious burial"; as the legal scholars put it, the favored method was beneath a crossroads with a stake driven through the body.

Since the nineteenth century, euthanasia has sparked intermittent debates and activism in North America and Europe. According to medical historian Ezekiel Emanuel, it was the availability of anesthesia that ushered in the modern era of euthanasia. In 1828, the first known anti-euthanasia law in the United States was passed in the state of New York, with many other localities and states following suit over a period of several years.

Euthanasia societies were formed in England in 1935 and in the U.S. in 1938 to promote aggressive euthanasia. Although euthanasia legislation did not pass in the U.S. or England, doctor-assisted euthanasia was declared legal in Switzerland in 1937, as long as the person ending the life had nothing to gain. During this period, euthanasia proposals were sometimes mixed with eugenics.

While some proponents focused on voluntary euthanasia for the terminally ill, others expressed interest in involuntary euthanasia for certain eugenic motivations (targeting those such as the mentally "defective"). Meanwhile, during this same era, U.S. court trials tackled cases involving critically ill people who requested physician assistance in dying as well as mercy killings, such as by parents of their severely disabled children (Kamisar 1977).

In 1939, at the outset of World War II, the Nazis began carrying out a now-condemned euthanasia program. In what was code-named Action T4, they involuntarily euthanized children under three who exhibited mental retardation, physical deformity, or other debilitating problems and whom they considered "unworthy of life." This program was later extended to include older children and adults.

Leo Alexander, a medical advisor at the Nuremberg trials after World War II, employed a "slippery slope" argument to suggest that any act of mercy killing will inevitably lead to the mass killing of unwanted persons:

The beginnings at first were a subtle shifting in the basic attitude of the physicians. It started with the acceptance of the attitude, basic in the euthanasia movement, that there is such a thing as life not worthy to be lived. This attitude in its early stages concerned itself merely with the severely and chronically sick. Gradually, the sphere of those to be included in this category was enlarged to encompass the socially unproductive, the ideologically unwanted, the racially unwanted and finally all non-Germans.

Critics of this position point out that there is no relation at all between the Nazi "euthanasia" program and modern debates about euthanasia. The Nazis, after all, used the word "euthanasia" to camouflage mass murder. All victims died involuntarily, and no documented case exists in which a terminal patient was voluntarily killed. The program was carried out in the closest secrecy and under a dictatorship. One of the lessons we should learn from this experience is that secrecy is not in the public interest.

However, due to outrage over Nazi euthanasia crimes, there was very little public support for euthanasia in the 1940s and 1950s, especially for any involuntary, eugenics-based proposals. Catholic Church leaders, among others, began speaking against euthanasia as a violation of the sanctity of life.

Nevertheless, owing to its principle of double effect, Catholic moral theology did leave room for shortening life with painkillers and for what could be characterized as passive euthanasia (Papal statements 1956-1957). On the other hand, judges were often lenient in mercy-killing cases (Humphrey and Wickett, 1991, ch. 4).

During this period, prominent proponents of euthanasia included Glanville Williams (The Sanctity of Life and the Criminal Law) and clergyman Joseph Fletcher ("Morals and Medicine"). By the 1960s, advocacy for a right-to-die approach to voluntary euthanasia increased.

A key turning point in the debate over voluntary euthanasia (and physician-assisted dying), at least in the United States, was the public furor over the case of Karen Ann Quinlan. In 1975, Karen Ann Quinlan, for reasons still unknown, ceased breathing for several minutes. Failing to respond to mouth-to-mouth resuscitation by friends, she was taken by ambulance to a hospital in New Jersey. Physicians who examined her described her as being in "a chronic, persistent, vegetative state," and it was later judged that no form of treatment could restore her to cognitive life. Her father asked to be appointed her legal guardian with the express purpose of discontinuing the respirator that kept Karen alive. After some delay, the Supreme Court of New Jersey granted the request. The respirator was turned off. Karen Ann Quinlan remained alive but comatose until June 11, 1985, when she died at the age of 31.

In 1990, Jack Kevorkian, a Michigan physician, became infamous for encouraging and assisting people in committing suicide, which resulted in a Michigan law against the practice in 1992. Kevorkian was later tried and convicted in 1999 for a murder displayed on television. Meanwhile, in 1990, the Supreme Court approved the use of non-aggressive euthanasia.

Suicide or attempted suicide is, in most states, no longer a criminal offense. This demonstrates that there is a consensus among the states in favor of self-determination; however, the majority of states hold that assisting in suicide is illegal and punishable even when there is written consent from the individual. Let us now see how individual religions regard the complex subject of euthanasia.

In Catholic medical ethics, official pronouncements tend to strongly oppose active euthanasia, whether voluntary or not. Nevertheless, Catholic moral theology does allow dying to proceed without medical interventions that would be considered "extraordinary" or "disproportionate." The most important official Catholic statement is the Declaration on Euthanasia (Sacred Congregation, Vatican 1980).

The Catholic policy rests on several core principles of Catholic medical ethics, including the sanctity of human life, the dignity of the human person, concomitant human rights, and due proportionality in casuistic remedies (Ibid.).

Protestant denominations vary widely in their approach to euthanasia and physician-assisted death. Since the 1970s, Evangelical churches have worked with Roman Catholics on a sanctity-of-life approach, though the Evangelicals may be adopting a more unconditional opposition. While liberal Protestant denominations have largely eschewed euthanasia, many individual advocates (such as Joseph Fletcher) and euthanasia-society activists have been Protestant clergy and laity. As physician-assisted dying has obtained greater legal support, some liberal Protestant denominations have offered religious arguments and support for limited forms of euthanasia.

Not unlike the trend among Protestants, Jewish movements have become divided over euthanasia since the 1970s. Generally, Orthodox Jewish thinkers oppose voluntary euthanasia, often vigorously, though there is some backing for voluntary passive euthanasia in limited circumstances (Daniel Sinclair, Moshe Tendler, Shlomo Zalman Auerbach, Moshe Feinstein). Likewise, within the Conservative Judaism movement, there has been increasing support for passive euthanasia. In Reform Judaism responsa, the preponderance of anti-euthanasia sentiment has shifted in recent years toward increasing support for certain forms of passive euthanasia.

In Theravada Buddhism, a monk can be expelled for praising the advantages of death, even if he simply describes the miseries of life or the bliss of the afterlife in a way that might inspire a person to commit suicide or pine away to death. In caring for the terminally ill, one is forbidden to treat a patient so as to bring on death faster than would occur if the disease were allowed to run its natural course (Buddhist Monastic Code I: Chapter 4).

In Hinduism, the Law of Karma states that any bad action in one lifetime will be reflected in the next. Euthanasia could be seen as murder, releasing the Atman before its time. However, when a body is in a vegetative state with no quality of life, it could be argued that the Atman has already left. When avatars come down to earth, they normally do so to help out humankind. Since they have already attained Moksha, they choose when they want to leave.

Muslims are against euthanasia. They believe that all human life is sacred because it is given by Allah, and that Allah chooses how long each person will live. Human beings should not interfere in this. Euthanasia and suicide are not included among the reasons allowed for killing in Islam.

"Do not take life, which Allah made sacred, other than in the course of justice" (Qur'an 17:33).

"If anyone kills a person, unless it be for murder or spreading mischief in the land, it would be as if he killed the whole people" (Qur'an 5:32).

The Prophet said: "Amongst the nations before you there was a man who got a wound, and growing impatient (with its pain), he took a knife and cut his hand with it and the blood did not stop till he died. Allah said, 'My Slave hurried to bring death upon himself so I have forbidden him (to enter) Paradise'" (Sahih Bukhari 4.56.669).

The debate in the ethics literature on euthanasia is just as divided as the debate on physician-assisted suicide, perhaps more so. "Slippery-slope" arguments are often made, supported by claims about abuse of voluntary euthanasia in the Netherlands.

Arguments against euthanasia are based on the integrity of medicine as a profession. In response, autonomy and quality-of-life-based arguments are made in its support, underscored by claims that when the only way to relieve a dying patient's pain or suffering is terminal sedation with loss of consciousness, death is a preferable alternative, an argument also made in support of physician-assisted suicide.

To summarize, there may be some circumstances in which euthanasia is the morally correct action; however, one should also understand that there are real concerns about legalizing euthanasia, because of the fear of misuse or overuse and the fear of a slippery slope leading to a loss of respect for the value of life. What is needed are improvements in research, the best palliative care available, and, perhaps above all, a willingness at this time to begin modifying homicide laws to include motivational factors as a legitimate defense.

Just as homicide is acceptable in cases of self-defense, it could be considered acceptable if the motive is mercy. Obviously, strict parameters would have to be established that would include patients' request and approval, or, in the case of incompetent patients, advance directives in the form of a living will or family and court approval.

Mirroring this attitude, there are countries and states, such as Albania (in 1999), Australia (1995), Belgium (2002), the Netherlands (2002), the U.S. state of Oregon, and Switzerland (1942), that have, in one way or another, legalized euthanasia; in the case of Switzerland, a long time ago.

In others, such as the U.K. and the U.S., discussion has moved toward ending its illegality. On November 5, 2006, Britain's Royal College of Obstetricians and Gynecologists submitted a proposal to the Nuffield Council on Bioethics calling for consideration of permitting the euthanasia of disabled newborns. The report did not address the current illegality of euthanasia in the United Kingdom, but rather called for reconsideration of its viability as a legitimate medical practice.

In the U.S., recent Gallup Poll surveys showed that more than 60 percent of Americans supported euthanasia (Carroll 2006; Moore 2005), and attempts to legalize euthanasia and assisted suicide have resulted in ballot initiatives and legislative bills within the United States over the last 20 years. For example, Washington voters saw Ballot Initiative 119 in 1991, California placed Proposition 161 on the ballot in 1992, Michigan included Proposal B on its ballot in 1998, and Oregon passed the Death with Dignity Act. In 2000, the United States Supreme Court ruled on the constitutionality of assisted suicide, recognizing individual interests in deciding how, rather than whether, they will die.

Perhaps a fitting conclusion to the subject could be the Japanese suggestion for a law governing euthanasia:


Follow this link:

Euthanasia - New World Encyclopedia

Posted in Euthanasia | Comments Off on Euthanasia – New World Encyclopedia

Euthanasia and Physician Assisted Suicide – All sides

Posted: at 11:01 am


A root cause of the desire to commit suicide is often depression, which can often be controlled with medication. If you are depressed, I strongly recommend that you seek medical help to see if your depression can be lifted.

Another cause of suicidal ideation is often intolerable levels of pain associated with a terminal illness, like cancer. Many physicians are reluctant to prescribe high levels of some pain killers out of fear that the person will become addicted to them. If you are suffering from pain in spite of medication, try insisting on better levels or types of pain killers. Recruit friends and family to intercede with your physician if you can.

If you feel overwhelmed and lack an effective support system of friends and family, consider tapping into the services of a crisis hotline. These are called by various names: distress centers, crisis centers, suicide prevention centers, etc. Their telephone numbers can often be found in the first page(s) of your telephone directory. If you cannot find a number for a center in your area, try phoning directory assistance at 4-1-1.

In the United States, you can call 1-800-273-TALK. See: http://www.suicidepreventionlifeline.org/ They will direct you to a crisis center in your area.

U.S. Crisis Center map

Crisis centers, distress centers, and similar services are often confidential; you can phone them at any time of the day or night for support, and you can usually remain anonymous.

Wikipedia lists suicide crisis lines for many countries, from Australia to the United States, at: https://en.wikipedia.org/ Although these lines are often called "suicide prevention lines" or "crisis lines," most of the people calling are not suicidal or in crisis, but are in distress. So don't be reluctant to call them just because you are not suicidal or in crisis.


Throughout North America, committing suicide or attempting to commit suicide is no longer a criminal offense. However, helping another person commit suicide is generally considered a criminal act. A few exceptions are:

There were four failed ballot initiatives between 1991 and 2000:

Between 1994 and 2016, more than 75 legislative bills to legalize PAS were introduced in at least 21 states. Almost all failed to become law. 4

The author of this section is approaching his 80th birthday and is in good health. To him, end-of-life issues have taken on a personal aspect. Being an Agnostic, he doubts the existence of an afterlife. He does not fear death. He does not fear being dead. However, he has considerable fear about the process of dying, which for many people in North America is an agonizingly painful and lengthy process, during which one's enjoyment of life often drops to zero and becomes negative without any hope that it will return to positive territory. Fortunately for him, he lives in Canada which -- like all other developed countries except for the U.S. -- has universal health care. So he will receive competent medical attention. Unfortunately, pain management is often as poorly handled in Canada as it is in the U.S. He regards suicide as a civil right and would prefer to have access to a means of suicide if life becomes unbearable. He thus strongly supports legalizing physician-assisted suicide.

He is critical of PAS laws that have been passed to date because they generally give access to assisted dying only to terminally ill people who are expected to die in the near future of natural causes. They do not do anything for people who experience chronic, overwhelming pain with no hope of relief for years.

He has attempted to remain impartial, objective and fair while writing these essays.


See the article here:

Euthanasia and Physician Assisted Suicide - All sides

Posted in Euthanasia | Comments Off on Euthanasia and Physician Assisted Suicide – All sides