Yudkowsky – Simplified Humanism

Frank Sulloway once said: "Ninety-nine per cent of what Darwinian theory says about human behavior is so obviously true that we don't give Darwin credit for it. Ironically, psychoanalysis has it over Darwinism precisely because its predictions are so outlandish and its explanations are so counterintuitive that we think, 'Is that really true? How radical!' Freud's ideas are so intriguing that people are willing to pay for them, while one of the great disadvantages of Darwinism is that we feel we know it already, because, in a sense, we do."

Suppose you find an unconscious six-year-old girl lying on the train tracks of an active railroad. What, morally speaking, ought you to do in this situation? Would it be better to leave her there to get run over, or to try to save her? How about if a 45-year-old man has a debilitating but nonfatal illness that will severely reduce his quality of life: is it better to cure him, or not cure him?

Oh, and by the way: This is not a trick question.

I answer that I would save them if I had the power to do so: both the six-year-old on the train tracks, and the sick 45-year-old. The obvious answer isn't always the best choice, but sometimes it is.

I won't be lauded as a brilliant ethicist for my judgments in these two ethical dilemmas. My answers are not surprising enough that people would pay me for them. If you go around proclaiming "What does two plus two equal? Four!" you will not gain a reputation as a deep thinker. But it is still the correct answer.

If a young child falls on the train tracks, it is good to save them, and if a 45-year-old suffers from a debilitating disease, it is good to cure them. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle which says "Life is good, death is bad; health is good, sickness is bad." If so (and here we enter into controversial territory) we can follow this general principle to a surprising new conclusion: If a 95-year-old is threatened by death from old age, it would be good to drag them from those train tracks, if possible. And if a 120-year-old is starting to feel slightly sickly, it would be good to restore them to full vigor, if possible. With current technology it is not possible. But if the technology became available in some future year (given sufficiently advanced medical nanotechnology, or such other contrivances as future minds may devise), would you judge it a good thing to save that life, and stay that debility?

The important thing to remember, which I think all too many people forget, is that it is not a trick question.

Transhumanism is simpler (requires fewer bits to specify) because it has no special cases. If you believe professional bioethicists (people who get paid to explain ethical judgments), then the rule "Life is good, death is bad; health is good, sickness is bad" holds only until some critical age, and then flips polarity. Why should it flip? Why not just keep on with life-is-good? It would seem that it is good to save a six-year-old girl, but bad to extend the life and health of a 150-year-old. Then at what exact age does the term in the utility function go from positive to negative? Why?
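To make the "fewer bits" point concrete, here is a minimal sketch in Python (my own illustration, not from the essay; the function names and the cutoff value 95 are invented for the example). The simple rule needs no parameters at all; the special-case rule cannot even be written down without committing to a cutoff:

    def value_of_saving_a_life(age):
        # The rule with no special cases: saving a life is always good,
        # regardless of the age passed in.
        return +1

    def value_of_saving_a_life_with_cutoff(age, critical_age=95):
        # The special-case rule as the essay characterizes it: the term
        # flips polarity at some critical age. The 95 is arbitrary; any
        # such rule must pick *some* number, and then justify it.
        return +1 if age < critical_age else -1

The second function is strictly longer to specify: it carries an extra parameter whose value, and whose very existence, demands a justification that the first rule never needs.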

As far as a transhumanist is concerned, if you see someone in danger of dying, you should save them; if you can improve someone's health, you should. There, you're done. No special cases. You don't have to ask anyone's age.

You also don't ask whether the remedy will involve only primitive technologies (like a stretcher to lift the six-year-old off the railroad tracks); or technologies invented less than a hundred years ago (like penicillin) which nonetheless seem ordinary because they were around when you were a kid; or technologies that seem scary and sexy and futuristic (like gene therapy) because they were invented after you turned 18; or technologies that seem absurd and implausible and sacrilegious (like nanotech) because they haven't been invented yet. Your ethical dilemma report form doesn't have a line where you write down the invention year of the technology. Can you save lives? Yes? Okay, go ahead. There, you're done.

Suppose a boy of 9 years, who has tested at IQ 120 on the Wechsler-Bellevue, is threatened by a lead-heavy environment or a brain disease which will, if unchecked, gradually reduce his IQ to 110. I reply that it is a good thing to save him from this threat. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle saying that intelligence is precious. Now the boy's sister, as it happens, currently has an IQ of 110. If the technology were available to gradually raise her IQ to 120, without negative side effects, would you judge it good to do so?

Well, of course. Why not? It's not a trick question. Either it's better to have an IQ of 110 than 120, in which case we should strive to decrease IQs of 120 to 110. Or it's better to have an IQ of 120 than 110, in which case we should raise the sister's IQ if possible. As far as I can see, the obvious answer is the correct one.

But, you ask, where does it end? It may seem well and good to talk about extending life and health out to 150 years, but what about 200 years, or 300 years, or 500 years, or more? What about when, in the course of properly integrating all these new life experiences and expanding one's mind accordingly over time, the equivalent of IQ must go to 140, or 180, or beyond human ranges?

Where does it end? It doesn't. Why should it? Life is good, health is good, beauty and happiness and fun and laughter and challenge and learning are good. This does not change for arbitrarily large amounts of life and beauty. If there were an upper bound, it would be a special case, and that would be inelegant.

Ultimate physical limits may or may not permit a lifespan of at least length X for some X, just as the medical technology of a particular century may or may not permit it. But physical limitations are questions of simple fact, to be settled strictly by experiment. Transhumanism, as a moral philosophy, deals only with the question of whether a healthy lifespan of length X is desirable if it is physically possible. Transhumanism answers yes for all X. Because, you see, it's not a trick question.
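One compact way to write that answer down (my notation, not the essay's): let possible(X) mean "a healthy lifespan of length X is physically achievable" and desirable(X) mean "such a lifespan would be good." The transhumanist position is then a single unbounded quantifier,

    \forall X :\; \mathrm{possible}(X) \rightarrow \mathrm{desirable}(X)

with no cutoff age and no sign flip appearing anywhere in the formula.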

So that is transhumanism: loving life without special exceptions and without upper bound.

Can transhumanism really be that simple? Doesn't that make the philosophy trivial, if it has no extra ingredients, just common sense? Yes, in the same way that the scientific method is nothing but common sense.

Then why have a complicated special name like "transhumanism"? For the same reason that "scientific method" or "secular humanism" have complicated special names. If you take common sense and rigorously apply it, through multiple inferential steps, to areas outside everyday experience, successfully avoiding many possible distractions and tempting mistakes along the way, then it often ends up as a minority position and people give it a special name.

But a moral philosophy should not have special ingredients. The purpose of a moral philosophy is not to look delightfully strange and counterintuitive, or to provide employment to bioethicists. The purpose is to guide our choices toward life, health, beauty, happiness, fun, laughter, challenge, and learning. If the judgments are simple, that is no black mark against them: morality doesn't always have to be complicated.

There is nothing in transhumanism but the same common sense that underlies standard humanism, rigorously applied to cases outside our modern-day experience. A million-year lifespan? If it's possible, why not? The prospect may seem very foreign and strange, relative to our current everyday experience. It may create a sensation of future shock. And yet is life a bad thing?

Could the moral question really be just that simple?

Yes.
