Should Science End Humankind?

Posted: November 17, 2014 at 3:40 am

"I want you to hold off on your intellectual gag response," the speaker told us. "I want you to stay with me through this 'til we get to the end."

The speaker was Paul Horn, former executive director of research at IBM. He's the man behind Watson, the machine that beat humans at Jeopardy. Horn is a highly informed, deep thinker on future technology. His talk was called "The Co-Evolution of Humans and Machines." His purpose was to get us thinking more deeply about a revolution that, if it comes, would be unlike anything humanity has experienced so far in its long history.

Horn's main argument was that, in the near future, we will build machines surpassing us in intelligence. The machines those machines then build will surpass their creators' intelligence. This process will rapidly continue until, very soon, it yields a new force on the planet: superintelligence. This runaway process is often called the "singularity," and Horn's main job was to argue that, given current trends in technology, something more or less like it is coming.

What happens next (not the subject of Horn's talk) depends on your level of optimism. If you think things will turn out badly, well, then, you know the story. Skynet. The Matrix. Robot overlords.

But if you're an optimist, then you think something wonderful is going to happen. With the help of our super-intelligent machines, we become more.

"More what?" you ask. Well, more than human. We become the next step in evolution and that will mean humanity, as we know it, will come to an end. What comes next will be a new post-human era (transhumanism, the step in between, is an idea we've covered before in this blog).

But now comes the real question. Even under the most optimistic scenario where a post-human transformation is available to everyone regardless of race, creed or (the more likely stumbling block) economic status, is it still a good idea? More to the point, is actively developing technologies to put us at the intellectual level of a schnauzer relative to future post-human beings ethical, just and proper?

Nick Bostrom, a philosopher at Oxford, identifies the core value of transhumanism in the ideal of human potential. Thus, for a transhumanist, raising future generations to the heights of our current potential is all that matters. As Bostrom puts it:

"This affirmation of human potential is offered as an alternative to customary injunctions against playing God, messing with nature, tampering with our human essence, or displaying punishable hubris."

Bostrom runs through the limits that can be overcome when we transcend the current version of humanity: lifespan, intelligence, bodily functionality, sensory modalities, special faculties and sensibilities. Thus, in a post-human world our children's children may live for centuries, see in all wavelengths of the spectrum and think trillions of times faster and more deeply than we can even imagine.
