The Prometheus League
Category Archives: Post Human
Post-Human by David Simpson – facebook.com
Posted: April 4, 2016 at 1:40 am
Bridget Graham like you've never seen her before. I want to say something about Bridget's talent. For whatever the clout I currently have in filmmaking is worth, I wish Hollywood productions would recognize this relatively undiscovered but supremely talented actor. Yes, she's been amazing in the parts she's had, but I've seen firsthand how great this leading lady in waiting actually is.
Case in point: after meeting with her for several hours last night about a secret horror/...psychological film project for which only Bridget, Jenny, and I know the story, she offered to screen test on the spot. We couldn't resist, because we knew what the screen test entailed. The character suffers from a serious mental disorder, and the moment she was offering to portray was the moment when the character hits rock bottom. We knew Bridget had always been able to do anything she promised before, but we couldn't fathom that she could pull this off with so little notice. Five minutes before this, Bridget looked like the girl you all know and love, smiling and being enthusiastic and joking with Jenny. Then, after her offer, while she got ready in the washroom, we grabbed our camera and set up. Bridget emerged from the washroom having completely and mesmerizingly transformed into the character. She was crying, homicidal, hurt but also raging, and Jenny and I were in awe.
I'm writing this because I know you all love Bridget, but also because this is a public post, and I know there's a chance someone with a film project in the works that can let Bridget shine will read this, and it's about time for the world to see how incredible she is. I ask only one thing in return, which is that you let me have her back for my movies! In the meantime, I'm happy to keep having access to seriously the most talented actress I've ever seen! From Jenny and me, thank you, Bridget, and thank you, tribe, for helping to make all of our dreams come true!
See the article here:
Post-Human by David Simpson - facebook.com
Posted in Post Human
Comments Off on Post-Human by David Simpson – facebook.com
Post-Human on Vimeo
Posted: at 1:40 am
Post-Human is a sci-fi proof-of-concept short based on the award-winning and bestselling series of novels by me, David Simpson. Amazingly, filmed over just three hours by a crew of three, the short depicts the opening of Post-Human, drawing back the curtain on the Post-Human world and letting viewers see the world and characters they've only been able to imagine previously. You'll get a taste of a world where everyone is immortal, everyone has an onboard mental "mind's eye" computer, nanotechnology can make your every dream a reality, and, thanks to the magnetic targeted-fusion implants every post-human has, everyone can fly (and yep, there's flying in this short!). But there's a dark side to this brave new world, including the fact that every post-human is monitored from the inside out, and the one artificial superintelligence running the show might be about to make its first big mistake. 😉
The entire crew was only three people, including me, and I was behind the camera at all times. The talent is Madison Smith (Legends of Tomorrow, Supernatural) as James Keats, and Bridget Graham (Pixels, Manhattan Undying, Hemlock Grove) as his wife, Katherine. Because the spectacular location was so expensive, the entire short had to be filmed in three hours, so we had to be lean and fast. What a rush! (Pun intended.)
The concept was to try to replicate what a full-length feature would look and feel like by adapting the opening of Post-Human, right up to what would be the opening credits. Of course, as I was producing the movie myself, we only had a micro-budget, but after researching the indie films here on Vimeo over the last year, I became convinced that we could create a reasonable facsimile of what a big-budget production would look like and hopefully introduce this world to many more people who aren't necessarily aficionados of sci-fi exclusively on the Kindle. While the series has been downloaded over a million times since 2012, I've always intended for it to be adapted for film, and I'm excited to have, in some small measure, finally succeeded.
Many thanks to Ivan Torrent, an unbelievably talented composer who allows his work to be used for free for non-commercial films. We set the pacing of the opening to his incredible work, and we hope we can expose him to thousands of new listeners through our movie.
Also, a huge thank you to Michael Eng, who is literally the only visual effects artist to have worked on this film. He did everything CG that you see (and some that you dont see) and is the reason the effects have such a unified, compelling look.
Thank you to Sanha Cho, my former student and now a film student at NYU for helping us with the cameras during our rushed set ups.
And my biggest thank you has to go to my wife, Jenny, who worked just as hard as I did on this project and deserves a ton of credit. I can't believe how lucky I am to have a wife who supports my dreams like this. Jenny, you are the BEST! I love you!
And for you, the Vimeo community, a huge thank you! If I hadn't seen such incredible films on here, done with micro-budgets and affordable but powerful cameras, this would have been impossible. A special mention is deserved by Ian Watt, who made beautiful films with a Blackmagic Pocket Cinema Camera, a Glidecam HD4000, a Metabones Speed Booster, some nice filters, and a Rokinon 16mm f/2.0 lens. We ended up opting for the Sigma 18-35mm, but we basically had the same setup. Add a couple of tripods and a slider, and you've got what you need to make a great short film! We also used a Zoom audio recorder and a Rode mic, and we edited the film in Final Cut and the sound in Adobe Audition. The colour correction was done in DaVinci Resolve.
And yes, we did it all! Everything in post was done by Jenny, myself, and Michael. We even bought a second-hand MakerBot and 3D printed the original design for James' helmet! It was a heck of a lot of work, but it was a labor of love and we're very proud of it. I hope you'll enjoy it too, and if you feel like being extra awesome, please share what we did or leave a positive comment to help us get a feature film off the ground. 🙂
And a huge thank you to the growing Post-Human tribe of readers, who've made attempting something like this possible by supporting the series online so enthusiastically and making my dream of becoming a professional novelist a reality!
CREDITS Written & Directed by David Simpson posthumanmedia@gmail.com
Th3 Awak3n1ng, Blue Factor and Angelus Ex Inferno by Ivan Torrent goo.gl/WXphk1
Madison Smith imdb.com/name/nm5073287/
Bridget Graham imdb.com/name/nm5016485/
Michael Eng Visual Effects youtube.com/channel/UCdNwqEFLCUJKEVTI_NbQPgQ linkedin.com/pub/michael-eng/96/83a/882
Post-Human VFX Breakdown: youtube.com/watch?v=ANFZRfNPrPA
Follow this link:
Post-Human on Vimeo
Human Resources, Learning, and Leadership: Our Ten …
Posted: October 19, 2015 at 4:40 am
2014 will be an exciting and challenging year for HR, learning, and talent professionals. (Download our 66-page Predictions Report here.)
Global economic growth will create a new level of competition for people. HR organizations will shift their focus from cost reduction to retention and engagement. Technology will continue to make the world a smaller place, forcing companies to improve their employment brand in every possible way. Data will become a new currency. Leadership will continue to be in short supply. And you, as an HR professional, will have to innovate and adapt to stay ahead.
In this blog I summarize our ten predictions for 2014, detailed in the report linked here. This is our tenth year publishing these predictions, and I hope you find them educational and valuable as you plan your strategies for the year ahead.
2014: The Year of the Employee:
Attraction, Retention, and Engagement Will Really Matter
For the first time in nearly a decade, this year you will find the issues of retention, engagement, and "attraction of talent" at the top of your priority list. We are just completing a major global study (Deloitte's Human Capital Trends 2014, coming soon) and found that the top two people issues facing organizations in 2014 are leadership and retention. These are the problems we face in a dynamic, growing global economy.
This year the power will shift: high-performing employees will start to exert control. Top people with key skills (engineering, math, life sciences, energy) will be in short supply. Thanks to the US healthcare laws, people will feel more free to change jobs. And companies that can't engage and attract Millennials will lose out.
While there will still be high levels of unemployment in places, generally people have changed their perspectives. They want work which is meaningful, rewarding, and enjoyable. Top performers will seek out career growth. Mid-level staff will strive for leadership development. And you, as an HR organization, will have to compete, adapt, and innovate to stay ahead.
1. Talent, Skills, and Capability Needs Become Global
In 2014 key skills will be scarce. Software engineering, energy and life sciences, mathematics and analytics, IT, and other technical skills are in short supply. And unlike prior years, this problem is no longer one of "hiring top people" or "recruiting better than your competition." Now we need to source and locate operations around the world to find the skills we need.
You must expand your sourcing and recruiting to a global level. Locate work where you can best find talent. And build talent networks which attract people around the world.
2. Integrated Capability Development Replaces Training
The "training department" will be renamed "capability development." Companies will find skills in short supply, and they will have to build a supply chain for talent. Partner with universities, establish apprentice programs, create developmental assignments, and focus on continuous learning. Companies that focus on continuous learning in 2014 will attract the best and build for the future.
3. Redesign of Performance Management Accelerates
The old-fashioned performance review is slowly going out the window. In 2014 companies will aggressively redesign their appraisal and evaluation programs to focus on coaching, development, continuous goal alignment, and recognition. The days of "stacked ranking" are slowly going away in today's talent-constrained workplace, to be replaced by a focus on engaging people and helping them perform at extraordinary levels.
4. Redefine Engagement: Focus on Passion and the Holistic Work Environment
As one HR manager recently put it, "our employees are no longer looking for a career, they're looking for an experience." Your job in 2014 is to make sure that experience is rewarding, exciting, and empowering.
5. Take Talent Mobility and Career Development Seriously
Talent mobility is with us for good: thanks to tools like LinkedIn, Twitter, and Facebook people can find new jobs in a heartbeat. This means you, as an employer, need to provide internal talent mobility and career growth in your own organization. 2014 is the time to build a "facilitated talent mobility" strategy which includes open access to internal positions, employee assessment tools, interview guides, and leadership values that focus on internal development.
Are your managers paid to "consume talent" or to "produce talent"? Remember, the best source of skills is within your own organization - if you cannot make internal mobility easy, good people will go elsewhere.
6. Redesign and Reskill the HR Function
Surprise: in our global Human Capital Trends research the need to "Reskill HR" was rated one of the top five challenges in every geography around the world. Why? Because HR itself is changing dramatically and we need to continuously skill our own teams to maintain our relevance and value.
Our new High-Impact HR research, scheduled for launch in early 2014, shows statistically that high-performing companies invest in HR skills development, external intelligence, and specialization. In 2014 if you aren't reinvesting in HR, you'll likely fall behind.
7. Reinvent and Expand Focus on Talent Acquisition
As the economy improves you will need to source and recruit more aggressively and intelligently. The talent acquisition market is the fastest-changing part of HR: new social recruiting, talent networks, big data, assessment science, and recruiting platforms are being launched every month.
In 2014 organizations will need to integrate their talent acquisition teams, develop a global strategy, and expand their use of analytics, big data, and social networks. Your employment brand now becomes more strategic than ever - so partner with your VP of Marketing if you haven't already. Today your ability to recruit is directly dependent on your engagement and retention strategy - what your employees experience is what is communicated to the outside world.
8. Continued Explosive Growth in HR Technology and Content Markets
The HR technology and content markets will expand again in 2014. ERP players (Oracle, SAP, Workday, ADP) are all delivering integrated solutions now. IBM, CornerstoneOnDemand, PeopleFluent, SumTotal, and dozens of other fast-growing talent management companies are now offering end-to-end solutions. And most now offer integrated analytics solutions as well.
Mobile apps, MOOCs, expanded use of Twitter, and an explosion in the use of video have created a need to continuously invest in HR technology. In 2014 the theme is "simplify" - understand technology but keep it simple. Employees are already overwhelmed, and we need to make these tools and content easy to use. The word for 2014 is "adoption" - make technology easy to use and it will deliver great value.
9. Talent Analytics Comes to the Front of the Stage
Talent Analytics is red hot. More than 60% of you are increasing investment in this area and company after company is uncovering new secrets to workforce performance each day. In 2014 you should build a talent analytics center of excellence and invest in the infrastructure, data quality, and integration tools you need. This market is finally here, and companies that excel in talent analytics have improved their recruiting by 2X, leadership pipeline by 3X, and financial performance as well.
10. Innovation Comes to HR: The New, Bold CHRO
One of the top three challenges companies now face is "reskilling their HR team." This points to the fact that HR itself, as a business function, is undergoing radical change. Today's HR organization is no longer judged by its administrative efficiency - it is judged by its ability to acquire, develop, retain, and help manage talent. And more and more, HR is being asked to become "data-driven" - to understand how to best manage people based on real data, not just judgment or good ideas.
As a result of these changes, our research shows a new model for HR emerging - one we call High-Impact HR. In this new world HR professionals are highly trained specialists, they act as consultants, and they operate in "networks of expertise" not just "centers of expertise." And driving this new world is a strong-willed, business-driven CHRO. In 2014 organizations should focus on innovation, new ideas, and leveraging technology to drive value in HR. This demands an integrated team, a focus on skills and capabilities within HR, and strong HR leadership.
Bottom line:
2014 looks to be an exciting and critically important year for Human Resources. The economy will grow, employees will be in charge, and HR's role in business success will be more important than ever.
(Click here to download the entire 66-page Bersin 2014 Predictions Report.)
Read the original here:
Human Resources, Learning, and Leadership: Our Ten ...
posthuman.TV – POSTHUMAN.TV
Posted: September 20, 2015 at 3:40 pm
First Publication of
W.B. Yeats's Chess Boards
Press Release UK 1:00 GMT 14th Oct 2013
Steve Nichols was the first to publish the four Mathers-Westcott Enochian chess boards back in 1982. After much further research, Steve is now delighted to be able to make public W.B. Yeats's sixteen (16) Sub-Elemental Enochian chessboards. These 'Celtic' board designs look and resonate very differently from the four Golden Dawn Egyptian Enochian Chess boards (Cities of Pyramids). These boards can be used for game-play or 'active' divination (using standard Enochian Chess pieces), used ritually for ceremonial work (when placed on the 16 compass points they form an 'astral temple' or protected space), and also used for skrying. They can be hung or displayed as striking artworks when not in use. Chessboards are printed on thick extra-glossy substrates. Pieces need to be cut out before use; assembly is very simple.
PRESS REVIEWS, BLOGS and links
WB Yeats Castle of Heroes Facebook
Enochian Chess Facebook
http://www.enlighteningtimes.co.uk/2013/12/enochian-chess-and-publication-of-wb.html
The rest is here:
posthuman.TV - POSTHUMAN.TV
Pax Gaea Index Page
Posted: September 6, 2015 at 3:40 pm
Pax Gaea is about our hope for the people of the world, and while it may appear this fledgling website has little to do with the concept, it is but a seed.
We are the Carrolls. On July 28, 2006, we sold our home in Wilmington, North Carolina, and embarked on an adventure that carried us to Patzcuaro, nestled in the mountains of central Mexico. Our purpose was to remove ourselves from the familiar, to experience a culture different from our own, and to write a novel - in our own mysterious way, to nurture this idea that we have more in common than our differences tend to dictate. In April of 2007 we returned to the States after completing the novel and have settled for the next year or so in Nags Head, on the Outer Banks of North Carolina, while we await publication of Thatcher.
No matter how powerful the forces that try to drive us apart, if our will is greater than those forces of division, we can be drawn back together ...
In 1718, one of the most notorious pirates of all time was killed off the coast of North Carolina. History chronicles the last eighteen months of his life and makes stabs at his origins ... but really, who was this man of many names? Where did he come from? And what motivated him to lead this campaign of terror, striking fear into the hearts of all who plied the high seas from the Caribbean to the Atlantic, causing all to tremble at the mention of the name Blackbeard ...
As part of her home schooling project in 2006 and 2007, Abigail studied all of the countries of the world and compiled a series of comprehensive reports and photographs of each nation. Since then we've attempted to diligently maintain these pages, which are some of the most timely and comprehensive on the web.
Thatcher is our novel of historical fiction, coming soon! It explores the fascinating and highly edited history of European colonialism in the late 17th and early 18th centuries, highlighting the pirates and privateers, illegal and legal, who profited from the expanding global trade of the era. In an age of entrepreneurialism, a man may cross the fine line from capitalist to criminal, depending on who is making the definition. Thatcher explores what happens when a man who is merely trying to make his mark discovers the rules are written against him, and what happens when he decides to change the rules, following his own code in order to achieve his ambitions.
This is not your typical story of a one-dimensional, filthy brigand with a penchant for saying "Aaargh!" From the ports of England to the colonial New World, from the frozen winter of Moscow to the balmy shores of Madagascar, Thatcher is filled with action, intrigue and interesting characters, intertwining fact with fiction. As for the man himself, a shooting star of mythic proportion, Thatcher's Blackbeard possesses a plausible yet fantastical pedigree that forms a highly complex, multi-dimensional man who challenges the bounds of his circumstances and is led to the inevitable fate written by the powers who, then and now, control the civilized world.
See the original post:
Pax Gaea Index Page
The Future of Humanity – Nick Bostrom’s Home Page
Posted: at 3:40 pm
Nick Bostrom
Future of Humanity Institute
Faculty of Philosophy & James Martin 21st Century School
Oxford University
[Complete draft circulated (2007)]
[Published in New Waves in Philosophy of Technology, eds. Jan-Kyrre Berg Olsen, Evan Selinger, & Soren Riis (New York: Palgrave Macmillan, 2009): 186-216]
[Reprinted in the journal Geopolitics, History, and International Relations, Vol. 1, No. 2 (2009): 41-78]
[pdf]
The future of humanity is often viewed as a topic for idle speculation. Yet our beliefs and assumptions on this subject matter shape decisions in both our personal lives and public policy, decisions that have very real and sometimes unfortunate consequences. It is therefore practically important to try to develop a realistic mode of futuristic thought about big-picture questions for humanity. This paper sketches an overview of some recent attempts in this direction, and it offers a brief discussion of four families of scenarios for humanity's future: extinction, recurrent collapse, plateau, and posthumanity.
In one sense, the future of humanity comprises everything that will ever happen to any human being, including what you will have for breakfast next Thursday and all the scientific discoveries that will be made next year. In that sense, it is hardly reasonable to think of the future of humanity as a topic: it is too big and too diverse to be addressed as a whole in a single essay, monograph, or even 100-volume book series. It is made into a topic by way of abstraction. We abstract from details and short-term fluctuations and developments that affect only some limited aspect of our lives. A discussion about the future of humanity is about how the important fundamental features of the human condition may change or remain constant in the long run.
What features of the human condition are fundamental and important? On this there can be reasonable disagreement. Nonetheless, some features qualify by almost any standard. For example, whether and when Earth-originating life will go extinct, whether it will colonize the galaxy, whether human biology will be fundamentally transformed to make us posthuman, whether machine intelligence will surpass biological intelligence, whether population size will explode, and whether quality of life will radically improve or deteriorate: these are all important fundamental questions about the future of humanity. Less fundamental questions (for instance, about methodologies or specific technology projections) are also relevant insofar as they inform our views about more fundamental parameters.
Traditionally, the future of humanity has been a topic for theology. All the major religions have teachings about the ultimate destiny of humanity or the end of the world.1 Eschatological themes have also been explored by big-name philosophers such as Hegel, Kant, and Marx. In more recent times the literary genre of science fiction has continued the tradition. Very often, the future has served as a projection screen for our hopes and fears; or as a stage setting for dramatic entertainment, morality tales, or satire of tendencies in contemporary society; or as a banner for ideological mobilization. It is relatively rare for humanity's future to be taken seriously as a subject matter on which it is important to try to have factually correct beliefs. There is nothing wrong with exploiting the symbolic and literary affordances of an unknown future, just as there is nothing wrong with fantasizing about imaginary countries populated by dragons and wizards. Yet it is important to attempt (as best we can) to distinguish futuristic scenarios put forward for their symbolic significance or entertainment value from speculations that are meant to be evaluated on the basis of literal plausibility. Only the latter form of realistic futuristic thought will be considered in this paper.
We need realistic pictures of what the future might bring in order to make sound decisions. Increasingly, we need realistic pictures not only of our personal or local near-term futures, but also of remoter global futures. Because of our expanded technological powers, some human activities now have significant global impacts. The scale of human social organization has also grown, creating new opportunities for coordination and action, and there are many institutions and individuals who either do consider, or claim to consider, or ought to consider, possible long-term global impacts of their actions. Climate change, national and international security, economic development, nuclear waste disposal, biodiversity, natural resource conservation, population policy, and scientific and technological research funding are examples of policy areas that involve long time-horizons. Arguments in these areas often rely on implicit assumptions about the future of humanity. By making these assumptions explicit, and subjecting them to critical analysis, it might be possible to address some of the big challenges for humanity in a more well-considered and thoughtful manner.
The fact that we need realistic pictures of the future does not entail that we can have them. Predictions about future technical and social developments are notoriously unreliable, to an extent that has led some to propose that we do away with prediction altogether in our planning and preparation for the future. Yet while the methodological problems of such forecasting are certainly very significant, the extreme view that we can or should do away with prediction altogether is misguided. That view is expressed, to take one example, in a recent paper on the societal implications of nanotechnology by Michael Crow and Daniel Sarewitz, in which they argue that the issue of predictability is irrelevant:
preparation for the future obviously does not require accurate prediction; rather, it requires a foundation of knowledge upon which to base action, a capacity to learn from experience, close attention to what is going on in the present, and healthy and resilient institutions that can effectively respond or adapt to change in a timely manner.2
Note that each of the elements Crow and Sarewitz mention as required for the preparation for the future relies in some way on accurate prediction. A capacity to learn from experience is not useful for preparing for the future unless we can correctly assume (predict) that the lessons we derive from the past will be applicable to future situations. Close attention to what is going on in the present is likewise futile unless we can assume that what is going on in the present will reveal stable trends or otherwise shed light on what is likely to happen next. It also requires non-trivial prediction to figure out what kind of institution will prove healthy, resilient, and effective in responding or adapting to future changes.
The reality is that predictability is a matter of degree, and different aspects of the future are predictable with varying degrees of reliability and precision.3 It may often be a good idea to develop plans that are flexible and to pursue policies that are robust under a wide range of contingencies. In some cases, it also makes sense to adopt a reactive approach that relies on adapting quickly to changing circumstances rather than pursuing any detailed long-term plan or explicit agenda. Yet these coping strategies are only one part of the solution. Another part is to work to improve the accuracy of our beliefs about the future (including the accuracy of conditional predictions of the form "if x is done, y will result"). There might be traps that we are walking towards that we could only avoid falling into by means of foresight. There are also opportunities that we could reach much sooner if we could see them farther in advance. And in a strict sense, prediction is always necessary for meaningful decision-making.4
Predictability does not necessarily fall off with temporal distance. It may be highly unpredictable where a traveler will be one hour after the start of her journey, yet predictable that after five hours she will be at her destination. The very long-term future of humanity may be relatively easy to predict, being a matter amenable to study by the natural sciences, particularly cosmology (physical eschatology). And for there to be a degree of predictability, it is not necessary that it be possible to identify one specific scenario as what will definitely happen. If there is at least some scenario that can be ruled out, that is also a degree of predictability. Even short of this, if there is some basis for assigning different probabilities (in the sense of credences, degrees of belief) to different propositions about logically possible future events, or some basis for criticizing some such probability distributions as less rationally defensible or reasonable than others, then again there is a degree of predictability. And this is surely the case with regard to many aspects of the future of humanity. While our knowledge is insufficient to narrow down the space of possibilities to one broadly outlined future for humanity, we do know of many relevant arguments and considerations which in combination impose significant constraints on what a plausible view of the future could look like. The future of humanity need not be a topic on which all assumptions are entirely arbitrary and anything goes. There is a vast gulf between knowing exactly what will happen and having absolutely no clue about what will happen. Our actual epistemic location is some offshore place in that gulf.5
Most differences between our lives and the lives of our hunter-gatherer forebears are ultimately tied to technology, especially if we understand technology in its broadest sense, to include not only gadgets and machines but also techniques, processes, and institutions. In this wide sense we could say that technology is the sum total of instrumentally useful culturally-transmissible information. Language is a technology in this sense, along with tractors, machine guns, sorting algorithms, double-entry bookkeeping, and Robert's Rules of Order.6
Technological innovation is the main driver of long-term economic growth. Over long time scales, the compound effects of even modest average annual growth are profound. Technological change is in large part responsible for many of the secular trends in such basic parameters of the human condition as the size of the world population, life expectancy, education levels, material standards of living, and the nature of work, communication, health care, war, and the effects of human activities on the natural environment. Other aspects of society and our individual lives are also influenced by technology in many direct and indirect ways, including governance, entertainment, human relationships, and our views on morality, mind, matter, and our own human nature. One does not have to embrace any strong form of technological determinism to recognize that technological capability through its complex interactions with individuals, institutions, cultures, and environment is a key determinant of the ground rules within which the games of human civilization get played out.7
This view of the important role of technology is consistent with large variations and fluctuations in deployment of technology in different times and parts of the world. The view is also consistent with technological development itself being dependent on socio-cultural, economic, or personalistic enabling factors. The view is also consistent with denying any strong version of inevitability of the particular growth pattern observed in human history. One might hold, for example, that in a re-run of human history, the timing and location of the Industrial Revolution might have been very different, or that there might not have been any such revolution at all but rather, say, a slow and steady trickle of invention. One might even hold that there are important bifurcation points in technological development at which history could take either path with quite different results in what kinds of technological systems developed. Nevertheless, under the assumption that technological development continues on a broad front, one might expect that in the long run, most of the important basic capabilities that could be obtained through some possible technology, will in fact be obtained through technology. A bolder version of this idea could be formulated as follows:
Technological Completion Conjecture. If scientific and technological development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.
The conjecture is not tautological. It would be false if there is some possible basic capability that could be obtained through some technology which, while possible in the sense of being consistent with physical laws and material constraints, is so difficult to develop that it would remain beyond reach even after an indefinitely prolonged development effort. Another way in which the conjecture could be false is if some important capability can only be achieved through some possible technology which, while it could have been developed, will not in fact ever be developed even though scientific and technological development efforts continue.
The conjecture expresses the idea that which important basic capabilities are eventually attained does not depend on the paths taken by scientific and technological research in the short term. The principle allows that we might attain some capabilities sooner if, for example, we direct research funding one way rather than another; but it maintains that provided our general techno-scientific enterprise continues, even the non-prioritized capabilities will eventually be obtained, either through some indirect technological route, or when general advancements in instrumentation and understanding have made the originally neglected direct technological route so easy that even a tiny effort will succeed in developing the technology in question.8
One might find the thrust of this underlying idea plausible without being persuaded that the Technological Completion Conjecture is strictly true, and in that case, one may explore what exceptions there might be. Alternatively, one might accept the conjecture but believe that its antecedent is false, i.e. that scientific and technological development efforts will at some point effectively cease (before the enterprise is complete). But if one accepts both the conjecture and its antecedent, what are the implications? What will be the results if, in the long run, all of the important basic capabilities that could be obtained through some possible technology are in fact obtained? The answer may depend on the order in which technologies are developed, the social, legal, and cultural frameworks within which they are deployed, the choices of individuals and institutions, and other factors, including chance events. The obtainment of a basic capability does not imply that the capability will be used in a particular way or even that it will be used at all.
These factors determining the uses and impacts of potential basic capabilities are often hard to predict. What might be somewhat more foreseeable is which important basic capabilities will eventually be attained. For under the assumption that the Technological Completion Conjecture and its antecedent are true, the capabilities that will eventually be attained include all the ones that could be obtained through some possible technology. While we may not be able to foresee all possible technologies, we can foresee many possible technologies, including some that are currently infeasible; and we can show that these anticipated possible technologies would provide a large range of new important basic capabilities.
One way to foresee possible future technologies is through what Eric Drexler has termed "theoretical applied science".9 Theoretical applied science studies the properties of possible physical systems, including ones that cannot yet be built, using methods such as computer simulation and derivation from established physical laws.10 Theoretical applied science will not in every instance deliver a definitive and uncontroversial yes-or-no answer to questions about the feasibility of some imaginable technology, but it is arguably the best method we have for answering such questions. Theoretical applied science, in both its more rigorous and its more speculative applications, is therefore an important methodological tool for thinking about the future of technology and, a fortiori, one key determinant of the future of humanity.
It may be tempting to refer to the expansion of technological capacities as "progress". But this term has evaluative connotations of things getting better, and it is far from a conceptual truth that expansion of technological capabilities makes things go better. Even if empirically we find that such an association has held in the past (no doubt with many big exceptions), we should not uncritically assume that the association will always continue to hold. It is preferable, therefore, to use a more neutral term, such as "technological development", to denote the historical trend of accumulating technological capability.
Technological development has provided human history with a kind of directionality. Instrumentally useful information has tended to accumulate from generation to generation, so that each new generation has begun from a different and technologically more advanced starting point than its predecessor. One can point to exceptions to this trend, regions that have stagnated or even regressed for extended periods of time. Yet looking at human history from our contemporary vantage point, the macro-pattern is unmistakable.
It was not always so. Technological development for most of human history was so slow as to be indiscernible. When technological development was that slow, it could only have been detected by comparing how levels of technological capability differed over large spans of time. Yet the data needed for such comparisons (detailed historical accounts, archeological excavations with carbon dating, and so forth) were unavailable until fairly recently, as Robert Heilbroner explains:
At the very apex of the first stratified societies, dynastic dreams were dreamt and visions of triumph or ruin entertained; but there is no mention in the papyri and cuneiform tablets on which these hopes and fears were recorded that they envisaged, in the slightest degree, changes in the material conditions of the great masses, or for that matter, of the ruling class itself.11
Heilbroner argued in Visions of the Future for the bold thesis that humanity's perception of the shape of things to come has gone through exactly three phases since the first appearance of Homo sapiens. In the first phase, which comprises all of human prehistory and most of history, the worldly future was envisaged, with very few exceptions, as changeless in its material, technological, and economic conditions. In the second phase, lasting roughly from the beginning of the eighteenth century until the second half of the twentieth, worldly expectations in the industrialized world changed to incorporate the belief that the hitherto untamable forces of nature could be controlled through the application of science and rationality, and the future became a great beckoning prospect. The third phase (mostly post-war, but overlapping with the second phase) sees the future in a more ambivalent light: as dominated by impersonal forces, as disruptive, hazardous, and foreboding as well as promising.
Supposing that some perceptive observer in the past had noticed some instance of directionality (be it a technological, cultural, or social trend), the question would have remained whether the detected directionality was a global feature or a mere local pattern. In a cyclical view of history, for example, there can be long stretches of steady cumulative development of technology or other factors. Within a period, there is clear directionality; yet each flood of growth is followed by an ebb of decay, returning things to where they stood at the beginning of the cycle. Strong local directionality is thus compatible with the view that, globally, history moves in circles and never really gets anywhere. If the periodicity is assumed to go on forever, a form of eternal recurrence would follow.
Modern Westerners who are accustomed to viewing history as a directional pattern of development may not appreciate how natural the cyclical view of history once seemed.12 Any closed system with only a finite number of possible states must either settle down into one state and remain in that one state forever, or else cycle back through states in which it has already been. In other words, a closed finite state system must either become static or else start repeating itself. If we assume that the system has already been around for an eternity, then this eventual outcome must already have come about; i.e., the system is already either stuck or is cycling through states in which it has been before. The proviso that the system has only a finite number of states may not be as significant as it seems, for even a system that has an infinite number of possible states may only have finitely many perceptibly different possible states.13 For many practical purposes, it may not matter much whether the current state of the world has already occurred an infinite number of times, or whether an infinite number of states have previously occurred each of which is merely imperceptibly different from the present state.14 Either way, we could characterize the situation as one of eternal recurrence, the extreme case of a cyclical history.
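The finite-state argument above can be illustrated with a minimal simulation (a hypothetical toy example, not anything drawn from the historical literature): any deterministic update rule on a finite set of states must eventually revisit a state it has already occupied, after which its trajectory repeats forever.

```python
# Sketch: a deterministic map on a finite state set must eventually cycle.
# The update rule below is arbitrary, chosen only for illustration.

def find_cycle(update, initial_state):
    """Iterate `update` from `initial_state` until a state repeats.

    Returns (steps_before_cycle, cycle_length).
    """
    seen = {}  # state -> step index at which it was first visited
    state, step = initial_state, 0
    while state not in seen:
        seen[state] = step
        state = update(state)
        step += 1
    first_visit = seen[state]
    return first_visit, step - first_visit

# An arbitrary deterministic rule on the 1000 states {0, ..., 999}.
rule = lambda x: (x * x + 1) % 1000

tail, cycle = find_cycle(rule, 1)
print(tail, cycle)  # after a finite tail, the orbit settles into a cycle
```

Because there are only finitely many states and the rule is deterministic, the `while` loop is guaranteed to terminate: a repeat must occur within at most one step per state.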
In the actual world, the cyclical view is false because the world had a beginning a finite time ago. The human species has existed for a mere two hundred thousand years or so, and this is far from enough time for it to have experienced all possible conditions and permutations of which the system of humans and their environment is capable.
More fundamentally, the reason why the cyclical view is false is that the universe itself has existed for only a finite amount of time.15 The universe started with the Big Bang an estimated 13.7 billion years ago, in a low-entropy state. The history of the universe has its own directionality: an ineluctable increase in entropy. During its process of entropy increase, the universe has progressed through a sequence of distinct stages. In the eventful first three seconds, a number of transitions occurred, including probably a period of inflation, reheating, and symmetry breaking. These were followed, later, by nucleosynthesis, expansion, cooling, and formation of galaxies, stars, and planets, including Earth (circa 4.5 billion years ago). The oldest undisputed fossils are about 3.5 billion years old, but there is some evidence that life already existed 3.7 billion years ago and possibly earlier. Evolution of more complex organisms was a slow process. It took some 1.8 billion years for eukaryotic life to evolve from prokaryotes, and another 1.4 billion years before the first multicellular organisms arose. From the beginning of the Cambrian period (some 542 million years ago), important developments began happening at a faster pace, but still enormously slowly by human standards. Homo habilis, our first human-like ancestors, evolved some 2 million years ago; Homo sapiens some 100,000 years ago. The agricultural revolution began in the Fertile Crescent of the Middle East 10,000 years ago, and the rest is history. The size of the human population, which was about 5 million when we were living as hunter-gatherers 10,000 years ago, had grown to about 200 million by the year 1; it reached one billion in 1835 AD; and today over 6.6 billion human beings are breathing on this planet.16 From the time of the industrial revolution, perceptive individuals living in developed countries have noticed significant technological change within their lifetimes.
All techno-hype aside, it is striking how recent many of the events are that define what we take to be the modern human condition. If we compress the time scale such that the Earth formed one year ago, then Homo sapiens evolved less than 12 minutes ago, agriculture began a little over one minute ago, the Industrial Revolution took place less than 2 seconds ago, the electronic computer was invented 0.4 seconds ago, and the Internet less than 0.1 seconds ago, in the blink of an eye.
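The compressed time scale is simple arithmetic to verify. A rough check (the ~250-year and ~60-year ages assumed for the Industrial Revolution and the electronic computer are round illustrative figures, not claims from the text):

```python
# Map the Earth's ~4.5-billion-year history onto a single year and ask how
# long ago each event falls on that compressed scale.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
EARTH_AGE_YEARS = 4.5e9  # assumed round figure for the age of the Earth

def compressed_seconds(years_ago):
    """Seconds ago on a scale where Earth's whole history lasts one year."""
    return years_ago / EARTH_AGE_YEARS * SECONDS_PER_YEAR

print(compressed_seconds(100_000) / 60)  # Homo sapiens: roughly 12 minutes
print(compressed_seconds(10_000) / 60)   # agriculture: a bit over a minute
print(compressed_seconds(250))           # Industrial Revolution: under 2 s
print(compressed_seconds(60))            # electronic computer: ~0.4 s
```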
Almost all the volume of the universe is ultra-high vacuum, and almost all of the tiny material specks in this vacuum are so hot or so cold, so dense or so dilute, as to be utterly inhospitable to organic life. Spatially as well as temporally, our situation is an anomaly.17
Given the technocentric perspective adopted here, and in light of our incomplete but substantial knowledge of human history and its place in the universe, how might we structure our expectations of things to come? The remainder of this paper will outline four families of scenarios for humanity's future: extinction, recurrent collapse, plateau, and posthumanity.
Unless the human species lasts literally forever, it will at some point cease to exist. In that case, the long-term future of humanity is easy to describe: extinction. An estimated 99.9% of all species that ever existed on Earth are already extinct.18
There are two different ways in which the human species could become extinct: one, by evolving or developing or transforming into one or more new species or life forms, sufficiently different from what came before so as no longer to count as Homo sapiens; the other, by simply dying out, without any meaningful replacement or continuation. Of course, a transformed continuant of the human species might itself eventually terminate, and perhaps there will be a point where all life comes to an end; so scenarios involving the first type of extinction may eventually converge into the second kind of scenario of complete annihilation. We postpone discussion of transformation scenarios to a later section, and we shall not here discuss the possible existence of fundamental physical limitations to the survival of intelligent life in the universe. This section focuses on the direct form of extinction (annihilation) occurring within any very long, but not astronomically long, time horizon (we could say one hundred thousand years, for specificity).
Human extinction risks have received less scholarly attention than they deserve. In recent years, there have been approximately three serious books and one major paper on this topic. John Leslie, a Canadian philosopher, puts the probability of humanity failing to survive the next five centuries at 30% in his book The End of the World.19 His estimate is partly based on the controversial Doomsday argument and on his own views about the limitations of this argument.20 Sir Martin Rees, Britain's Astronomer Royal, is even more pessimistic, putting the odds that humanity will survive the 21st century at no better than 50% in Our Final Hour.21 Richard Posner, an eminent American legal scholar, offers no numerical estimate but rates the risk of extinction "significant" in Catastrophe.22 And I published a paper in 2002 in which I suggested that assigning a probability of less than 25% to existential disaster (no time limit) would be misguided.23 The concept of existential risk is distinct from that of extinction risk. As I introduced the term, an existential disaster is one that causes either the annihilation of Earth-originating intelligent life or the permanent and drastic curtailment of its potential for future desirable development.24
It is possible that a publication bias is responsible for the alarming picture presented by these opinions. Scholars who believe that the threats to human survival are severe might be more likely to write books on the topic, making the threat of extinction seem greater than it really is. Nevertheless, it is noteworthy that there seems to be a consensus among those researchers who have seriously looked into the matter that there is a serious risk that humanity's journey will come to a premature end.25
The greatest extinction risks (and existential risks more generally) arise from human activity. Our species has survived volcanic eruptions, meteoric impacts, and other natural hazards for tens of thousands of years. It seems unlikely that any of these old risks should exterminate us in the near future. By contrast, human civilization is introducing many novel phenomena into the world, ranging from nuclear weapons to designer pathogens to high-energy particle colliders. The most severe existential risks of this century derive from expected technological developments. Advances in biotechnology might make it possible to design new viruses that combine the easy contagion and mutability of the influenza virus with the lethality of HIV. Molecular nanotechnology might make it possible to create weapons systems with a destructive power dwarfing that of both thermonuclear bombs and biowarfare agents.26 Superintelligent machines might be built and their actions could determine the future of humanity and whether there will be one.27 Considering that many of the existential risks that now seem to be among the most significant were conceptualized only in recent decades, it seems likely that further ones still remain to be discovered.
The same technologies that will pose these risks will also help us to mitigate some risks. Biotechnology can help us develop better diagnostics, vaccines, and anti-viral drugs. Molecular nanotechnology could offer even stronger prophylactics.28 Superintelligent machines may be the last invention that human beings ever need to make, since a superintelligence, by definition, would be far more effective than a human brain in practically all intellectual endeavors, including strategic thinking, scientific analysis, and technological creativity.29 In addition to creating and mitigating risks, these powerful technological capabilities would also affect the human condition in many other ways.
Extinction risks constitute an especially severe subset of what could go badly wrong for humanity. There are many possible global catastrophes that would cause immense worldwide damage, maybe even the collapse of modern civilization, yet fall short of terminating the human species. An all-out nuclear war between Russia and the United States might be an example of a global catastrophe that would be unlikely to result in extinction. A terrible pandemic with high virulence and 100% mortality rate among infected individuals might be another example: if some groups of humans could successfully quarantine themselves before being exposed, human extinction could be avoided even if, say, 95% or more of the world's population succumbed. What distinguishes extinction and other existential catastrophes is that a comeback is impossible. A non-existential disaster causing the breakdown of global civilization is, from the perspective of humanity as a whole, a potentially recoverable setback: a giant massacre for man, a small misstep for mankind.
An existential catastrophe is therefore qualitatively distinct from a mere collapse of global civilization, although in terms of our moral and prudential attitudes perhaps we should simply view both as unimaginably bad outcomes.30 One way that civilization collapse could be a significant feature in the larger picture for humanity, however, is if it formed part of a repeating pattern. This takes us to the second family of scenarios: recurrent collapse.
Environmental threats seem to have displaced nuclear holocaust as the chief specter haunting the public imagination. Current-day pessimists about the future often focus on the environmental problems facing the growing world population, worrying that our wasteful and polluting ways are unsustainable and potentially ruinous to human civilization. The credit for having handed the environmental movement its initial impetus is often given to Rachel Carson, whose book Silent Spring (1962) sounded the alarm on pesticides and synthetic chemicals that were being released into the environment with allegedly devastating effects on wildlife and human health.31 The environmentalist forebodings swelled over the following decade. Paul Ehrlich's book The Population Bomb, and the Club of Rome report Limits to Growth, which sold 30 million copies, predicted economic collapse and mass starvation by the eighties or nineties as the result of population growth and resource depletion.32
In recent years, the spotlight of environmental concern has shifted to global climate change. Carbon dioxide and other greenhouse gases are accumulating in the atmosphere, where they are expected to cause a warming of Earth's climate and a concomitant rise in sea levels. The most recent report by the United Nations Intergovernmental Panel on Climate Change, which represents the most authoritative assessment of current scientific opinion, attempts to estimate the increase in global mean temperature that would be expected by the end of this century under the assumption that no efforts at mitigation are made. The final estimate is fraught with uncertainty because of uncertainty about what the default rate of emissions of greenhouse gases will be over the century, uncertainty about the climate sensitivity parameter, and uncertainty about other factors. The IPCC therefore expresses its assessment in terms of six different climate scenarios based on different models and different assumptions. The low model predicts a mean global warming of +1.8°C (uncertainty range 1.1°C to 2.9°C); the high model predicts warming by +4.0°C (2.4°C to 6.4°C).33 Estimated sea level rise predicted by these two most extreme scenarios among the six considered is 18 to 38 cm and 26 to 59 cm, respectively.34
While this prognosis might well justify a range of mitigation policies, it is important to maintain a sense of perspective when we are considering the issue from a "future of humanity" point of view. Even the Stern Review on the Economics of Climate Change, a report prepared for the British Government which has been criticized by some as overly pessimistic, estimates that under the assumption of business-as-usual with regard to emissions, global warming will reduce welfare by an amount equivalent to a permanent reduction in per capita consumption of between 5 and 20%.35 In absolute terms, this would be a huge harm. Yet over the course of the twentieth century, world GDP grew by some 3,700%, and per capita world GDP rose by some 860%.36 It seems safe to say that (absent a radical overhaul of our best current scientific models of the Earth's climate system) whatever negative economic effects global warming will have, they will be completely swamped by other factors that will influence economic growth rates in this century.
There have been a number of attempts by scholars to explain societal collapse, either as a case study of some particular society (such as Gibbon's classic The History of the Decline and Fall of the Roman Empire) or else as an attempt to discover failure modes applying more generally.37 Two examples of the latter genre include Joseph Tainter's The Collapse of Complex Societies and Jared Diamond's more recent Collapse: How Societies Choose to Fail or Succeed. Tainter notes that societies need to secure certain resources, such as food, energy, and natural resources, in order to sustain their populations.38 In their attempts to solve this supply problem, societies may grow in complexity, for example in the form of bureaucracy, infrastructure, social class distinction, military operations, and colonies. At some point, Tainter argues, the marginal returns on these investments in social complexity become unfavorable, and societies that do not manage to scale back when their organizational overheads become too large eventually face collapse.
Diamond argues that many past cases of societal collapse have involved environmental factors such as deforestation and habitat destruction, soil problems, water management problems, overhunting and overfishing, the effects of introduced species, human population growth, and increased per-capita impact of people.39 He also suggests four new factors that may contribute to the collapse of present and future societies: human-caused climate change, build-up of toxic chemicals in the environment, energy shortages, and the full utilization of the Earth's photosynthetic capacity. Diamond draws attention to the danger of "creeping normalcy", referring to the phenomenon of a slow trend being concealed within noisy fluctuations, so that a detrimental outcome that occurs in small, almost unnoticeable steps may be accepted or come about without resistance, even if the same outcome, had it come about in one sudden leap, would have evoked a vigorous response.40
We need to distinguish different classes of scenarios involving societal collapse. First, we may have a merely local collapse: individual societies can collapse, but this is unlikely to have a determining effect on the future of humanity if other advanced societies survive and take up where the failed societies left off. All historical examples of collapse have been of this kind. Second, we might suppose that new kinds of threat (e.g. nuclear holocaust or catastrophic changes in the global environment) or the trend towards globalization and increased interdependence of different parts of the world create a vulnerability to human civilization as a whole. Suppose that a global societal collapse were to occur. What happens next? If the collapse is of such a nature that a new advanced global civilization can never be rebuilt, the outcome would qualify as an existential disaster. However, it is hard to think of a plausible collapse which the human species survives but which nevertheless makes it permanently impossible to rebuild civilization. Supposing, therefore, that a new technologically advanced civilization is eventually rebuilt, what is the fate of this resurgent civilization? Again, there are two possibilities. The new civilization might avoid collapse; and in the following two sections we will examine what could happen to such a sustainable global civilization. Alternatively, the new civilization collapses again, and the cycle repeats. If eventually a sustainable civilization arises, we reach the kind of scenario that the following sections will discuss. If instead one of the collapses leads to extinction, then we have the kind of scenario that was discussed in the previous section. The remaining case is that we face a cycle of indefinitely repeating collapse and regeneration (see figure 1).
While there are many conceivable explanations for why an advanced society might collapse, only a subset of these explanations could plausibly account for an unending pattern of collapse and regeneration. An explanation for such a cycle could not rely on some contingent factor that would apply to only some advanced civilizations and not others, or to a factor that an advanced civilization would have a realistic chance of counteracting; for if such a factor were responsible, one would expect that the collapse-regeneration pattern would at some point be broken when the right circumstances finally enabled an advanced civilization to overcome the obstacles to sustainability. Yet at the same time, the postulated cause for collapse could not be so powerful as to cause the extinction of the human species.
A recurrent collapse scenario consequently requires a carefully calibrated homeostatic mechanism that keeps the level of civilization confined within a relatively narrow interval, as illustrated in figure 1. Even if humanity were to spend many millennia on such an oscillating trajectory, one might expect that eventually this phase would end, resulting in either the permanent destruction of humankind, or the rise of a stable sustainable global civilization, or the transformation of the human condition into a new posthuman condition. We turn now to the second of these possibilities, that the human condition will reach a kind of stasis, either immediately or after undergoing one or more cycles of collapse-regeneration.
Figure 2 depicts two possible trajectories, one representing an increase followed by a permanent plateau, the other representing stasis at (or close to) the current status quo.
The static view is implausible. It would imply that we have recently arrived at the final human condition, even at a time when change is exceptionally rapid. "What we do know," writes the distinguished historian of technology Vaclav Smil, "is that the past six generations have amounted to the most rapid and the most profound change our species has experienced in its 5,000 years of recorded history."41 The static view would also imply a radical break with several long-established trends. If the world economy continues to grow at the same pace as in the last half century, then by 2050 the world will be seven times richer than it is today. World population is predicted to increase to just over 9 billion in 2050, so average wealth would also increase dramatically.42 Extrapolating further, by 2100 the world would be almost 50 times richer than today. A single modest-sized country might then have as much wealth as the entire world has at the present. Over the course of human history, the doubling time of the world economy has been drastically reduced on several occasions, such as in the agricultural transition and the Industrial Revolution. Should another such transition occur in this century, the world economy might be several orders of magnitude larger by the end of the century.43
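The extrapolation above is ordinary compound growth. A rough sketch (the 4.3% annual rate is an assumed stand-in for the last half-century's average, and the 43- and 93-year horizons are measured from approximately the time of writing):

```python
# Compound growth: total wealth multiplier after t years at annual rate g.

def wealth_multiplier(annual_growth, years):
    """Multiplicative increase in total wealth under steady compounding."""
    return (1 + annual_growth) ** years

g = 0.043  # assumed long-run average annual growth rate

print(round(wealth_multiplier(g, 43), 1))  # to 2050: roughly 6-7x richer
print(round(wealth_multiplier(g, 93), 1))  # to 2100: roughly 50x richer
```

The point of the sketch is only that modest-sounding annual rates compound into enormous multipliers over a century, which is why the static view requires a sharp break with the trend.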
Figure 2: Two trajectories: increase followed by plateau; or stasis at close to the current level.
Another reason for assigning a low probability to the static view is that we can foresee various specific technological advances that will give humans important new capacities. Virtual reality environments will constitute an expanding fraction of our experience. The capability of recording, surveillance, biometrics, and data mining technologies will grow, making it increasingly feasible to keep track of where people go, whom they meet, what they do, and what goes on inside their bodies.44
Among the most important potential developments are ones that would enable us to alter our biology directly through technological means.45 Such interventions could affect us more profoundly than modification of beliefs, habits, culture, and education. If we learn to control the biochemical processes of human senescence, healthy lifespan could be radically prolonged. A person with the age-specific mortality of a 20-year-old would have a life expectancy of about a thousand years. The ancient but hitherto mostly futile quest for happiness could meet with success if scientists could develop safe and effective methods of controlling the brain circuitry responsible for subjective well-being.46 Drugs and other neurotechnologies could make it increasingly feasible for users to shape themselves into the kind of people they want to be by adjusting their personality, emotional character, mental energy, romantic attachments, and moral character.47 Cognitive enhancements might deepen our intellectual lives.48
Nanotechnology will have wide-ranging consequences for manufacturing, medicine, and computing.49 Machine intelligence, to be discussed further in the next section, is another potential revolutionary technology. Institutional innovations such as prediction markets might improve the capability of human groups to forecast future developments, and other technological or institutional developments might lead to new ways for humans to organize more effectively.50 The impacts of these and other technological developments on the character of human lives are difficult to predict, but that they will have such impacts seems a safe bet.
Those who believe that developments such as those listed will not occur should consider whether their skepticism is really about ultimate feasibility or merely about timescales. Some of these technologies will be difficult to develop. Does that give us reason to think that they will never be developed? Not even in 50 years? 200 years? 10,000 years? Looking back, developments such as language, agriculture, and perhaps the Industrial Revolution may be said to have significantly changed the human condition. There are at least a thousand times more of us now; and with current world average life expectancy at 67 years, we live perhaps three times longer than our Pleistocene ancestors. The mental life of human beings has been transformed by developments such as language, literacy, urbanization, division of labor, industrialization, science, communications, transport, and media technology.
The other trajectory in figure 2 represents scenarios in which technological capability continues to grow significantly beyond the current level before leveling off below the level at which a fundamental alteration of the human condition would occur. This trajectory avoids the implausibility of postulating that we have just now reached a permanent plateau of technological development. Nevertheless, it does propose that a permanent plateau will be reached not radically far above the current level. We must ask what could cause technological development to level off at that stage.
One conceptual possibility is that development beyond this level is impossible because of limitations imposed by fundamental natural laws. It appears, however, that the physical laws of our universe permit forms of organization that would qualify as a posthuman condition (to be discussed further in the next section). Moreover, there appears to be no fundamental obstacle to the development of technologies that would make it possible to build such forms of organization.51 Physical impossibility, therefore, is not a plausible explanation for why we should end up on either of the trajectories depicted in figure 2.
Another potential explanation is that while theoretically possible, a posthuman condition is just too difficult to attain for humanity ever to be able to get there. For this explanation to work, the difficulty would have to be of a certain kind. If the difficulty consisted merely of there being a large number of technologically challenging steps that would be required to reach the destination, then the argument would at best suggest that it will take a long time to get there, not that we never will. Provided the challenge can be divided into a sequence of individually feasible steps, it would seem that humanity could eventually solve the challenge given enough time. Since at this point we are not so concerned with timescales, it does not appear that technological difficulty of this kind would make any of the trajectories in figure 2 a plausible scenario for the future of humanity.
In order for technological difficulty to account for one of the trajectories in figure 2, the difficulty would have to be of a sort that is not reducible to a long sequence of individually feasible steps. If all the pathways to a posthuman condition required technological capabilities that could be attained only by building enormously complex, error-intolerant systems of a kind which could not be created by trial-and-error or by assembling components that could be separately tested and debugged, then the technological difficulty argument would have legs to stand on. Charles Perrow argued in Normal Accidents that efforts to make complex systems safer often backfire because the added safety mechanisms bring with them additional complexity which creates additional opportunities for things to go wrong when parts and processes interact in unexpected ways.52 For example, increasing the number of security personnel on a site can increase the insider threat, the risk that at least one person on the inside can be recruited by would-be attackers.53 Along similar lines, Jaron Lanier has argued that software development has run into a kind of complexity barrier.54 An informal argument of this kind has also been made against the feasibility of molecular manufacturing.55
Each of these arguments about complexity barriers is problematic. And in order to have an explanation for why humanity's technological development should level off before a posthuman condition is reached, it is not sufficient to show that some technologies run into insuperable complexity barriers. Rather, it would have to be shown that all technologies that would enable a posthuman condition (biotechnology, nanotechnology, artificial intelligence, etc.) will be blocked by such barriers. That seems an unlikely proposition. Alternatively, one might try to build an argument based on complexity barriers for social organization in general rather than for particular technologies, perhaps something akin to Tainter's explanation of past cases of societal collapse, mentioned in the previous section. In order to produce the trajectories in figure 2, however, the explanation would have to be modified to allow for stagnation and plateauing rather than collapse. One problem with this hypothesis is that it is unclear that the development of the technologies requisite to reach a posthuman condition would necessarily require a significant increase in the complexity of social organization beyond its present level.
A third possible explanation is that even if a posthuman condition is both theoretically possible and practically feasible, humanity might decide not to pursue technological development beyond a certain level. One could imagine systems, institutions, or attitudes emerging which would have the effect of blocking further development, whether by design or as an unintended consequence. Yet an explanation rooted in unwillingness for technological advancement would have to overcome several challenges. First, how does enough unwillingness arise to overcome what at present appears like an inexorable process of technological innovation and scientific research? Second, how does a decision to relinquish development get implemented globally in a way that leaves no country and no underground movement able to continue technological research? Third, how does the policy of relinquishment avoid being overturned, even on timescales extending over tens of thousands of years and beyond? Relinquishment would have to be global and permanent in order to account for a trajectory like one of those represented in figure 2. A fourth difficulty emerges out of the three already mentioned: the explanation for how the aversion to technological advancement arises, how it gets universally implemented, and how it attains permanence would have to avoid postulating causes that in themselves would usher in a posthuman condition. For example, if the explanation postulated that powerful new mind-control technologies would be deployed globally to change people's motivation, or that an intensive global surveillance system would be put in place and used to manipulate the direction of human development along a predetermined path, one would have to wonder whether these interventions, or their knock-on effects on society, culture, and politics, would not themselves alter the human condition in sufficiently fundamental ways that the resulting condition would qualify as posthuman.
To argue that stasis and plateau are relatively unlikely scenarios is not inconsistent with maintaining that some aspects of the human condition will remain unchanged. For example, Francis Fukuyama argued in The End of History and the Last Man that "the endpoint of mankind's ideological evolution" has essentially been reached with the end of the Cold War.56 Fukuyama suggested that Western liberal democracy is the final form of human government, and that while it would take some time for this ideology to become completely universalized, secular free-market democracy will in the long term become more and more prevalent. In his more recent book Our Posthuman Future, he adds an important qualification to his earlier thesis, namely that direct technological modification of human nature could undermine the foundations of liberal democracy.57 But be that as it may, the thesis that liberal democracy (or any other political structure) is the final form of government is consistent with the thesis that the general condition for intelligent Earth-originating life will not remain a human condition for the indefinite future.
An explication of what has been referred to as a "posthuman condition" is overdue. In this paper, the term is used to refer to a condition which has at least one of the following characteristics:
This definition's vagueness and arbitrariness may perhaps be excused on grounds that the rest of this paper is at least equally schematic. In contrast to some other explications of posthumanity, the one above does not require direct modification of human nature.58 This is because the relevant concept for the present discussion is that of a level of technological or economic development that would involve a radical change in the human condition, whether the change was wrought by biological enhancement or other causes.
Figure 3: A singularity scenario, and a more incremental ascent into a posthuman condition.
The two dashed lines in figure 3 differ in steepness. One of them depicts slow gradual growth that in the fullness of time rises into the posthuman level and beyond. The other depicts a period of extremely rapid growth in which humanity abruptly transitions into a posthuman condition. This latter possibility can be referred to as the singularity hypothesis.59 Proponents of the singularity hypothesis usually believe not only that a period of extremely rapid technological development will usher in posthumanity suddenly, but also that this transition will take place soon, within a few decades. Logically, these two contentions are quite distinct.
Technoself – Wikipedia, the free encyclopedia
Posted: August 10, 2015 at 5:40 pm
Technoself studies, commonly referred to as TSS, is an emerging, interdisciplinary domain of scholarly research dealing with all aspects of human identity in a technological society,[1] focusing on the changing nature of relationships between the human and technology. As new and constantly changing experiences of human identity emerge due to constant technological change, technoself studies seeks to map and analyze these mutually influential developments with a focus on identity rather than technical developments. The self is therefore a key concept of TSS. The term "technoself," advanced by Luppicini (2013), broadly denotes evolving human identity as a result of the adoption of new technology, while avoiding ideological or philosophical biases inherent in other related terms including cyborg, posthuman, transhuman, techno-human, beman (also known as bio-electric human), digital identity, avatar, and homo technicus, though Luppicini acknowledges that these categories "capture important aspects of human identity".[2] Technoself is further elaborated and explored in Luppicini's Handbook of Research on Technoself: Identity in a Technological Society.
Technoself studies evolved from early groundwork in identity studies, philosophy of mind, and cognitive science.[1] René Descartes is often credited as one of the first identity theorists of modernity to question the material world and the certainty of knowledge from the self. Despite heavy criticism, the question he posed regarding the necessary relation between the mind and body is still a prevalent theme in contemporary discussions of identity and technology.[3] Another major development in identity studies came from early social psychology, sociology, and psychoanalysis. Beginning with Freud, the psychoanalytic tradition shed some light on the dynamics of identity and personality development. Erving Goffman expanded the inquiry of identity with his dramaturgical theory, which emphasized the centrality of the social realm and the notion of self-presentation to identity. Later, Foucault further expanded the area of inquiry by contemplating how technologies could facilitate the emergence of new ways of relating to oneself.[4]
The most entrenched area of technoself studies revolves around ontological considerations and conceptualizations of the technoself.[1] The effort to identify the essence of human being is frequent in philosophical circles and is entrenched within emerging theoretical scholarship on the technoself.[1] DeGrazia's (2005) examination of numerical and narrative identity sheds light on the ethics of human enhancement. According to DeGrazia, human identity is divided into two parts: 1) numerical identity (the continuity of an individual as the same object over time or across procedures), and 2) narrative identity (the changes in self-perception experienced by an individual over time).[5] By dividing human identity into two parts, DeGrazia facilitates a discussion on the ethics of human enhancements.[5] Meanwhile, Croon Fors's (2012) research on the entanglement of the self and digitalization has helped frame ontological considerations related to the conceptualization of technoself studies.[1][6] Furthermore, the changing nature of identity is a common theme within technoself studies.[1] This has led scholars to analyze questions such as: How are advances in sensing technologies, biometrics, and genetics changing the way we define and recognize identity? How are technologies changing the way people define themselves and present themselves in society? These questions are being heavily analyzed as the conceptualization of identity changes rapidly.
Central to the understanding of the development of technoself studies as a field of research is the idea that human identity is shaped by the adoption of new technologies and by the relationship between humans and technology. Advancements in digital technology have recently forced researchers to consider the conception of the self in relation to society's increasing reliance on technologies in daily tasks in people's personal and professional lives:[1] cellphones, tablets, social media, and so on. New technologies, particularly computer-mediated communication tools, have raised questions related to identity concerning privacy issues, virtual identity boundaries, online fraud, citizen surveillance, etc. These issues arise as our perspective on technology shifts from one of functionality to one of interaction. According to John Lester, in the future "we won't simply enjoy using our tools, we will come to care for them."[1][7]
A cyborg (cybernetic organism) is an individual with both biological and artificial parts. Cyborgs are often described as half-human, half-machine organisms because they are permanently coupled with technology. The term, coined in 1960 by Manfred Clynes, refers to and acknowledges those beings whose abilities have been enhanced by the presence and advancement of technology. The notion of the cyborg has played a part in breaking down boundaries between humans and non-humans living within a technologically advanced society: examples include those who have pacemakers, hearing aids, artificial body parts, cochlear implants, and other technologies that enhance an organism's capacity to perform, either physically or mentally.[1] Hugh Herr, an American rock climber, engineer, and biophysicist, has successfully invented a next generation of cyborg technology (bionic limbs and robotic prosthetics).[8] As head of the Biomechatronics group at the MIT Media Lab, he first presented his team's achievements in the TED talk "Hugh Herr: The new bionics that let us run, climb and dance."
Transhuman is a concept that emerged from the transhumanist movement, which is centred around the notion of improving the abilities of human beings mainly through scientific and technical means. Unlike the posthuman concept, the notion of the transhuman is based on human augmentation but does not commit itself to positing a new, separate species.[1] The philosophy of transhumanism was developed in the 1990s by British philosopher Max More, who articulated the principles of transhumanism as a futurist philosophy. The transhumanist philosophy has, however, been subject to scrutiny by prominent scholars such as Francis Fukuyama.
Posthuman is a concept that signifies and characterizes a new and enhanced type of being, one whose capabilities drastically exceed those that presently define human beings. This posthuman state of identity is seen as resulting mainly from the advancement of technology. According to Luppicini, posthuman capabilities "suggest a new type of being over and above human. This compromises the neutrality needed for a clear conception of human identity in the face of human-technological integration." The concept aims at enabling a better perception of the world through various viewpoints.[1]
Homo technicus is a term "first coined by Galvin in 2003 to help refine the definition of human beings to more accurately reflect the evolving condition of human beings intertwined within advancing technological society".[9] It refers to the notion that human beings are technological by nature and evolve simultaneously with technology. Galvin states in his article "On Technoethics": "mankind cannot do away with the technical dimension, going even to the point of considering this part of its constitution: mankind is technical by nature. Technology is not an addition to man but is, in fact, one of the ways in which mankind distinguishes itself from animals."[10] Luppicini builds upon the concept of homo technicus in his Handbook of Research on Technoself. Luppicini holds that the notion of homo technicus contributes to the conception of humans as technoselves in two ways: first, it helps to solidify the idea of technology as a key component in defining humans and society; second, it demonstrates the importance of technology as a human creation that aligns with human values.[9] He further goes on to explain that human interactions with the surrounding material world help to create meaning, and that this unique way of creating meaning has had an impact on how we have evolved as a species.
Beman, also known as bio-electric human: a robot in the form of a human.
Avatars represent the individual, the individual's alter ego, or character(s) within virtual environments controlled by a human user. Avatars provide a unique opportunity to experiment with one's identity construction within virtual worlds (Turkle, 1995), and to do so with others. Examples of avatars include personas in online games or virtual life simulations such as Second Life.
POST HUMAN EXHIBIT CATALOG ESSAY 1992-93 Jeffrey Deitch
Posted: July 10, 2015 at 7:40 am
On most people's beauty scale, Stacey Stetler would be a 10. A blond, blue-eyed, 5-foot-11 New York model, she has confidently sashayed down the runway for Yves Saint Laurent in Paris and has graced the covers of fashion magazines. But until recently, when Ms. Stetler looked in the mirror she saw less perfection and more flaws. "I was flat-chested," Ms. Stetler said. "You couldn't tell if I was coming or going. My back protruded almost as much as my front." Ms. Stetler enhanced her boyish figure by having breast implants. She is not alone.
The New York Times, 6 February 1992, front page
Stories about breast implants, crash diets, and mood drugs have moved from the health and beauty page to the front page. The public has been galvanized by explosive testimony about sexual harassment and by the sensational rape trials of public figures. Questions about the new boundaries of appropriate interpersonal behavior are attracting unprecedented interest. There is a growing sense that we should take control over our bodies and our social circumstances rather than just accepting what we inherited.
Social and scientific trends are converging to shape a new conception of the self, a new construction of what it means to be a human being. The matter-of-fact acceptance of one's "natural" looks and one's "natural" personality is being replaced by a growing sense that it is normal to reinvent oneself. The Freudian model of the "psychological person" is dissolving into a new model that encourages individuals to dispense with the anguished analysis of how subconscious childhood experiences molded their behavior. There is a new sense that one can simply construct the new self that one wants, freed from the constraints of one's past and one's inherited genetic code.
Human evolution may be entering a new phase that Charles Darwin never would have envisioned. The potential of genetic reconstitution may be quickly propelling us beyond Darwinian natural evolution and into a bold realm of artificial evolution. Our society will soon have access to the biotechnology that will allow us to make direct choices about how we want our species to further evolve. This new techno-evolutionary phase will bring us far beyond eugenics. Our children's generation could very well be the last generation of "pure" humans.
This new sense of one's power to control and, if desired, reconstruct one's body has quickly developed a broad acceptance,1 but there is still a significant segment of society that is deeply disturbed by its implications. The bitter debate over abortion rights is an example of how explosive the controversy over the limits of "natural" life will become. The battle over the abortion issue and the outcry over euthanasia and the right to choose suicide may be just the beginning of an enormous social conflict over one's freedom to use the new biotechnology to take greater control over one's body and to enhance the course of one's life.
The issue of using genetic engineering to "improve" the fetus will potentially become much more highly charged than the controversy over abortion. It may not be an exaggeration to say that it will become the most difficult moral and social issue that the human species has ever faced. Genetic engineering is not just another life-enhancing technology like aviation or telecommunications. Its continued development and application may force us to redefine the parameters of life.
Our consciousness of the self will have to undergo a profound change as we continue to embrace the transforming advances in biological and communications technologies. A new conception of the self will inevitably take hold as ever more powerful body-altering techniques become commonplace. As radical plastic surgery, computer-chip brain implants, and gene-splicing become routine, the former structure of self will no longer correspond to the new structure of the body. A new post-human organization of personality will develop that reflects people's adaptation to this new technology and its socioeconomic effects.
New approaches to self-realization are generally paralleled by new approaches to art. With each successive transformation of the social environment, great artists have both reflected and helped to define the new personality models that have developed out of society's absorption of technological, political, and social change. Looking back through the history of art, we can see how artists have portrayed the changes in models of self-realization that have accompanied profound changes in the social environment.
zizek/post-human – lacan
Les Particules élémentaires, Michel Houellebecq's bestseller from 1998 which triggered a large debate all around Europe, and now finally available in English, is the story of radical DESUBLIMATION, if there ever was one. Bruno, a high-school teacher, is an undersexed hedonist, while Michel, his half-brother, is a brilliant but emotionally desiccated biochemist. Abandoned by their hippie mother when they were small, neither has ever properly recovered; all their attempts at the pursuit of happiness, whether through marriage, the study of philosophy, or the consumption of pornography, merely lead to loneliness and frustration. Bruno ends up in a psychiatric asylum after confronting the meaninglessness of permissive sexuality (the utterly depressive descriptions of the sexual orgies between forty-somethings are among the most excruciating readings in contemporary literature), while Michel invents a solution: a new self-replicating gene for the post-human desexualized entity. The novel ends with a prophetic vision: in 2040, humanity collectively decides to replace itself with genetically modified asexual humanoids in order to avoid the deadlock of sexuality - these humanoids experience no passions proper, no intense self-assertion that can lead to destructive rage.
Almost four decades ago, Michel Foucault dismissed "man" as a figure in the sand that is now being washed away, introducing the (then) fashionable topic of the "death of man." Although Houellebecq stages this disappearance in much more naive literal terms, as the replacement of humanity with a new post-human species, there is a common denominator between the two: the disappearance of sexual difference. In his last works, Foucault envisioned the space of pleasures liberated from Sex, and one is tempted to claim that Houellebecq's post-human society of clones is the realization of the Foucauldian dream of the Selves who practice the "use of pleasures." While this solution is the fantasy at its purest, the deadlock to which it reacts is a real one: in our postmodern "disenchanted" permissive world, the unconstrained sexuality is reduced to an apathetic participation in collective orgies depicted in Les particules - the constitutive impasse of the sexual relationship (Jacques Lacan's il n'y a pas de rapport sexuel) seems to reach here its devastating apex.
We all know of Alan Turing's famous "imitation game," which is meant to serve as a test of whether a machine can think: we communicate with two computer interfaces, asking them any imaginable question; behind one of the interfaces there is a human person typing the answers, while behind the other there is a machine. If, based on the answers we get, we cannot tell the intelligent machine from the intelligent human, then, according to Turing, our failure proves that machines can think. - What is a little less known is that in its first formulation, the issue was not to distinguish the human from the machine, but man from woman. Why this strange displacement from sexual difference to the difference between human and machine? Was this due to Turing's simple eccentricity (recall his well-known troubles because of his homosexuality)? According to some interpreters, the point is to oppose the two experiments: a successful imitation of a woman's responses by a man (or vice versa) would not prove anything, because gender identity does not depend on sequences of symbols, while a successful imitation of a man by a machine would prove that this machine thinks, because "thinking" ultimately is the proper way of sequencing symbols... What if, however, the solution to this enigma is much more simple and radical? What if sexual difference is not simply a biological fact, but the Real of an antagonism that defines humanity, so that once sexual difference is abolished, a human being effectively becomes indistinguishable from a machine?
Perhaps the best way to specify this role of sexual love is through the notion of reflexivity as "the movement whereby that which has been used to generate a system is made, through a changed perspective, to become part of the system it generates."1 This appearance of the generating movement within the generated system as a rule takes the form of its opposite; say, in the later stage of a revolutionary process, when the Revolution starts to devour its own children, the political agent which effectively set the process in motion is relegated to the role of its main obstacle, of the waverers or outright traitors who are not ready to follow the revolutionary logic to its conclusion. Along the same lines, is it not that, once the socio-symbolic order is fully established, the very dimension which introduced the "transcendent" attitude that defines a human being, namely SEXUALITY, the uniquely human sexual passion, appears as its very opposite, as the main OBSTACLE to the elevation of a human being to pure spirituality, as that which ties him/her down to the inertia of bodily existence? For this reason, the end of sexuality in the much celebrated "posthuman" self-cloning entity expected to emerge soon, far from opening up the way to pure spirituality, will simultaneously signal the end of what is traditionally designated as the uniquely human spiritual transcendence. All the celebrating of the new "enhanced" possibilities of sexual life that Virtual Reality offers cannot conceal the fact that, once cloning supplements sexual difference, the game is over.
And, incidentally, with all the focus on the new experiences of pleasure that lie ahead with the development of Virtual Reality, direct neuronal implants, etc., what about new "enhanced" possibilities of TORTURE? Do biogenetics and Virtual Reality combined not open up new and unheard-of horizons for extending our ability to endure pain (by widening our sensory capacity to sustain pain, by inventing new forms of inflicting it)? Perhaps the ultimate Sadean image of an "undead" victim of torture, who can sustain endless pain without having at his/her disposal the escape into death, also waits to become reality. Perhaps, in a decade or two, our most horrifying cases of torture (say, what they did to the Chief-of-Staff of the Dominican Army after the failed coup in which the dictator Trujillo was killed: sewing his eyes shut so that he could not see his torturers, and then for four months slowly cutting off parts of his body in the most painful ways, such as using blunt scissors to detach his genitals) will appear as naive children's games.
The paradox - or, rather, the antinomy - of cyberspace reason concerns precisely the fate of the body. Even advocates of cyberspace warn us that we should not totally forget our body, that we should maintain our anchoring in "real life" by returning, regularly, from our immersion in cyberspace to the intense experience of our body, from sex to jogging. We will never turn ourselves into virtual entities freely floating from one virtual universe to another: our "real life" body and its mortality is the ultimate horizon of our existence, the ultimate, innermost impossibility that underpins the immersion in all possible multiple virtual universes. Yet, at the same time, in cyberspace the body returns with a vengeance: in popular perception, "cyberspace IS hardcore pornography," i.e. hardcore pornography is perceived as the predominant use of cyberspace. The literal "enlightenment," the "lightness of being," the relief/alleviation we feel when we freely float in cyberspace (or, even more, in Virtual Reality), is not the experience of being bodyless, but the experience of possessing another - aetheric, virtual, weightless - body, a body which does not confine us to inert materiality and finitude, an angelic spectral body, a body which can be artificially recreated and manipulated. Cyberspace thus designates a turn, a kind of "negation of negation," in the gradual progress towards the disembodying of our experience (first writing instead of "living" speech, then the press, then the mass media, then radio, then TV): in cyberspace, we return to bodily immediacy, but to an uncanny, virtual immediacy. In this sense, the claim that cyberspace contains a Gnostic dimension is fully justified: the most concise definition of Gnosticism is precisely that it is a kind of spiritualized materialism: its topic is not directly the higher, purely notional, reality, but a "higher" BODILY reality, a proto-reality of shadowy ghosts and undead entities.
This notion that we are entering a new era in which humanity will leave behind the inertia of material bodies was nicely rendered by Konrad Lorenz's somewhat ambiguous remark that we ourselves (the "actually existing" humanity) are the sought-after "missing link" between animal and man. Of course, the first association that imposes itself here is the notion that the "actually existing" humanity still dwells in what Marx designated as "pre-history," and that true human history will begin with the advent of Communist society; or, in Nietzsche's terms, that man is just a bridge, a passage between animal and overman. What Lorenz "meant" was undoubtedly situated along these lines, although with a more humanistic twist: humanity is still immature and barbarian; it has not yet reached full wisdom. However, an opposite reading also imposes itself: the human being IS in its very essence a "passage," the finite opening into an abyss.
The ongoing decoding of the human body, the prospect of the formulation of each individual's genome, confronts us in a pressing way with the radical question of "what we are": am I that, the code that can be compressed onto a single CD? Are we "nobody and nothing," just an illusion of self-awareness whose only reality is the complex interacting network of neuronal and other links? The uncanny feeling generated by playing with toys like the Tamagotchi concerns the fact that we treat a virtual non-entity as an entity: we act "as if" (we believe that) there is, behind the screen, a real Self, an animal reacting to our signals, although we know well that there is nothing and nobody "behind," just the digital circuitry. However, what is even more disturbing is the implicit reflexive reversal of this insight: if there is effectively no one out there, behind the screen, what if the same goes for myself? What if the "I," my self-awareness, is also merely a superficial "screen" behind which there is only a "blind" complex neuronal circuit?2 Or, to make the same point from a different perspective: why are people so afraid of an air crash? It is not the physical pain as such - what causes such horror are the two or three minutes while the plane is falling and one is fully aware that one will die shortly. Does genome identification not transpose all of us into a similar situation? That is to say, the uncanny aspect of genome identification concerns the temporal gap which separates the knowledge about what causes a certain disease from the development of the technical means to intervene and prevent this disease from evolving - the period of time in which we shall know for sure that, say, we are about to get a dangerous cancer, but will be unable to do anything to prevent it. And what about "objectively" reading our IQ or the genetic basis of other intellectual capacities? How will the awareness of this total self-objectivization affect our self-experience?
The standard answer (the knowledge of our genome will enable us to intervene in our genome and change our psychic and bodily properties for the better) still begs the crucial question: if the self-objectivization is complete, who is the "I" who intervenes in "its own" genetic code in order to change it? Is this intervention itself not already objectivized in the totally scanned brain?
The "closure" anticipated by the prospect of the total scanning of the human brain does not reside only in the full correlation between the scanned neuronal activity in our brain and our subjective experience (so that a scientist will be able to give an impulse to our brain and then predict to what subjective experience this impulse will give rise), but in the much more radical notion of bypassing subjective experience altogether: what will be identified through scanning will be DIRECTLY our subjective experience, so that the scientist will not even have to ask us what we experience - he will be able to READ IMMEDIATELY on his screen what we experience. (There is a further piece of evidence which points in the same direction: a couple of milliseconds before a human subject "freely" decides in a situation of choice, scanners can detect the change in the brain's chemical processes which indicates that the decision has already been taken - even when we make a free decision, our consciousness seems merely to register an anterior chemical process... The psychoanalytic-Schellingian answer to this is to locate freedom (of choice) at the unconscious level: the true acts of freedom are choices/decisions which we make while unaware of making them - we never decide (in the present tense); all of a sudden, we simply take note of how we have already decided.) On the other hand, one can argue that such a dystopian prospect involves the loop of a petitio principii: it silently presupposes that the same old Self which phenomenologically relies on the gap between "myself" and the objects "out there" will continue to be here after the completed self-objectivization.
Original post:
zizek/post-human - lacan