The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard de Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Daily Archives: December 2, 2016
Ayn Rand – The New York Times
Posted: December 2, 2016 at 12:36 pm
Ayn Rand's two most famous novels "The Fountainhead" (1943) and "Atlas Shrugged" (1957) are among the greatest word-of-mouth hits in American publishing. Both were scorned by the critics when they came out, went on to become enormous best-sellers, and to this day sell tens of thousands of copies annually. "Atlas Shrugged," Rand's magnum opus, is sometimes said to be the second-most influential book in American thought, next only to the Bible.
The reason for the books' success probably has less to do with their novelistic merits, or lack of them, than with the way they package in fictional form a philosophy Rand called Objectivism, which in effect turned the Judeo-Christian system on its head. In Rand's view, selfishness was good and altruism was evil, and the welfare of society was always subordinate to the self-interest of individuals, especially superior ones. In some ways, Objectivism is an extreme form of laissez-faire capitalism, a view that Rand came to naturally.
She was born in Russia in 1905, lived through the Russian Revolution, and by the time she emigrated to America, in 1926, determined to reinvent herself, she wanted no part of anything that resembled a state-run system. She sometimes wore a gold brooch shaped like a dollar sign, and the dollar sign is also the final image in "Atlas Shrugged," a novel in which liberals and humanitarians are ruinously taking over the world while the intellectual elite, led by the genius industrialist John Galt, hunker down in Colorado.
For a while in the '60s, Objectivism had almost cult status on some American campuses. Much of the fervor dwindled after Rand's death in 1982, but the books continue to be rediscovered and passed from one initiate to another. Among the many people influenced by Rand are Camille Paglia, Hugh Hefner, Alan Greenspan and Angelina Jolie. -- Charles McGrath, Sept. 13, 2007.
Posted in Ayn Rand
Comments Off on Ayn Rand – The New York Times
Ayn Rand Student Conference 2016
Posted: at 12:36 pm
It's not uncommon to hear that free will is an illusion, that belief in free will is incompatible with science.
Yet, the existence of free will lies at the heart of every important issue in your life. Understanding precisely what is and is not within the power of your free choice is crucial to your pursuit of knowledge, values, personal relationships and happiness.
Join us November 4 to 6 in Atlanta, GA, at the Ayn Rand Student Conference 2016 (#AynRandCon) for an in-depth exploration of the concept of free will from the perspective of Ayn Rand's philosophy of Objectivism. Rand, the novelist, philosopher and cultural icon famous for her bestselling novels The Fountainhead and Atlas Shrugged, developed a new account of free will, one that underpins the distinctive view of good and evil and of heroism that runs through her novels.
Rejecting the false alternative of nature vs. nurture, Rand advanced a radical view of man, which holds that you are a "being of self-made soul," capable of exercising fundamental control over your own thinking, actions and character. Far from viewing belief in free will as a superstition incompatible with science, Rand argued that the facts support the existence of free will and that it's unscientific, as well as disastrous personally and culturally, to dismiss free will as illusory.
At #AynRandCon you'll hear leading experts on Rand's philosophy discuss the nature of free will and its implications for your life and for a range of current controversies, from inequality to free speech to U.S. foreign policy in the Middle East. You'll hear from practitioners inspired by Rand's message to take control of their fates and build the kind of career and life they wanted. You'll meet other students who love Rand's novels and are learning how to apply her ideas to their own lives. And you'll have the chance to network with speakers, professionals and students.
The conference is brought to you by the Ayn Rand Institute in collaboration with STRIVE (STudents for Reason, Individualism, Value pursuit, and Enterprise) and is made possible by the generous support of the Michael and Andrea Leven Family Foundation, as well as by the support of the Charles Koch Foundation, Ellen and Harris Kenner, Chris J. Rufer, and Loren and Kathy Corle, RELCO LLC.
Thanks to these donors, students are able to attend this conference at little or no cost. All students will receive a scholarship covering their travel, lodging and registration expenses.
Apply to attend by October 10, 2016!
Posted in Ayn Rand
Comments Off on Ayn Rand Student Conference 2016
Let’s Turn Nauru Into Transtopia – blogspot.com
Posted: at 12:33 pm
Here's an off-the-wall idea that has some appeal to me ... as a long-time Transtopian fantasist and world traveler....
The desert island nation of Nauru needs money badly, and has a population of less than 15,000.
There are problems with water supply, but they could surely be solved with some technical ingenuity.
The land area is about 8 square miles. But it could be expanded! Surely it's easier to extend an island with concrete platforms or anchored floating platforms of some other kind, than to seastead in the open ocean.
The country is a democracy. Currently it may not be possible to immigrate there except as a temporary tourist or business visitor. But I'd bet this could be made negotiable.
Suppose 15,000 adult transhumanists (along with some kids, one would assume) decided to emigrate to Nauru en masse over a 5-year period, on condition they could obtain full citizenship. Perhaps this could be negotiated with the Nauruan government.
Then after 5 years we would have a democracy in which transhumanists were the majority.
Isn't this the easiest way to create a transhumanist nation? With all the amazing future possibilities that that implies?
This would genuinely be of benefit to the residents of Nauru, which now has 90% unemployment. Unemployment would be reduced close to zero, and the economy would be tremendously enlarged. A win-win situation. Transhumanists would get freedom, and Nauruans would get a first-world economy.
Considerable infrastructure would need to be built. A deal would need to be struck with the government, in which, roughly,
To ensure employment of the relocated transhumanists, we would need to get a number of companies to agree to open Nauru offices. But this would likely be tractable, given the preference of firms to have offices in major tech centers. Living expenses in Nauru would be much lower than in, say, Silicon Valley, so expenses would be lower.
Tourism could become a major income stream, given the high density of interesting people which would make Nauru into a cultural mecca. Currently there is only one small beach on Nauru (which is said to be somewhat dirty), but creation of a beautiful artificial beach on the real ocean is not a huge technological feat.
It would also be a great place to experiment with aquaculture and vertical farming.
What say you? Let's do it!
P.S.
Other candidates for the tropical island Transtopia besides Nauru would be Tuvalu and Kiribati; but Kiribati's population is much larger, and Tuvalu is spread among many islands, and is also about to be submerged due to global warming. So Nauru would seem the number one option. Though Tuvalu could be an interesting possibility also, especially if we offered to keep the island above water by building concrete platforms or some such (a big undertaking, but much easier than seasteading). This would obviously be a major selling point to the government.
Posted in Transtopian
Comments Off on Let’s Turn Nauru Into Transtopia – blogspot.com
Ascension of Jesus – Wikipedia
Posted: at 12:32 pm
The Ascension of Jesus (anglicized from the Vulgate Latin Acts 1:9-11 section title: Ascensio Iesu) is the departure of Christ from Earth into the presence of God. In the well-known narrative in Acts 1, it takes place 40 days after the Resurrection: Jesus, in the company of the disciples, is taken up in their sight after warning them to remain in Jerusalem until the coming of the Holy Spirit; as he ascends, a cloud hides him from their view, and two men in white appear to tell them that he will return "in the same way you have seen him go into heaven."
Heavenly ascents were fairly common in the time of Jesus, signifying divine approval or the deification of an exceptional man. In the Christian tradition, reflected in the major Christian creeds and confessional statements, the ascension is connected with the exaltation of Jesus, meaning that through his ascension Jesus took his seat at the right hand of God: "He ascended into heaven, and is seated at the right hand of God the Father almighty." The Feast of the Ascension is celebrated on the 40th day of Easter, always a Thursday; the Orthodox tradition has a different calendar, up to a month later than in the Western tradition, and while the Anglican communion continues to observe the feast, most Protestant churches have abandoned it. The Ascension of Jesus is an important theme in Christian art, the ascending Jesus often shown blessing an earthly group below him to signify his blessing of the entire Church.
The world of the Ascension is a three-part universe with the heavens above, a flat earth centered on Jerusalem in the middle, and the underworld below. Heaven was separated from the earth by the firmament, the visible sky, a solid inverted bowl where God's throne sat "on the vaulted roof of earth" (Isaiah 40:22). Humans looking up from earth saw the floor of heaven, made of clear blue lapis-lazuli (Exodus 24:9-10), as was God's throne (Ezekiel 1:26).
Heavenly ascents were fairly common in the time of Jesus, signifying the means whereby a prophet could attain access to divine secrets, or divine approval granted to an exceptionally righteous individual, or the deification of an exceptional man. Figures familiar to Jews would have included Enoch (from the Book of Genesis and a popular non-Biblical work called 1 Enoch), the 5th century sage Ezra, Baruch the companion of the prophet Jeremiah (from a work called 2 Baruch, in which Baruch is promised he will ascend to heaven after 40 days), Levi the ancestor of priests, the Teacher of Righteousness from the Qumran community, as well as Elijah and Moses, who was deified on entering heaven, and the children of Job, who according to the Testament of Job ascended heaven following their resurrection from the dead. Non-Jewish readers would have been familiar with the case of the emperor Augustus, whose ascent was witnessed by Senators, Romulus the founder of Rome, who, like Jesus, was taken to heaven in a cloud, the Greek hero Heracles (Hercules), and many others.
There is a broad consensus among scholars that the brief Ascension account in the Gospel of Mark is a later addition to the original version of that gospel. Luke-Acts, a single work from the same anonymous author, provides the only detailed account of the Ascension. Luke 24 tells how Jesus leads the eleven disciples to Bethany, a village on the Mount of Olives not far from Jerusalem, where he instructs them to remain in Jerusalem until the coming of the Holy Spirit and blesses them. "And it came to pass, while he blessed them, he parted from them, and was carried up into heaven. And they worshiped him, and returned to Jerusalem with great joy."
Acts 1 describes a meal on the Mount of Olives, where Jesus commands the disciples to await the coming of the Holy Spirit, a cloud takes him upward from sight, and two men in white appear to tell them (the disciples) that he will return "in the same way you have seen him go into heaven." Luke and Acts appear to describe the same event, but present quite different chronologies, Luke placing it on the same day as the Resurrection and Acts forty days afterwards;[20] various proposals have been put forward to resolve the contradiction, but the question remains open.
The Gospel of John has three references to ascension in Jesus' own words: "No one has ascended into heaven but he who descended from heaven, the son of man" (John 3:13); "What if you (the disciples) were to see the son of man ascending where he was before?" (John 6:62); and to Mary Magdalene after his Resurrection, "Do not hold me, for I have not yet ascended to my father..." (John 20:17). In the first and second Jesus is claiming to be the apocalyptic "one like a son of man" of Daniel 7; the last has mystified commentators: why should Mary be prohibited from touching the risen but not yet ascended Christ, while Thomas is later invited to do so?
Various epistles (Romans 8:34, Ephesians 1:19-20, Colossians 3:1, Philippians 2:9-11, 1 Timothy 3:16, and 1 Peter 3:21-22) also refer to an Ascension, seeming, like Luke-Acts and John, to equate it with the post-resurrection "exaltation" of Jesus to the right hand of God.
The common thread linking all the New Testament Ascension references, reflected in the major Christian creeds and confessional statements, is the exaltation of Jesus, meaning that through his ascension Jesus took his seat at the right hand of God in Heaven: "He ascended into heaven, and is seated at the right hand of God the Father almighty." It is interpreted more broadly as the culmination of the Mystery of the Incarnation, marking the completion of Jesus' physical presence among his apostles and consummating the union of God and man, as expressed in the Second Helvetic Confession:
Despite this, the Ascension itself has become an embarrassment. As expressed in a famous statement by theologian Rudolf Bultmann in his essay The New Testament and Mythology: "We no longer believe in the three-storied universe which the creeds take for granted... No one who is old enough to think for himself supposes that God lives in a local heaven ... And if this is so, the story of Christ's ... ascension into heaven is done with." Modern theologians have therefore de-mythologised their theology, abandoning a God who sits enthroned above Jerusalem for a heaven which is "the endless, self-sustaining life of God" and the Ascension "an emblem in space and time of God's eternal life."
The Feast of the Ascension is one of the ecumenical (i.e., universally celebrated) feasts of the Christian liturgical year, along with the Passion, Easter, and Pentecost. Ascension Day is traditionally celebrated on the sixth Thursday after Easter Sunday, the fortieth day from Easter day, although some Roman Catholic provinces have moved the observance to the following Sunday to facilitate the obligation to attend Mass. Saint Jerome held that it was of Apostolic origin, but in fact the Ascension was originally part of Pentecost (the coming of the Holy Spirit), and developed as a separate celebration only slowly from the late 4th century onward. In the Catholic tradition it begins with a three-day "rogation" to ask for God's mercy, and the feast itself includes a procession of torches and banners symbolising Christ's journey to the Mount of Olives and entry into heaven, the extinguishing of the Paschal candle, and an all-night vigil; white is the liturgical colour. The Orthodox tradition has a slightly different calendar, up to a month later than in the Western tradition; the Anglican communion continues to observe the feast, but most Protestant churches have abandoned the traditional Christian calendar of feasts.
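Since the feast is fixed relative to Easter, its date can be computed: Easter Sunday via the standard Anonymous Gregorian ("Meeus/Jones/Butcher") computus, then 39 days forward gives the sixth Thursday, i.e. the fortieth day counting Easter itself as day one. A minimal sketch:

```python
from datetime import date, timedelta

def easter(year):
    """Gregorian Easter Sunday (Anonymous/Meeus-Jones-Butcher computus)."""
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return date(year, month, day + 1)

def ascension(year):
    """Ascension Day: the 40th day of Easter, Easter Sunday counted as day 1."""
    return easter(year) + timedelta(days=39)

print(easter(2016), ascension(2016))  # 2016-03-27 2016-05-05 (a Thursday)
```

The 39-day offset (not 40) reflects the inclusive counting used by the liturgical calendar; the same arithmetic places Pentecost at Easter plus 49 days.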
The Ascension has been a frequent subject in Christian art. By the 6th century the iconography of the Ascension had been established and by the 9th century Ascension scenes were being depicted on domes of churches. The Rabbula Gospels (c. 586) include some of the earliest images of the Ascension. Many ascension scenes have two parts, an upper (Heavenly) part and a lower (earthly) part. The ascending Christ may be carrying a resurrection banner or make a sign of benediction with his right hand. The blessing gesture by Christ with his right hand is directed towards the earthly group below him and signifies that he is blessing the entire Church. In the left hand, he may be holding a Gospel or a scroll, signifying teaching and preaching.
The Eastern Orthodox portrayal of the Ascension is a major metaphor for the mystical nature of the Church. In many Eastern icons the Virgin Mary is placed at the center of the scene in the earthly part of the depiction, with her hands raised towards Heaven, often accompanied by various Apostles. The upwards-looking depiction of the earthly group matches the Eastern liturgy on the Feast of the Ascension: "Come, let us rise and turn our eyes and thoughts high..."
The traditional site of the Ascension is Mount Olivet (the "Mount of Olives"), on which the village of Bethany sits. Before the conversion of Constantine in 312 AD, early Christians honored the Ascension of Christ in a cave on the Mount, and by 384 the Ascension was venerated on the present site, uphill from the cave.[33]
Around the year 390 a wealthy Roman woman named Poimenia financed construction of the original church called "Eleona Basilica" (elaion in Greek means "olive garden", from elaia "olive tree", and has an oft-mentioned similarity to eleos meaning "mercy"). This church was destroyed by Sassanid Persians in 614. It was subsequently rebuilt, destroyed, and rebuilt again by the Crusaders. This final church was later destroyed by Muslims, leaving only a 12×12 meter octagonal structure (called a martyrium, "memorial", or "Edicule") that remains to this day.[34] The site was ultimately acquired by two emissaries of Saladin in the year 1198 and has remained in the possession of the Islamic Waqf of Jerusalem ever since.
The Chapel of the Ascension today is a Christian and Muslim holy site now believed to mark the place where Jesus ascended into heaven; in the small round church/mosque is a stone imprinted with the footprints of Jesus.[33] The Russian Orthodox Church also maintains a Convent of the Ascension on the top of the Mount of Olives.
Posted in Ascension
Comments Off on Ascension of Jesus – Wikipedia
Tile Map Service – Wikipedia
Posted: at 12:31 pm
Tile Map Service (TMS) is a specification for tiled web maps, developed by the Open Source Geospatial Foundation. The definition generally requires a URI structure which attempts to fulfill REST principles. The TMS protocol fills a gap between the very simple standard used by OpenStreetMap and the complexity of the Web Map Service standard, providing simple URLs to tiles while also supporting alternate spatial referencing systems.
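The practical difference between TMS and the simple OpenStreetMap ("XYZ") scheme is the tile-row origin: TMS counts rows from the bottom of the map, XYZ from the top, so converting between them flips the y index at each zoom level. A sketch of the addressing math (the base URL, layer name, and `1.0.0/{layer}/{z}/{x}/{y}` path layout here are illustrative assumptions, not a specific server's API):

```python
import math

def deg2xyz(lat, lon, zoom):
    """WGS84 lat/lon -> OSM-style (XYZ) tile indices on the Web Mercator grid."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

def xyz_to_tms_y(y, zoom):
    """Flip the row index: XYZ counts rows from the top, TMS from the bottom."""
    return (2 ** zoom) - 1 - y

def tms_tile_url(base, layer, zoom, x, y_xyz, ext="png"):
    """Build a TMS-style tile URL from XYZ indices (path layout is hypothetical)."""
    return f"{base}/1.0.0/{layer}/{zoom}/{x}/{xyz_to_tms_y(y_xyz, zoom)}.{ext}"

x, y = deg2xyz(51.5074, -0.1278, 10)  # central London at zoom 10 -> (511, 340)
print(tms_tile_url("https://tiles.example.org", "osm", 10, x, y))
```

The same y-flip is what lets clients like OpenLayers consume either scheme through URL templating.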
TMS is most widely supported by web mapping clients and servers; although there is some desktop support, the Web Map Service protocol is more widespread for enterprise mapping applications. The OpenLayers JavaScript library supports TMS natively, while the Google Maps API allows URL templating, which makes support possible for developers. TileCache is one of the most popular supporting servers, while other servers like mod_tile and TileLite focus on the de facto OpenStreetMap standard.
TMS served as the basis for the OpenGIS Web Map Tile Service (WMTS) OGC standard.[1]
Free software server implementations of the TMS specification:
Posted in Tms
Comments Off on Tile Map Service – Wikipedia
Neurotechnology and Society (2010–2060) – Lifeboat
Posted: at 12:31 pm
by Lifeboat Foundation Scientific Advisory Board member Zack Lynch.

Overview

Society shapes and is shaped by advancing technology. To illuminate the important societal implications of the NBIC (nano-bio-info-cogno) convergence, it is critical to place it within a broad historical context. History sharpens unique issues that require attention versus ones that have more obvious trajectories. By viewing history as a series of techno-economic waves with accompanying socio-political responses, it is possible to begin to understand how NBIC technologies will have an impact on society.

Waves of Techno-economic Change

Since the time of the Industrial Revolution there has been a relatively consistent pattern of 50-year waves of techno-economic change. We are currently nearing the end of the fifth wave of information technology diffusion, while a sixth wave is emerging with converging advancements across the NBIC (nano-bio-info-cogno) space, making possible neurotechnology, the set of tools that can influence the human central nervous system, especially the brain. Each wave consists of a new group of technologies that make it possible to solve problems once thought intractable.

The first wave, the water mechanization wave (1770–1830) in England, transformed productivity by replacing handcrafted production with water-powered machine-o-facture. The second wave (1820–1880), powered by a massive iron railroad build-out, accelerated the distribution of goods and services to distant markets. The electrification wave (1870–1920) made possible new metal alloys that created the foundation of the modern city. Skyscrapers, electric elevators, light bulbs, telephones, and subways were all a result of the new electricity infrastructure. At the same time, new techniques for producing inexpensive steel emerged, revamping the railroad systems and making large-scale construction projects possible.
The fourth wave (1910–1970), ushered in by inexpensive oil, motorized the industrial economy, making the inexpensive transportation of goods and services available to the masses. The most recent wave, the information technology wave (1960–2020), has made it possible to collect, analyze, and disseminate data, transforming our ability to track and respond to an ever-changing world. Driven by the microprocessor's capacity to compute and communicate data at increasingly exponential rates, the current wave is the primary generator of economic and social change today.

The nascent neurotechnology wave (2010–2060) is being accelerated by the development of nanobiochips and brain-imaging technologies that will make biological and neurological analysis accurate and inexpensive. Nanobiochips that can perform the basic bio-analysis functions (genomic, proteomic, biosimulation, and microfluidics) at a low cost will transform neurological analysis in a very similar fashion as the microprocessor did for data. Nano-imaging techniques will also play a vital role in making the analysis of neuro-molecular level events possible. When data from advanced biochips and brain imaging are combined, they will accelerate the development of neurotechnology, the set of tools that can influence the human central nervous system, especially the brain. Neurotechnology will be used for therapeutic ends and to enable people to consciously improve emotional stability, enhance cognitive clarity, and extend sensory experiences.

Techno-economic waves have pervasive effects throughout the economy and society. New low-cost inputs create new product sectors. They shift competitive behavior across the economy, as older sectors reinterpret how they create value. New low-cost inputs become driving sectors in their own right (e.g., canals, coal, electricity, oil, microchips, biochips).
When combined with complementary technologies, each new low-cost input stimulates the development of new sectors (e.g., cotton textiles, railroads, electric products, automobiles, computers, neurofinance). Technological waves, because they embody a major jump up in productivity, open up an unusually wide range of investment and profit opportunities, leading to sustained rates of economic growth.

Table 1. Six long waves of techno-economic development

- Mechanization (1770–1830). New inputs: canals, water power. Driving sectors: agriculture, cotton spinning. New sectors: iron tools, canal transportation.
- Railroadization (1820–1880). New inputs: coal, iron, steam power. Driving sectors: railroads, locomotives, machine tools. New sectors: steam shipping, telegraphy.
- Electrification (1870–1920). New inputs: electricity, steel, copper. Driving sectors: steel products, electricity. New sectors: construction, precision machine tools.
- Motorization (1910–1970). New inputs: oil. Driving sectors: automobile, oil refining. New sectors: aircraft, construction, services.
- Information (1960–2020). New inputs: microprocessor. Driving sectors: microchips, computers. New sectors: networking, global finance, e-commerce.
- Neurotechnology (2010–2060). New inputs: biochip, brain imaging, ??? Driving sectors: biotechnology, nanotechnology. New sectors: neuroceuticals, bio-education.

Neurotechnology

Like any new technology, neurotechnology represents both promises and problems. On the upside, neurotechnology represents new cures for mental illness, new opportunities for economic growth and a potential flowering of artistic expression. These benefits are countered by the potential use of neurotechnology for coercive purposes or its use as neuroweapons that can selectively erase memories. The diffusion of neurotechnology will have an impact on businesses, politics and human culture in the following ways:

New Industries: As brain imaging advances, neuromarketing will become a significant growth sector as the trillion-dollar-per-year advertising and marketing industries leverage brain-scanning technology to better understand how and why people react to different marketing campaigns.
Neurotechnology will also have an impact on education. As more people live longer and global competition intensifies, people will need to learn new skills throughout their lives. Regulated neuroceuticals represent the tools workers will use to succeed at continuous education. Adult neuroeducation will emerge as a significant industry, teaching individuals how to leverage neuroceuticals to acquire knowledge faster. Using cogniceuticals to increase memory retention, emoticeuticals to decrease stress, and sensoceuticals to add a meaningful pleasure gradient, neuroeducation will allow people to acquire and retain information faster. Imagine learning Arabic in one year rather than ten, or calculus in eight weeks.

New Products: For example, neuroceuticals that can temporarily improve different aspects of mental health will become possible. Unlike today's psychopharmaceuticals, neuroceuticals are neuromodulators that have high efficacy and negligible side effects. By being able to target multiple subreceptors in specific neural circuits, neuroceuticals will create the possibility for dynamic intracellular regulation of an individual's neurochemistry. Neuroceuticals will be used for therapy and improvement. They can be categorized into three broad groups: cogniceuticals, which focus on decision-making, learning, attention, and memory processes; emoticeuticals, which influence feelings, moods, motivation, and awareness; and sensoceuticals, which can restore and extend the capacity of our senses, allowing people to see, smell, taste, and hear in different ways.

Competitive Advantage: Mental health is the ultimate competitive weapon. Mental health underpins communication, creativity and employee productivity. Individuals who use neurotechnology to understand how their emotions affect their financial decisions will become more productive and will attain neurocompetitive advantage.
Neurotechnology-enabled traders will be equipped with emotional forecasting systems that provide them with real-time neurofeedback on their expected emotional bias for a given trade. To further reduce forecasting error, hormone-triggered emoticeuticals will keep traders from entering "hot states," where they are known to make less accurate decisions. While some countries may choose to ban them, performance-enabling neuroceuticals will emerge as significant productivity tools.

Public Policy: Neuroethicists are already confronting issues of brain privacy and cognitive liberty. As the competitive edge provided by neurotechnology becomes apparent, the ethical debate will evolve into a discussion of the right of individuals to use these new tools to improve themselves vs. uneven access to what others will describe as unfair performance improvement. In the legislative arena, the competitive necessity of using these new tools will cause great concern over whether or not they will be required in order to just compete in tomorrow's global economy.

Mental Health: Today, five of the ten leading causes of disability worldwide (major depression, schizophrenia, bipolar disorders, substance abuse, and obsessive-compulsive disorders) are mental issues. These problems are as relevant in developing countries as they are in rich ones. And all predictions point toward a dramatic increase in mental illnesses as people live longer. New treatments for mental disorders are driving neurotechnology's early development. By 2020, biochips will have radically altered the drug development process, reducing the time to develop new therapies from 15 to 2 years while slashing the cost of drug development from $800 million to $10 million. In addition, entirely novel ways to treat disease at the molecular level will extend life expectancy and improve mental health.
New Behaviors: Because our mental perspective slants our thinking, self-reflection and recollection of events, even a slight shift in human perception will alter how people learn, feel, and react to personal problems, economic crises, and cultural rhetoric. When humans can better control their emotions, how will this affect personal relationships, political opinion and cultural beliefs? When we can enhance memory recall and accelerate learning, how will this influence competitive advantage in the workplace? As we can safely extend our senses of sight, hearing and taste, what might this mean for artistic exploration and human happiness?

Patterns in the Location of Production: India and China will likely develop regional clusters of neurotechnology firms as political and cultural views on human testing create the necessary conditions for technological experimentation and development.

Conclusion

By viewing recent history as a series of techno-economic waves ushered in by a new low-cost input, it is possible to see that neurotechnology will lead to substantial economic, political, and social change. Building on advances in brain science and biotechnology, neurotechnology, the set of tools that influence the human brain, will allow people to experience life in ways that are currently unattainable. Neurotechnology will enable people to consciously improve emotional stability, enhance cognitive clarity, and extend sensory experiences. As people begin to experience life less constrained by their evolutionarily influenced brain chemistry, neurotechnology will give rise to a new type of human society: a post-industrial, post-informational neurosociety.
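The periodization in Table 1 can be captured as a small data structure to check the regularities the essay claims: each wave spans roughly 50-60 years, and successive waves overlap (the next begins a decade before the previous ends). The wave names and dates below are taken directly from the table; the checks themselves are just illustrative arithmetic.

```python
# The six techno-economic waves from Table 1 as (name, start, end) tuples.
waves = [
    ("Mechanization",   1770, 1830),
    ("Railroadization", 1820, 1880),
    ("Electrification", 1870, 1920),
    ("Motorization",    1910, 1970),
    ("Information",     1960, 2020),
    ("Neurotechnology", 2010, 2060),
]

# Each wave spans the "relatively consistent" 50-60 year period.
for name, start, end in waves:
    assert 50 <= end - start <= 60, name

# Successive waves overlap: each new wave emerges before the prior one ends.
for (_, _, prev_end), (_, next_start, _) in zip(waves, waves[1:]):
    assert next_start < prev_end

print(len(waves), "waves,", waves[-1][2] - waves[0][1], "years total")
```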
Posted in Neurotechnology
Comments Off on Neurotechnology and Society (2010–2060) – Lifeboat
Johns Hopkins University | Coursera
Posted: at 12:31 pm
Statistics for Genomic Data Science
Starts Dec 19, 2016
Introduction to Systematic Review and Meta-Analysis
Starts Nov 28, 2016
Principles of fMRI 2
Starts Nov 28, 2016
Systems Thinking In Public Health
Starts Dec 12, 2016
Advanced Linear Models for Data Science 1: Least Squares
Starts Nov 28, 2016
Introduction to Neurohacking In R
Starts Dec 12, 2016
Building Data Visualization Tools
Starts Dec 12, 2016
Design and Interpretation of Clinical Trials
Starts Dec 05, 2016
Ruby on Rails Web Services and Integration with MongoDB
Starts Dec 05, 2016
Rails with Active Record and Action Pack
Starts Dec 05, 2016
Major Depression in the Population: A Public Health Approach
Starts Dec 05, 2016
Introduction to Genomic Technologies
Starts Dec 19, 2016
Genomic Data Science Capstone
Starts Dec 05, 2016
R Programming Capstone
Starts Jan 18, 2016
Training and Learning Programs for Volunteer Community Health Workers
Starts Jan 16, 2017
Mathematical Biostatistics Boot Camp 2
Starts Dec 19, 2016
Reproducible Research
Starts Nov 28, 2016
Statistical Inference
Starts Nov 28, 2016
The Data Scientist's Toolbox
Starts Nov 28, 2016
Single Page Web Applications with AngularJS
Starts Dec 05, 2016
Exploratory Data Analysis
Starts Nov 28, 2016
Health for All Through Primary Health Care
Starts Dec 25, 2016
Developing Data Products
Starts Nov 28, 2016
Getting and Cleaning Data
Starts Nov 28, 2016
Ruby on Rails: An Introduction
Starts Dec 05, 2016
Introduction to the Biology of Cancer
Starts Dec 05, 2016
Data Science in Real Life
Starts Nov 28, 2016
Statistical Reasoning for Public Health 2: Regression Methods
Starts Nov 28, 2016
HTML, CSS, and Javascript for Web Developers
Starts Dec 05, 2016
Statistical Reasoning for Public Health 1: Estimation, Inference, & Interpretation
Starts Jan 16, 2017
Understanding Cancer Metastasis
Starts Dec 12, 2016
Systems Science and Obesity
Starts Nov 28, 2016
Data Science Capstone
Starts Feb 06, 2017
Command Line Tools for Genomic Data Science
Starts Dec 19, 2016
The R Programming Environment
Starts Nov 28, 2016
Psychological First Aid
Starts Dec 12, 2016
Mathematical Biostatistics Boot Camp 1
Starts Dec 19, 2016
R Programming
Starts Nov 28, 2016
Building R Packages
Starts Nov 28, 2016
Capstone: Photo Tourist Web Application
Starts Jan 17, 2017
Confronting Gender Based Violence: Global Lessons for Healthcare Workers
Starts Jan 09, 2017
Community Change in Public Health
Starts Nov 28, 2016
Managing Data Analysis
Starts Nov 28, 2016
Principles of fMRI 1
Starts Dec 19, 2016
Advanced R Programming
Starts Nov 28, 2016
Chemicals and Health
Starts Dec 26, 2016
Python for Genomic Data Science
Starts Dec 19, 2016
Advanced Linear Models for Data Science 2: Statistical Linear Models
Starts Dec 05, 2016
Algorithms for DNA Sequencing
Starts Dec 19, 2016
Practical Machine Learning
Starts Nov 28, 2016
Posted in Neurohacking
Comments Off on Johns Hopkins University | Coursera
Political Correctness Watch
Posted: at 12:30 pm
The subtext here is that male ballet dancers are frequently homosexual -- and a mother is entitled to discourage her son from such an unhealthy and unhappy lifestyle. Just for starters, there is a very high incidence of spousal abuse among homosexual couples
It may have once been traditional for boys to play football and girls to do ballet but nowadays many children feel free to take up activities regardless of gender.
However, one pushy parent took to Mumsnet to ask for advice on how to discourage her son from taking ballet lessons.
The woman said her son is an aspiring model and explained that she doesn't think the extra-curricular activity 'is going to fit in'.
In her post, Mumsnet user Ironriver said: 'How do I put my son off wanting to do ballet? I'm showing him how cool football, rugby and karate are but he's having none of it. 'He does modelling and I don't think ballet is going to fit in. Lots of the boys do football and other sports so I would like him to do that. Any ideas?'
Many commenters were outraged at the mother's behaviour and suggested she should let her son pursue his own interests.
Concerned commenter OohhThatsMe said: 'Your poor child, having such a sexist mother.'
Shocked reader coolaschmoola added: 'Stop being so bloody sexist and let him do the thing he is interested in and actually wants to do.
'It's 2016! Boys don't just play football. Just like not all girls do ballet.'
Other commenters were surprised that the woman had already decided her son would become a model.
Dodobookends said: 'He's nine and you have already chosen his career for him? Absurd.'
Some even suggested that taking up ballet would be beneficial to any future modelling aspirations.
OlennasWimple said: 'Ballet would give him excellent posture, teach him to move well and have a better idea how to use his body effectively. 'And less chance he'll break his nose or get a cauliflower ear.'
OohhThatsMe added: 'Actually ballet would REALLY help a modelling career. In what way would football do that?
'Look at the girls doing modelling - most will have studied ballet.'
SOURCE
Israeli Bill to Hush Mosque Call to Prayer Stokes Controversy Among Muslims--Others Too
Proposed legislation in Israel's parliament to prohibit the use of loudspeakers to broadcast the five-times-daily Muslim call to prayer is causing dismay among adherents of more than one religious group.
A preliminary vote on the so-called "muezzin bill" (a muezzin is the mosque official who recites the call to prayer) is scheduled for early next week.
It is not clear how the legislation, if adopted, would affect the numerous areas of Israel and the West Bank that fall under complex jurisdictional arrangements and are home to a mixture of religions.
In Jerusalem and elsewhere throughout the country, the three monotheistic faiths contribute to the cacophony of sounds at various times and on different days of the week.
The daily Muslim calls to prayer begin at about 4 a.m. and can be heard to differing degrees, depending on where you are. Where mosques are in close proximity to one another, there is a lot of overlap and duplication.
In Jerusalem, the Jewish shabbat alarm, which is essentially an air-raid siren, sounds every Friday at sundown to tell residents the sabbath has begun. Church bells ring on Sunday and important holidays.
Yaakov Litzman, Israel's ultra-Orthodox deputy health minister, initially blocked the bill over concerns that it could be extended to include the shabbat alarm. Last week, Litzman withdrew his opposition after a loophole was added for the alarm, Haaretz reported.
In Bethlehem, which depends heavily on Christian pilgrims for tourism at several points during the year, the town's main tourist center is home to a mosque with a loudspeaker set at a very high volume. The mosque towers over Manger Square and faces the Church of the Nativity, the traditional birthplace of Jesus.
The town's Christmas tree stands right in front of the church, and numerous Christmas holiday traditions take place in or near the square.
Local business owners, many of whom are Arab Christians, don't seem to mind the blend of sounds, though.
"I'm not against it, for sure," said Sami Khouri, general manager of the Visit Palestine visitor center and gift shop-cafe a few hundred feet from Manger Square. "Turning down the volume is somewhat okay, but preventing them from doing it isn't right."
Khouri, who also runs a tourism company and lives in Jerusalem, says it's just part of life in the region.
"Even where I live in Jerusalem, there are two mosques [making the call to prayer] nearby, five times a day. I just think this is co-existence," he said. "The mosque has been there for who knows how long, and we also ring the church bells. For tourists, it's part of the flavor. For me it's part of the sounds of Jerusalem, the ambience."
However, Khouri and others do suggest that if multiple mosques are situated in a given area, they could coordinate their broadcasts. That idea reflects popular sentiment but is not part of the bill before the Israeli parliament.
Some areas of the West Bank, technically under full Palestinian Authority control, have protested by staging multifaith demonstrations, with hundreds of Muslims, Christians, and Jewish Samaritans singing the call to prayer together.
Nablus is the largest Palestinian city in the West Bank and home to hundreds of mosques, which together produce a wall of uncoordinated sound.
The ultra-Orthodox Jewish community is almost evenly divided on the issue, according to a poll on one of the community's websites, Kikar HaShabat (Sabbath Corner). The poll found that 42 percent of respondents were against the bill.
There are also individuals working together behind the scenes, with unlikely, discreet alliances between some Arab and ultra-Orthodox lawmakers, according to a report in Al-Monitor.
Disputes over mosque calls to prayer are not uncommon, in both Western and Muslim countries. In 2004, some of the 23,000 residents of the Detroit suburb of Hamtramck, Michigan, were at odds over mosque loudspeakers, with some telling local media they were simply too loud.
In Dubai in 2011, the volume of a mosque was checked twice for decibel level after residents complained about crying children being woken up at 4 a.m.
An online Indonesian housing forum for expats recommends visiting a potential new home to make sure you can handle the disruption to the peace and quiet of your home during the call to prayer.
SOURCE
The left is creating a new kind of apartheid
The student union at King's College London will field a team in University Challenge that contains at least 50 per cent self-defining women, trans or non-binary students. The only bad thing Ken Livingstone could bring himself to say about the brutal dictator Fidel Castro was that initially he wasn't very good on lesbian and gay rights. The first page of Hillary Clinton's campaign website (still up) has links to African Americans for Hillary, Latinos for Hillary, Asian Americans and Pacific Islanders for Hillary, Women for Hillary, and Millennials for Hillary, but none to men for Hillary, let alone white people for Hillary.
Since when did the left insist on judging people by, to paraphrase Martin Luther King, the colour of their skin rather than the content of their character? The left once admirably championed the right of black people, women, and gays to be treated the same as white, straight men. With only slightly less justification, it then moved on to pushing affirmative action to redress past prejudice. Now it has gone further, insisting everybody is defined by his or her identity and that certain victim identities must be favoured.
Given the history of such stereotyping, it is baffling that politicians on the left cannot see where this leads. The prime exponents of identity politics in the past were the advocates of apartheid, of antisemitism, and of treating women as the legal chattels of men. "We are sleepwalking our way to segregation," Trevor Phillips says.
Identity politics is thus very old-fashioned. Christina Hoff Sommers, author of Who Stole Feminism?, says equality feminism (fair treatment, respect, and dignity) is being eclipsed in universities by a "Victorian fainting-couch feminism", which views women as fragile flowers who require safe spaces, trigger warnings, and special protection from "micro-invalidations". Sure enough, when she said this at Oberlin College, Ohio, 35 students and a therapy dog sought refuge in a safe room.
It is just bad biology to focus on race, sex, or sexual orientation as if they mattered most about people. We've known for decades (Marxist biologists such as Dick Lewontin used to insist on this point) that the genetic differences between two human beings of the same race are maybe ten times as great as the average genetic difference between two races. Race really is skin deep. Sex goes deeper, for sure, because of developmental pathways, but still the individual differences between men and men, or women and women, or gays and gays, are far more salient than any similarities.
The Republican sweep in the American election cannot be blamed solely on the culture wars, but they surely played a part. Take the bathroom wars that broke out during the early stages of the campaign. North Carolina's legislature heavy-handedly required citizens to use toilets that corresponded to their birth gender. The Obama administration heavy-handedly reacted by insisting that every school district in the country should do no such thing or lose its federal funding. This was a gift to conservatives: "Should a grown man pretending to be a woman be allowed to use . . . the same restroom used by your daughter? Your wife?" asked Senator Ted Cruz.
There is little doubt that to some extent white men played the identity card at the ballot box in reaction to the identity politics of the left. In a much-discussed essay for The New York Times after the election, Mark Lilla of Columbia University mused that Hillary Clinton's tendency to slip into the rhetoric of diversity, "calling out explicitly to African-American, Latino, LGBT and women voters at every stop", was a mistake: "If you are going to mention groups in America, you had better mention all of them."
He argues that the fixation on diversity in our schools and the press has produced "a generation of liberals and progressives narcissistically unaware of conditions outside their self-defined groups, and indifferent to the task of reaching out to Americans in every walk of life . . . By the time they reach college many assume that diversity discourse exhausts political discourse, and have shockingly little to say about such perennial questions as class, war, the economy and the common good. As many students woke up to discover on November 9, identity politics is expressive, not persuasive."
Last week, in an unbearably symbolic move, Hampshire College in Massachusetts removed the American flag (a symbol of unity if ever there was one) from campus in order to make students feel safer. The university president said the removal would "enable us to instead focus our efforts on racist, misogynistic, Islamophobic, anti-immigrant, antisemitic and anti-LGBTQ rhetoric and behaviours". There are such attitudes in America, for sure, but I am willing to bet they are not at their worst at Hampshire College, Massachusetts.
The one group that is increasingly excluded from campuses, with never a peep of complaint from activists, is conservatives. Data from the Higher Education Research Institute show the ratio of left-wing professors to right-wing professors went from 2:1 in 1995 to 6:1 today. The 1 is usually in something such as engineering and keeps his or her head down. Fashionable joke: what's the opposite of diversity? University.
This is not a smug, anti-American argument. British universities are hurtling down the same divisive path. Feminists including Germaine Greer, Julie Bindel and Kate Smurthwaite have been no-platformed at British universities, along with speakers for Ukip and Israel, but not Islamic State. Universities are becoming like Victorian aunts, brooking no criticism of religion, treating women as delicate flowers and turning up their noses at Jews.
The government is conducting an independent review into Britain's sharia courts, which effectively allow women to be treated differently if they are Muslim. The review is chaired by a Muslim and advised by two imams. And far too many government forms still insist on knowing whether the applicant is (I have taken the list from the Office for National Statistics guidance): Gypsy or Irish Traveller, White and Black Caribbean, White and Black African, White and Asian, Indian, Pakistani, Bangladeshi, Chinese, African, Caribbean, Arab, or any other ethnic group. So bleeding what?
The left has vacated the moral high ground on which it won so many fine battles to treat human beings equally. The right must occupy that ground and stand for universal human values and equal treatment for all.
SOURCE
Fake news and post-truth: the handmaidens of Western relativism
It isn't Macedonian teens who killed truth and objectivity
Internet-savvy 16-year-old boys in Macedonia are undermining Western journalism and democracy. Have you ever encountered a faker news story than that? This is the great irony of the fake-news panic that has swept the Western media in recent days, with observers now claiming that the promotion of made-up news on Facebook may have swung the election for Donald Trump and done GBH to the Western ideals of objectivity and reason: the panic is underpinned by illusions of its own; by a refusal to grapple with hard truths about the West's own jettisoning of those values; and by an urge to invent bogeymen that is every bit as dislocated from reality as are those myth-peddling kids in the East.
Still reeling from the failure of their idol Hillary Clinton to get to the White House, mainstream observers and politicians this week came up with another thing to blame: BS news. They claim the spread of stories like "The pope loves Trump" and "Hillary is a paedophile", many of which originate on phoney-news websites in Eastern Europe and get loads of likes among Westerners on Facebook, is a threat to truth and to the very practice of democracy. Angela Merkel bemoaned the "fake sites, bots, trolls" which manipulate public opinion and make politics and democracy harder. President Obama slammed this "active misinformation", arguing that if "everything seems to be the same and no distinctions are made", then "we lose so much of what we've gained in terms of democratic freedoms".
Liberal columnists, wounded that so much of the public ignored their overtures first on Brexit and then on Trump, claim good, decent, supposedly elitist journalism must now assert itself. "Our role in seeking the truth must be harnessed with steely determination," says one. CNN's Christiane Amanpour says the tsunami of fake-news sites is an affront to journalism and the thing that journalism helps to facilitate: democracy. We must now fight hard for the truth "in this world where the Oxford English Dictionary just announced that its word of 2016 [is] post-truth", she says. Numerous hacks have been despatched to Macedonia and Russia to confront the fresh-faced youths who run these fake-news sites for cash. "How teens in the Balkans are duping Trump supporters," says one headline. "Russian propaganda effort helped spread fake news during election," says another. The image we're left with is of dastardly Easterners suckering stupid Westerners and undermining the democratic tradition, and now pain-faced, well-minded columnists must stand up to this foreign threat to reason.
It's the fakest news story of the week. It might not be as utterly invented as the one about Hillary's people abusing children in a pizza restaurant in Washington, DC. But it involves a profounder avoidance of truth, a deeper unwillingness to face up to facts. In particular the fact that the rise of fake news, alternative news, and conspiracy theories speaks not to the wicked interventions of myth-spreaders from without, but to the corrosion of reason within, right here in the West. It speaks to the declining moral and cultural authority of our own political and media class. It is the Western world's own abandonment of objectivity, and loss of legitimacy in the eyes of its populace, that has nurtured something of a free-for-all on the facts and news front. Those Macedonian kids aren't denting democracy or damaging objectivity; they're merely milking a Western crisis of objectivity that began long before they were born.
The first striking thing about the fake-news panic is its naked paternalism. The suggestion is that voters, especially those of a low-information, redneck variety, were hoodwinked into voting Trump by outlandish stories about how evil Hillary is. Fake news "whacks" people who could not "recognise [or] fact-check", says Amanpour. "It's a post-truth era where you can play [people] like a fiddle," says a liberal writer in the US. A Guardian columnist says people easily believe lies that play to their prejudices and then pass them on thoughtlessly. We're given the impression that masses of people are incapable of deciphering fact from fiction. They cast their votes on the basis of a daft pizza-paedo link they saw on Facebook. With a loud sneer, observers write off the general public's capacity for reason and willingness to engage seriously with democratic decisions. Ironically, this demeaning of the demos, this calling into question of the very idea that underpins modern politics (that the public is reasoned and must be allowed to steer the fate of their nation) does far greater damage to the value and standing of democracy than any spotty Macedonian with a laptop could ever do.
Then came the paternalistic solutions. We need new gatekeepers, columnists claim: professionals who have the resources and brains to work out what's true and what's a lie and ensure that people see more of the former. Obama and others suggest Facebook must get better at curating news, sorting truth from falsehood on behalf of its suggestible users. The suggestion is that the internet, having thrown open the world of reportage and commentary to everyone, having enabled anyone with a computer or phone to say their piece, has disoriented truth and democracy and now must be tamed, or at least better managed.
This echoes the elite fears that greeted the invention of the printing press in the 15th century. Then, the religious authorities (the gatekeepers of their day) worried that all sorts of heresy might now find its way into the public's minds and hearts, unfiltered by their wise, godly counsel. Today's aspiring gatekeepers panic that fake news will get into and warp the minds of the little people in this era when knowledge filtering has been stripped back even further, so that increasingly the citizen stands alone before the claims and counter-claims of those who publish. And apparently this fake news often contains heresies of its own. In his interview with the New Yorker, Obama strikingly bemoaned the fake news of climate-change scepticism, where "an explanation of climate change from a Nobel Prize-winning physicist looks exactly the same on your Facebook page as the denial of climate change by somebody on the Koch brothers' payroll". This cuts to the 15th-century-echoing fear that motors the panic over fake news: the belief that it will allow not only outright lies, but new heresies, new blasphemies, different ways of thinking, to make an appeal to people's beliefs and convictions. The call to filter social media is a paternalistic call to protect the public from bad or mad or dangerous thoughts, in a similar way that early clampdowns on the printing press were designed to keep evil from the swarm.
What this censorious, anti-demos view overlooks is the positive side to today's unprecedented throwing-open of debate and news and politics: the fact that it implicitly calls on the citizen to use his own mental and moral muscles, to confront the numerous different versions of the world offered to him and decide which one sounds most right. Surely the internet's downside of fake news is more than outweighed by its invitation to us to negotiate the rapids of public debate for ourselves and make up our own minds? "Ideally, in a democracy, everybody would agree that climate change is a consequence of man-made behaviour, because that's what 99 per cent of scientists tell us," said Obama in his handwringing over fake news. No. The ideal thing in a democracy isn't that we believe something because scientists, or politicians, or priests, have told us it's true; it's that we believe something because we have considered it, thought about it, weighed it up against other things, and then deployed our own judgement. Believing something because others tell you it's true isn't democracy; it's oligarchy.
Even to the extent that fake news is a bad thing (and of course it can be), its rise is not a result of wicked foreign poking into Western politics and debate. Rather, it speaks to the hollowing-out of the whole idea of truth in the West, to the march of the relativistic notion that objectivity is not only difficult but undesirable. The image of the old gatekeepers of knowledge, or just news, being elbowed aside either by new technologies or by interfering Easterners is wrong; it is more accurate to say that these gatekeepers gave up, and abandoned their posts, on the basis that it is arrogant to assume that any one way of seeing or reporting the world is better than another.
For the past two decades, Western news reporting has openly called into question its own definitiveness. It has thrown open news items to ceaseless commenting below the line, on the basis that news coverage is "a partnership", as the BBC's Richard Sandbrook said in 2005. It celebrated "citizen journalism" as a realer, less top-down form of newsgathering. And it has jettisoned the very thing that distinguished it from other, more opinionated views on world events: its objectivity. From the rise of the "journalism of attachment" in the 1990s, in which journalists eschewed the apparently cold, forensic habit of objectivity and took sides with the most victimised groups in certain conflicts and situations, to the media's embrace of data journalism in the 2000s, where churning through thousands of leaked documents took the place of discovering stories and faithfully reporting them, Western journalism has redefined its mission from one of objectively discovering truth to simply offering its increasingly technical or emotional take on what might, or might not, have happened.
Journalists have explicitly disavowed objectivity, and with it their gatekeeping role. "It is time to toss out objectivity as a goal," said Harvard journalism expert Dan Gillmor in 2005. By 2010, even Time magazine, self-styled epitome of the Western journalistic style, was celebrating "The End of Objectivity". The new-media openness "[has] upended the old media's poker-faced stoicism and it's about time", it said. The Western media started to replace the ideal of objectivity with values such as fairness, transparency, and balance. And as one European observer pointed out, these are very different to objectivity: where objectivity points to the active quest for truth, these newer, more technical values reduce the news media to just another voice among the many voices in a pluralistic world. When someone like Amanpour says Western journalism and democracy are in mortal peril, largely thanks to foreign powers like Russia paying to churn out false news, she overlooks journalism's weakening of its own ideals and authority, including by her and others in the 1990s when they ditched objectivity in preference for taking sides in conflicts like the one in Bosnia. She conspiratorially displaces on to Russia a crisis of objectivity that has its origins in the newsrooms and academies and political chambers of the West.
The abandonment of objectivity in journalism did not happen in a vacuum. It sprang from, and in turn intensified, a rejection of reason in the West, a disavowal of the idea of truth, and its replacement either by the far more technical ambition of being "evidence-based" or by highly emotional responses to world events. Indeed, the greatest irony in the fake-news panic, and in the whole post-Brexit, post-Trump talk of a new post-truth era, is that it was the very guardians of Western culture and knowledge, the very establishment now horrified by how the little people think and vote, who made us post-truth; who oversaw the turn against Enlightenment in the academy, the calling into question of "male science", the throttling of the idea of any one, clear morality to which people might subscribe, and the rubbishing of the entire project of objectivity, even of news as we understood it. When Obama says we live in an era where "everything seems to be the same and no distinctions are made", he isn't wrong. Only that refusal to distinguish, to judge, to elevate truer things over questionable things, is not down to Facebook or Macedonians or allegedly dumb Trump voters; it is an accomplishment of the very post-Enlightenment, self-doubting, technocratic elites Obama is part of.
And what happens when you give up your conviction that truth can be discovered, and instead promote the idea that all ways of looking at the world, and interpreting the world, and feeling the world, have validity? You disorientate public discussion. You slay your own cultural authority. You create a situation where people doubt you, often with good reason, and go looking for other sources of information. You create the space for other claims of truth, some of them good and exciting, some of them mad and fake. Don't blame Russia, or us, for the crisis of journalism and democracy, or for our so-called post-truth times. You did this. You, the gatekeepers. We'll be our own gatekeepers now, thanks.
SOURCE
*************************
Political correctness is most pervasive in universities and colleges but I rarely report the incidents concerned here as I have a separate blog for educational matters.
American "liberals" often deny being Leftists and say that they are very different from the Communist rulers of other countries. The only real difference, however, is how much power they have. In America, their power is limited by democracy. To see what they WOULD be like with more power, look at where they ARE already very powerful: in America's educational system -- particularly in the universities and colleges. They show there the same respect for free-speech and political diversity that Stalin did: None. So look to the colleges to see what the whole country would be like if "liberals" had their way. It would be a dictatorship.
For more postings from me, see TONGUE-TIED, GREENIE WATCH, EDUCATION WATCH INTERNATIONAL, FOOD & HEALTH SKEPTIC, AUSTRALIAN POLITICS and DISSECTING LEFTISM. My Home Pages are here or here or here. Email me (John Ray) here.
***************************
Posted in Political Correctness
Comments Off on Political Correctness Watch
Social Origins of Eugenics
Posted: at 12:30 pm
Scientific Origins of Eugenics
Elof Carlson, State University of New York at Stony Brook
The eugenics movement arose in the 20th century as two wings of a common philosophy of human worth. Francis Galton, who coined the term eugenics in 1883, perceived it as a moral philosophy to improve humanity by encouraging the ablest and healthiest people to have more children. The Galtonian ideal of eugenics is usually termed positive eugenics. Negative eugenics, on the other hand, advocated culling the least able from the breeding population to preserve humanity's fitness. The eugenics movements in the United States, Germany, and Scandinavia favored the negative approach.
The notion of segregating people considered unfit to reproduce dates back to antiquity. For example, the Old Testament describes the Amalekites, a supposedly depraved group that God condemned to death. Concerns about environmental influences that might damage heredity, leading to ill health, early death, insanity, and defective offspring, were formalized in the early 1700s as degeneracy theory. Degeneracy theory maintained a strong scientific following until late in the 19th century. Masturbation, then called onanism, was presented in medical schools as the first biological theory of the cause of degeneracy. Fear of degeneracy through masturbation led Harry Clay Sharp, a prison physician in Jeffersonville, Indiana, to carry out vasectomies on prisoners beginning in 1899. The advocacy of Sharp and his medical colleagues culminated in an Indiana law mandating compulsory sterilization of "degenerates." Enacted in 1907, this was the first eugenic sterilization law in the United States.
By the mid-19th century most scientists believed bad environments caused degenerate heredity. Benedict Morel's work extended the causes of degeneracy to some legitimate agents including poisoning by mercury, ergot, and other toxic substances in the environment. The sociologist Richard Dugdale believed that good environments could transform degenerates into worthy citizens within three generations. This position was a backdrop to his very influential study on The Jukes (1877), a degenerate family of paupers and petty criminals in Ulster County, New York. The inheritance of acquired (environmental) characters was challenged in the 1880s by August Weismann, whose theory of the germ plasm convinced most scientists that changes in body tissue (the soma) had little or no effect on reproductive tissue (the germ plasm). At the beginning of the 20th century, Weismann's views were absorbed by degeneracy theorists who embraced negative eugenics as their favored model.
Adherents of the new field of genetics were ambivalent about eugenics. Most basic scientists, including William Bateson in Great Britain and Thomas Hunt Morgan in the United States, shunned eugenics as vulgar and an unproductive field for research. However, Bateson's and Morgan's contributions to basic genetics were quickly absorbed by eugenicists, who took interest in Mendelian analysis of pedigrees of humans, plants, and animals. Many eugenicists had some type of agricultural background. Charles Davenport and Harry Laughlin, who together ran the Eugenics Record Office, were introduced through their shared interest in chicken breeding. Both were also active in the Eugenics Section of the American Breeder's Association (ABA). Davenport's book, Eugenics: The Science of Human Improvement through Better Breeding, had a distinct agricultural flavor, and his affiliation with the ABA was included under his name on the title page. Agricultural genetics also provided the favored model for negative eugenics: human populations, like agricultural breeds and varieties, had to be culled of their least productive members, with only the healthiest specimens used for breeding.
Evolutionary models of natural selection and dysgenic (bad) hereditary practices in society also contributed to eugenic theory. For example, there was fear that highly intelligent people would have smaller families (about two children), while the allegedly degenerate elements of society were having larger families of four to eight children. Public welfare might also play a role in allowing less fit people to survive and reproduce, further upsetting the natural selection of fitter people.
Medicine also put its stamp on eugenics. Physicians like Anton Ochsner and Harry Sharp were convinced that social failure was a medical problem. Italian criminologist and physician Cesare Lombroso popularized the image of an innate criminal type that was thought to be a reversion or atavism of a bestial ancestor of humanity. When medical means failed to help the psychotic, the retarded, the pauper, and the vagrant, eugenicists shifted to preventive medicine. The German physician-legislator Rudolf Virchow advocated programs to deal with disease prevention on a large scale. Virchow's public health movement was fused with eugenics to form the racial hygiene movement in Germany and came to America through physicians he trained.
Eugenicists argued that "defectives" should be prevented from breeding, through custody in asylums or compulsory sterilization. Most doctors probably felt that sterilization was a more humane way of dealing with people who could not help themselves. Vasectomy and tubal ligation were favored methods, because they did not alter the physiological and psychological contribution of the reproductive organs. Sterilization allowed the convicted criminal or mental patient to participate in society, rather than being institutionalized at public expense. Sterilization was not viewed as a punishment because these doctors believed (erroneously) that the social failure of "unfit" people was due to an irreversibly degenerate germ plasm.
Mind uploading – Transhumanism Wiki – Wikia
In transhumanism and science fiction, mind uploading (also occasionally referred to by other terms such as mind transfer, whole brain emulation, or whole body emulation) refers to the hypothetical transfer of a human mind to a substrate different from a biological brain, such as a detailed computer simulation of an individual human brain.
The human brain contains a little more than 100 billion nerve cells called neurons, each individually linked to other neurons by way of connectors called axons and dendrites. Signals at the junctures (synapses) of these connections are transmitted by the release and detection of chemicals known as neurotransmitters. The brain contains cell types other than neurons (such as glial cells), some of which are structurally similar to neurons, but the information processing of the brain is thought to be conducted by the network of neurons.
Current biomedical and neuropsychological thinking is that the human mind is a product of the information processing of this neural network. To use an analogy from computer science, if the neural network of the brain can be thought of as hardware, then the human mind is the software running on it.
Mind uploading, then, is the act of copying or transferring this "software" from the hardware of the human brain to another processing environment, typically an artificially created one.
The concept of mind uploading is thus strongly mechanistic, relying on several assumptions about the nature of human consciousness and the philosophy of artificial intelligence. It assumes that strong AI machine intelligence is not only possible but would be indistinguishable from human intelligence, and it denies the vitalist view of human life and consciousness.
Mind uploading is completely speculative at this point in time; no technology exists which can accomplish this.
The relationship between the human mind and the neural circuitry of the brain is currently poorly understood. Thus, most theoretical approaches to mind uploading are based on the idea of recreating or simulating the underlying neural network. This approach would theoretically eliminate the need to understand how such a system works if the component neurons and their connections can be simulated with enough accuracy.
It is unknown how precise the simulation of such a neural network would have to be to produce a functional simulation of the brain. It is possible, however, that simulating a human brain at the cellular level would be much more difficult than creating a human-level artificial intelligence that recreates the functions of the human mind directly, rather than trying to simulate the underlying biological systems.[citation needed]
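The emulation idea above can be illustrated with a toy sketch: a network whose recorded connection weights are simply run, with no understanding of what the network computes. Everything below (the weights, the sigmoid activation, the three-unit shape) is an invented illustration, not a claim about real neural data.

```python
import math

# Hypothetical "captured" connection strengths between units.
# The point: we can execute them without knowing their purpose.
captured_weights = {
    ("in0", "h0"): 0.8, ("in1", "h0"): -0.4,
    ("in0", "h1"): -0.2, ("in1", "h1"): 0.9,
    ("h0", "out"): 1.1, ("h1", "out"): 0.7,
}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def run(inputs):
    # Each simulated unit sums its weighted inputs and applies an
    # activation, purely from the recorded structure.
    h0 = sigmoid(inputs["in0"] * captured_weights[("in0", "h0")]
                 + inputs["in1"] * captured_weights[("in1", "h0")])
    h1 = sigmoid(inputs["in0"] * captured_weights[("in0", "h1")]
                 + inputs["in1"] * captured_weights[("in1", "h1")])
    return sigmoid(h0 * captured_weights[("h0", "out")]
                   + h1 * captured_weights[("h1", "out")])

print(run({"in0": 1.0, "in1": 0.0}))
```

The simulator needs only the nature of the units and their connections; the behavior of the whole emerges from running them.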
Thinkers with a strongly mechanistic view of human intelligence (such as Marvin Minsky) or a strongly positive view of robot-human social integration (such as Hans Moravec and Ray Kurzweil) have openly speculated about the possibility and desirability of this.
In the case where the mind is transferred into a computer, the subject would become a form of artificial intelligence, sometimes called an infomorph or "nomorph." In the case where it is transferred into an artificial body to which its consciousness is confined, it would also become a robot. In either case it might claim ordinary human rights, particularly if the consciousness within felt (or did a good job of simulating) that it was the donor.
Uploading consciousness into bodies created by robotic means is a goal of some in the artificial intelligence community. In the uploading scenario, the physical human brain does not move from its original body into a new robotic shell; rather, the consciousness is assumed to be recorded and/or transferred to a new robotic brain, which generates responses indistinguishable from the original organic brain.
The idea of uploading human consciousness in this manner raises many philosophical questions which people may find interesting or disturbing, such as matters of individuality and the soul. Vitalists would say that uploading was a priori impossible. Many people also wonder whether, if they were uploaded, it would be their sentience uploaded, or simply a copy.
Even if uploading is theoretically possible, there is currently no technology capable of recording or describing mind states in the way imagined, and no one knows how much computational power or storage would be needed to simulate the activity of the mind inside a computer. On the other hand, advocates of uploading have made various estimates of the amount of computing power that would be needed to simulate a human brain, and on this basis a number of them have estimated that uploading may become possible within decades if trends such as Moore's Law continue.[citation needed]
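A back-of-envelope version of such an estimate can be sketched as follows. Every figure in it (the synapse count per neuron, the signalling rate, the assumed present-day throughput, the doubling period) is an illustrative assumption, not a measurement.

```python
import math

# All numbers below are illustrative assumptions for the sketch.
neurons = 1e11              # ~100 billion neurons, as stated above
synapses_per_neuron = 1e4   # assumed order of magnitude
updates_per_second = 100    # assumed signalling events per synapse per second

ops_needed = neurons * synapses_per_neuron * updates_per_second  # 1e17 ops/s

available = 1e15      # assumed present-day supercomputer throughput (ops/s)
doubling_years = 2.0  # a Moore's-law-style doubling period

# Years until capacity catches up, if the doubling trend continues.
years = doubling_years * math.log2(ops_needed / available)
print(f"needed ~{ops_needed:.0e} ops/s; reached in roughly {years:.0f} years")
```

Changing any assumed figure by an order of magnitude shifts the answer by only a few doubling periods, which is why such projections land "within decades" across a wide range of inputs.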
If it is possible for human minds to be modeled and treated as software objects which can be instanced multiple times, in multiple processing environments, many potentially desirable possibilities open up for the individual.
If the mental processes of the human mind can be disassociated from its original biological body, it is no longer tied to the limits and lifespan of that body. In theory, a mind could be voluntarily copied or transferred from body to body indefinitely and therefore become immortal, or at least exercise conscious control of its lifespan.
Alternatively, if cybernetic implants could be used to monitor and record the structure of the human mind in real time then, should the body of the individual be killed, such implants could be used to later instance another working copy of that mind. It is also possible that periodic backups of the mind could be taken and stored external to the body and a copy of the mind instanced from this backup, should the body (and possibly the implants) be lost or damaged beyond recovery. In the latter case, any changes and experiences since the time of the last backup would be lost.
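The backup semantics just described (anything experienced after the last snapshot is lost on restore) can be sketched in a few lines; the class and its methods are purely hypothetical.

```python
import copy

class MindBackup:
    """Toy model of periodic mind backups: restoring instances a copy
    from the last snapshot, losing everything experienced since."""

    def __init__(self):
        self.memories = []
        self._snapshot = []

    def experience(self, event):
        self.memories.append(event)

    def backup(self):
        # Store an external snapshot of the current state.
        self._snapshot = copy.deepcopy(self.memories)

    def restore(self):
        # Instance a new working copy from the stored snapshot.
        restored = MindBackup()
        restored.memories = copy.deepcopy(self._snapshot)
        return restored

mind = MindBackup()
mind.experience("learned to sail")
mind.backup()
mind.experience("crossed the Atlantic")  # never backed up
revived = mind.restore()
print(revived.memories)  # the Atlantic crossing is gone
```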
Such possibilities have been explored extensively in fiction: This Number Speaks, Nancy Farmer's The House of the Scorpion, Newton's Gate, John Varley's Eight Worlds series, Greg Egan's Permutation City, Diaspora, Schild's Ladder and Incandescence, the Revelation Space series, Peter Hamilton's Pandora's Star duology, Bart Kosko's Fuzzy Time, the Armitage III series, the Takeshi Kovacs universe, Iain M. Banks's Culture novels, Cory Doctorow's Down and Out in the Magic Kingdom, and the works of Charles Stross. Television science fiction has explored it as well, in shows such as Battlestar Galactica and Stargate SG-1.
Another concept explored in science fiction is the idea of more than one running "copy" of a human mind existing at once. Such copies could be either full copies or limited subsets of the complete mentality designed for particular limited functions. Such copies would allow an "individual" to experience many things at once, and later to integrate the experiences of all copies into a central mentality, effectively allowing a single sentient being to "be many places at once" and "do many things at once".
The implications of such entities have been explored in science fiction. In his book Eon, Greg Bear uses the terms "partials" and "ghosts", while Charles Stross's novels Accelerando and Glasshouse deal with the concepts of "forked instances" of conscious beings as well as "backups".
In Charles Sheffield's Tomorrow and Tomorrow, the protagonist's consciousness is duplicated thousands of times electronically and sent out on probe ships and uploaded into bodies adapted to native environments of different planets. The copies are eventually reintegrated back into the "master" copy of the consciousness in order to consolidate their findings.
Such partial and complete copies of a sentient being again raise issues of identity and personhood: is a partial copy of a sentient being itself sentient? What rights might such a being have? Since copies of a personality have different experiences, do they not slowly diverge and become different entities? At what point do they become different entities?
If the body and the mind of the individual can be disassociated, then the individual is theoretically free to choose their own incarnation. They could reside within a completely human body, within a modified physical form, or within simulated realities. Individuals might change their incarnations many times during their existence, depending on their needs and desires.
Choices of the individuals in this matter could be restricted by the society they exist within, however. In the novel Eon by Greg Bear, individuals could incarnate physically (within "natural" biological humans, or within modified bodies) a limited number of times before being legally forced to reside within the "city memory" as infomorphic "ghosts".
Once an individual had moved into a virtual simulation, the only input needed would be energy, supplied by the large computing devices hosting those minds. Food, drink, movement, travel, or anything else imaginable would simply be computations requiring that energy.
Some advocates speculate that scientists and thinkers would move to this virtual environment at the end of their biological lives, their mental capacity expanded by the speed and storage of quantum computers. In a virtual environment, an idea and its final product are not distinct, so innovations could be sent back to the physical world at an increasing rate, accelerating technological development.
Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands of such a venture are likely to be immense.
Henry Markram, lead researcher of the "Blue Brain Project", has stated that "it is not [their] goal to build an intelligent neural network", based solely on the computational demands such a project would have[1].
Advocates of mind uploading point to Moore's law to support the notion that the necessary computing power may become available within a few decades, though it would probably require advances beyond the integrated circuit technology which has dominated since the 1970s. Several new technologies have been proposed, and prototypes of some have been demonstrated: the optical neural network based on a silicon-photonic chip (harnessing special physical properties of indium phosphide), which Intel first demonstrated on September 18, 2006;[3] three-dimensional integrated circuits based on carbon nanotubes (individual logic gates built from carbon nanotubes have already been demonstrated[4]); and perhaps the quantum computer, being worked on internationally, most famously by computer scientists and physicists at the IBM Almaden Research Center. Quantum computers promise to be useful in simulating the behavior of quantum systems; that ability would enable protein structure prediction, which could be critical to correct emulation of intracellular neural processes.
Present methods require massive computational power (as the Blue Brain Project does with IBM's Blue Gene supercomputer) to use an essentially classical computing architecture for serial deduction of the quantum mechanical processes involved in ab initio protein structure prediction. Should the quantum computer become a reality, its capacity for exactly such rapid calculation of quantum mechanical physics may help the effort by reducing the computational power required per unit of physical size and energy; it is these demands, Markram warns, that make simulating an entire brain, let alone emulating one at both cellular and molecular levels, so difficult, and in his view unattractive. Reiteration may also be useful for distributed simulation of a common, repeated function (e.g., proteins).
Ultimately, some project[citation needed] that nano-computing will supply, with capacity to spare, the computations per second estimated to be necessary. If Kurzweil's Law of Accelerating Returns (a variation on Moore's Law) proves true, the rate of technological development should accelerate exponentially toward the technological singularity, heralded by the advent of viable though relatively primitive mind uploading and/or "strong" (human-level) AI technologies; Kurzweil predicts that the Singularity may occur around the year 2045.[5]
The structure of a neural network is also different from classical computing designs. Memory in a classical computer is generally stored in a two-state design, or bit (although one of the two components is modified in dynamic RAM, and some forms of flash memory can use more than two states under some circumstances). Gates inside central processing units typically use this two-state, digital design as well. In some ways a neural network or brain could be thought of as a memory unit with an extremely vast number of states, corresponding to the total number of neurons. Beyond that, whether a neuron fires an action potential, based on the summation of the inputs from different dendrites, may be more analog in nature than anything that happens in a computer. One great advantage a modern computer has over a biological brain is that each electronic operation is many orders of magnitude faster than the firing and transmission of individual nerve impulses. A brain, however, uses far more parallel processing than most classical computing designs, so its slower neurons compensate by operating at the same time.
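The analog summation and all-or-nothing firing described above can be sketched with a minimal leaky integrate-and-fire neuron; the threshold and leak values are arbitrary illustrations, not physiological constants.

```python
# Minimal leaky integrate-and-fire neuron: dendritic inputs accumulate
# as a continuous (analog) membrane potential that leaks over time, and
# a discrete all-or-nothing spike is emitted only at threshold.
def simulate(input_currents, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current  # analog summation with leak
        if potential >= threshold:
            spikes.append(1)    # action potential fires
            potential = 0.0     # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate([0.3, 0.4, 0.5, 0.1, 0.9]))
```

The internal state is a continuous value while the output is binary, which is roughly the hybrid character the paragraph above contrasts with a purely two-state memory cell.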
There are many ethical issues concerning mind uploading. Viable mind uploading technology might challenge the ideas of human immortality, property rights, capitalism, human intelligence, an afterlife, and the Abrahamic view of man as created in God's image. These challenges often cannot be distinguished from those raised by all technologies that extend human technological control over human bodies, e.g. organ transplant. Perhaps the best way to explore such issues is to discover principles applicable to current bioethics problems, and question what would be permissible if they were applied consistently to a future technology. This points back to the role of science fiction in exploring such problems, as powerfully demonstrated in the 20th century by such works as Brave New World and Nineteen Eighty-Four, each of which frame current ethical problems in a future environment where those have come to dominate the society.
Another issue with mind uploading is whether an uploaded mind is really the "same" sentience, or simply an exact copy with the same memories and personality. Although this difference would be undetectable to an external observer (and the upload itself would probably be unable to tell), it could mean that uploading a mind would actually kill it and replace it with a clone. Some people would be unwilling to upload themselves for this reason. If their sentience is deactivated even for a nanosecond, they assert, it is permanently wiped out. Some more gradual methods may avoid this problem by keeping the uploaded sentience functioning throughout the procedure.
True mind uploading remains speculative. The technology to perform such a feat is not currently available; however, a number of possible mechanisms and research approaches have been proposed for developing mind uploading technology.
Since the function of the human mind, and how it might arise from the working of the brain's neural network, are poorly understood, many theoretical approaches to mind uploading rely on the idea of emulation. Rather than having to understand the functioning of the human mind, the structure of the underlying neural network is captured and simulated with a computer system. The human mind would then, theoretically, be generated by the simulated neural network just as it is generated by the biological one.
These approaches require only that we understand the nature of neurons and how their connections function, that we can simulate them well enough, that we have the computational power to run such large simulations, and that the state of the brain's neural network can be captured with enough fidelity to create an accurate simulation.
A possible method for mind uploading is serial sectioning, in which the brain tissue and perhaps other parts of the nervous system are frozen and then scanned and analyzed layer by layer, thus capturing the structure of the neurons and their interconnections[6]. The exposed surface of frozen nerve tissue would be scanned (possibly with some variant of an electron microscope) and recorded, and then the surface layer of tissue removed (possibly with a conventional cryo-ultramicrotome if scanning along an axis, or possibly through laser ablation if scans are done radially "from the outside inwards"). While this would be a very slow and labor intensive process, research is currently underway to automate the collection and microscopy of serial sections[7]. The scans would then be analyzed, and a model of the neural net recreated in the system that the mind was being uploaded into.
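The scan-record-remove loop of serial sectioning can be sketched as a pipeline with the scanner stubbed out. The data returned by `scan_surface` is invented for the illustration; a real reconstruction would operate on microscope imagery, not ready-made neuron identifiers.

```python
# Hypothetical serial-sectioning pipeline: scan a layer, record what is
# seen, remove the layer, repeat, then assemble a connectivity model.
def scan_surface(layer_index):
    # Stand-in for an electron-microscope scan of the exposed surface.
    # Returns (neuron_ids_seen, connections_seen) for this layer.
    fake_data = {
        0: ({"n1", "n2"}, [("n1", "n2")]),
        1: ({"n2", "n3"}, [("n2", "n3"), ("n3", "n1")]),
    }
    return fake_data[layer_index]

def reconstruct(num_layers):
    neurons, edges = set(), []
    for layer in range(num_layers):
        seen_neurons, seen_edges = scan_surface(layer)  # scan and record
        neurons |= seen_neurons
        edges += seen_edges
        # Here the physical layer would be removed (ultramicrotome or
        # laser ablation) before the next iteration.
    return neurons, edges

print(reconstruct(2))
```

The output of such a pipeline is the neural-net model that would then be instantiated in the target system.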
There are uncertainties with this approach using current microscopy techniques. If it is possible to replicate neuron function from its visible structure alone, then the resolution afforded by a scanning electron microscope would suffice for such a technique[7]. However, as the function of brain tissue is partially determined by molecular events (particularly at synapses, but also at other places on the neuron's cell membrane), this may not suffice for capturing and simulating neuron functions. It may be possible to extend the techniques of serial sectioning and to capture the internal molecular makeup of neurons, through the use of sophisticated immunohistochemistry staining methods which could then be read via confocal laser scanning microscopy[citation needed].
A more advanced hypothetical technique that would require nanotechnology might involve infiltrating the intact brain with a network of nanoscale machines to "read" the structure and activity of the brain in situ, much like the electrode meshes used in current brain-computer interface research, but on a much finer and more sophisticated scale. The data collected from these probes could then be used to build up a simulation of the neural network they were probing, and even check the behavior of the model against the behavior of the biological system in real time.
In his 1988 book, Mind Children, Hans Moravec describes a variation of this process. In it, nanomachines are placed in the synapses of the outer layer of cells in the brain of a conscious living subject. The system then models the outer layer of cells and recreates the neural net processes in whatever simulation space is being used to house the uploaded consciousness of the subject. The nanomachines can then block the natural signals sent by the biological neurons, but send and receive signals to and from the simulated versions of the neurons. Which system is doing the processing (biological or simulated) can be toggled back and forth, both automatically by the scanning system and manually by the subject, until it has been established that the simulation's behavior matches that of the biological neurons and that the subjective mental experience of the subject is unchanged. Once this is the case, the outer layer of neurons can be removed and their function turned over solely to the simulated neurons. This process is then repeated, layer by layer, until the entire biological brain of the subject has been scanned, modeled, checked, and disassembled. When the process is completed, the nanomachines can be removed from the spinal column of the subject, and the mind of the subject exists solely within the simulated neural network.
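Moravec's model-validate-replace loop can be caricatured in a few lines. The "layers" here are trivial stand-in functions, and the validation step stands in for toggling between biological and simulated processing until their behavior matches.

```python
# Toy version of layer-by-layer transfer: model each layer, validate it
# against the biological original on probe inputs, then commit.
biological_layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

def model_layer(bio_layer):
    # Stand-in for nanomachines observing and modelling one layer.
    return lambda x, f=bio_layer: f(x)

def transfer(layers, probe_inputs):
    simulated = []
    for bio in layers:
        sim = model_layer(bio)
        # Validate: the simulated layer must match the biological one
        # on every probe before the biological layer is retired.
        assert all(sim(x) == bio(x) for x in probe_inputs)
        simulated.append(sim)
    return simulated

uploaded = transfer(biological_layers, probe_inputs=range(10))
result = 5
for layer in uploaded:
    result = layer(result)
print(result)
```

The key property the sketch preserves is that at every step the composite system's behavior is checked against the original before anything biological is removed.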
Alternatively, such a process might allow for the replacement of living neurons with artificial neurons one by one while the subject is still conscious, providing a smooth transition from an organic to synthetic brain - potentially significant for those who worry about the loss of personal continuity that other uploading processes may entail. This method has been likened to upgrading the whole internet by replacing, one by one, each computer connected to it with similar computers using newer hardware.
While many people are more comfortable with the idea of the gradual replacement of their natural selves than they are with some of the more radical and discontinuous mental transfer, it still raises questions of identity. Is the individual preserved in this process, and if not, at what point does the individual cease to exist? If the original entity ceases to exist, what is the nature and identity of the individual created within the simulated neural network, or can any individual be said to exist there at all? This gradual replacement leads to a much more complicated and sophisticated version of the Ship of Theseus paradox.
It may also be possible to use advanced neuroimaging technology (such as Magnetoencephalography) to build a detailed three-dimensional model of the brain using non-invasive and non-destructive methods. However, current imaging technology lacks the resolution needed to gather the information needed for such a scan.
Such a process would leave the original entity intact, but the existence, nature, and identity of the resulting being in the simulated network are still open philosophical questions.
Another recently conceived possibility[citation needed] is the use of genetically engineered viruses to attach to synaptic junctions and release energy-emitting molecular compounds that could be detected externally and used to generate a functional model of the synapses in question and, given enough time, of the whole brain and nervous system.
An alternate set of possible theoretical approaches to mind uploading would require that we first understand the functions of the human mind sufficiently well to create abstract models of parts, or the totality, of human mental processes. It would require that strong AI be not only a possibility, but that the techniques used to create a strong AI system could also be used to recreate a human type mentality.
Such approaches might be more desirable if the abstract models required less computational power to execute than the neural network simulation of the emulation techniques described above.
Another theoretically possible method of mind uploading from an organic to an inorganic medium, related to the idea described above of replacing neurons one at a time while consciousness remains intact, would be a much less precise but much more feasible (in terms of technology currently known to be physically possible) process of "cyborging". Once a given person's brain is mapped, it is replaced piece by piece with computer devices that perform exactly the same function as the regions preceding them, after which the patient is allowed to regain consciousness and confirm that there has been no radical upheaval within his own subjective experience of reality. The patient's brain is then immediately re-mapped, another piece is replaced, and so on, until the patient exists on a purely hardware medium and can safely be extricated from the remaining organic body.
However, critics contend[citation needed] that, given the significant synergy throughout the neural plexus, altering any given cell that functionally corresponds with neighboring cells may change its electrical and chemical properties in ways that would not have arisen without interference, so that the true individual's signature is lost. Reversing that disturbance might be possible through damage anticipation and correction (inferring the original from the particular damage rendered to it, in reverse chronological order), although this would be easier in a stable system, such as a brain held in cryosleep (which would introduce its own damage and alterations).[citation needed]
It has also been suggested (for example, in Greg Egan's "jewelhead" stories[8]) that a detailed examination of the brain itself may not be required, that the brain could be treated as a black box instead and effectively duplicated "for all practical purposes" by merely duplicating how it responds to specific external stimuli. This leads into even deeper philosophical questions of what the "self" is.
On June 6, 2005 IBM and the Swiss Federal Institute of Technology in Lausanne announced the launch of a project to build a complete simulation of the human brain, entitled the "Blue Brain Project".[9] The project will use a supercomputer based on IBM's Blue Gene design to map the entire electrical circuitry of the brain. The project seeks to research aspects of human cognition, and various psychiatric disorders caused by malfunctioning neurons, such as autism. Initial efforts are to focus on experimentally accurate, programmed characterization of a single neocortical column in the brain of a rat, as it is very similar to that of a human but at a smaller scale, then to expand to an entire neocortex (the alleged seat of higher intelligence) and eventually the human brain as a whole.
Notably, the Blue Brain Project appears to use a combination of emulation and simulation techniques. The first stage of the program was to simulate a neocortical column at the molecular level; the program now appears to be creating a simplified functional simulation of the neocortical column in order to simulate many of them and to model their interactions.
With most projected mind uploading technology it is implicit that "copying" a consciousness could be as feasible as "moving" it, since these technologies generally involve simulating the human brain in a computer of some sort, and digital files such as computer programs can be copied precisely. It is also possible that the simulation could be created without the need to destroy the original brain, so that the computer-based consciousness would be a copy of the still-living biological person, although some proposed methods such as serial sectioning of the brain would necessarily be destructive. In both cases it is usually assumed that once the two versions are exposed to different sensory inputs, their experiences would begin to diverge, but all their memories up until the moment of the copying would remain the same.
By many definitions, both copies could be considered the "same person" as the single original consciousness before it was copied. At the same time, they can be considered distinct individuals once they begin to diverge, so the issue of which copy "inherits" what could be complicated. This problem is similar to that found when considering the possibility of teleportation, where in some proposed methods it is possible to copy (rather than only move) a mind or person. This is the classic philosophical issue of personal identity. The problem is made even more serious by the possibility of creating a potentially infinite number of initially identical copies of the original person, which would of course all exist simultaneously as distinct beings.
Philosopher John Locke published "An Essay Concerning Human Understanding" in 1689, in which he proposed the following criterion for personal identity: if you remember thinking something in the past, then you are the same person as he or she who did the thinking. Later philosophers raised various logical snarls, most of them caused by applying Boolean logic, the prevalent logic system at the time. It has been proposed that modern fuzzy logic can solve those problems,[10] showing that Locke's basic idea is sound if one treats personal identity as a continuous rather than discrete value.
In that case, when a mind is copied -- whether during mind uploading, or afterwards, or by some other means -- the two copies are initially two instances of the very same person, but over time, they will gradually become different people to an increasing degree.
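The continuous-identity reading above can be made concrete with a toy model. This is a minimal illustrative sketch, not from the source: the memory sets, the overlap measure (Jaccard similarity), and all names are assumptions chosen for the example. Identity is scored as a continuous degree of shared memories between two copies, equal to 1.0 at the moment of copying and declining as their experiences diverge.

```python
def identity_degree(memories_a, memories_b):
    """Continuous identity score in [0.0, 1.0]: the Jaccard overlap of two
    memory sets. 1.0 means indistinguishable persons, 0.0 means fully distinct."""
    union = memories_a | memories_b
    if not union:
        return 1.0  # two empty minds are trivially identical
    return len(memories_a & memories_b) / len(union)

# Both copies start with the same pre-copy memories...
shared = {f"memory_{i}" for i in range(100)}
copy_a, copy_b = set(shared), set(shared)

# ...then diverge as each accumulates different experiences.
for day in range(50):
    copy_a.add(f"a_experience_{day}")
    copy_b.add(f"b_experience_{day}")

print(round(identity_degree(copy_a, copy_b), 2))  # prints 0.5
```

On this model the question "are the two copies the same person?" has no Boolean answer; the score simply decays smoothly from 1.0 toward 0.0 as the copies' histories diverge, which is the fuzzy-logic resolution the text describes.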
The issue of copying versus moving is sometimes cited as a reason to think that destructive methods of mind uploading, such as serial sectioning of the brain, would actually destroy the consciousness of the original, leaving the upload a mere "copy" of that consciousness. Whether the original consciousness would transfer to the upload, whether it would be destroyed, or whether this is simply a matter of definition with no single "objectively true" answer, is ultimately a philosophical question that depends on one's views in the philosophy of mind.
Because of these philosophical questions about the survival of consciousness, some would feel more comfortable with a method of uploading in which the transfer is gradual, replacing the original brain with a new substrate over an extended period during which the subject remains fully conscious. This can be seen as analogous to the natural biological turnover of molecules in our brains, which are continually replaced by new ones taken in through eating and breathing, possibly amounting to the replacement of almost all the matter in our brains in as little as a few months.[11] As mentioned above, such a transfer would likely take place through gradual cyborging, either nanoscopically or macroscopically, wherein the brain (the original copy) would slowly be replaced, bit by bit, with artificial parts that function in a near-identical manner; assuming this were possible at all, the person would not necessarily notice any difference as more and more of their brain became artificial. A gradual transfer also raises questions of identity similar to the classical Ship of Theseus paradox, although the natural molecular turnover just mentioned raises these questions as well.
A computer capable of simulating a person might require microelectromechanical systems (MEMS), or perhaps optical or nanoscale computing, to achieve comparable speed at reduced size, along with sophisticated telecommunication between the brain and body (whether that body exists in virtual reality, artificially as an android, or cybernetically, kept in sync with a biological body through a transceiver). It would not, however, seem to require molecular nanotechnology.
If minds and environments can be simulated, the Simulation Hypothesis posits that the reality we see may in fact be a computer simulation, and that this is actually the most likely possibility.[12]
Uploading is a common theme in science fiction. Some of the earlier instances of this theme were in the Roger Zelazny 1968 novel Lord of Light and in Frederik Pohl's 1955 short story "Tunnel Under the World." A near miss was Neil R. Jones' 1931 short story "The Jameson Satellite", wherein a person's organic brain was installed in a machine, and Olaf Stapledon's "Last and First Men" (1930) had organic human-like brains grown into an immobile machine.
Another of the "firsts" is the novel Detta är verkligheten (This Is Reality, 1968) by the philosopher and logician Bertil Mårtensson, in which he describes people living in an uploaded state as a means to control overpopulation. The uploaded people believe that they are "alive", but in reality they are playing elaborate and advanced fantasy games. In a twist at the end, the author turns everything into one of the best "multiverse" ideas of science fiction. Together with Philip K. Dick's 1969 book Ubik, it takes the subject to its furthest point among the early novels in the field.
Frederik Pohl's Gateway series (also known as the Heechee Saga) deals with a human being, Robinette Broadhead, who "dies" and, due to the efforts of his wife, a computer scientist, as well as the computer program Sigfrid von Shrink, is uploaded into the "64 Gigabit space" (now archaic, but Fred Pohl wrote Gateway in 1976). The Heechee Saga deals with the physical, social, sexual, recreational, and scientific nature of cyberspace before William Gibson's award-winning Neuromancer, and the interactions between cyberspace and "meatspace" commonly depicted in cyberpunk fiction. In Neuromancer, a hacking tool used by the main character is an artificial infomorph of a notorious cyber-criminal, Dixie Flatline. The infomorph only assists in exchange for the promise that he be deleted after the mission is complete.
In the 1982 novel Software, part of the Ware Tetralogy by Rudy Rucker, one of the main characters, Cobb Anderson, has his mind uploaded and his body replaced with an extremely human-like android body. The robots who persuade Anderson into doing this sell the process to him as a way to become immortal.
In the 1997 novel "Shade's Children" by Garth Nix, one of the main characters, Shade (a.k.a. Robert Ingman), is an uploaded consciousness that guides the other characters through the post-apocalyptic world in which they live.
The fiction of Greg Egan has explored many of the philosophical, ethical, legal, and identity aspects of mind uploading, as well as the financial and computing aspects (i.e., hardware, software, processing power) of maintaining "copies". In Egan's Permutation City and Diaspora, "copies" are made by computer simulation of scanned brain physiology. Also, in Egan's "Jewelhead" stories, the mind is transferred from the organic brain to a small, immortal backup computer at the base of the skull, with the organic brain then being surgically removed.
The Takeshi Kovacs novels by Richard Morgan are set in a universe where mind transfer is part of standard life. With the use of cortical stacks, which record a person's memories and personality in a device implanted in the spinal vertebrae, it is possible to copy the individual's mind to a storage system at the time of death. The stack can be uploaded to a virtual-reality environment for interrogation, entertainment, or to pass the time during long-distance travel. It can also be implanted into a new body or "sleeve", which may or may not have biomechanical, genetic, or chemical "upgrades", since sleeves can be grown or manufactured. Interstellar travel is most often accomplished by sending digitized human freight ("dhf") over faster-than-light needlecast transmission.
In the "Requiem for Homo Sapiens" series of novels by David Zindell (Neverness, The Broken God, The Wild, and War in Heaven), the verb "cark" is used for uploading one's mind (and also for changing one's DNA). Carking is done for soul-preservation purposes by the members of the Architects church, and also for more sinister (or simply unknowable) purposes by the various "gods" that populate the galaxy such gods being human minds that have now grown into planet- or nebula-sized synthetic brains. The climax of the series centers around the struggle to prevent one character from creating a Universal Computer (under his control) that will incorporate all human minds (and indeed, the entire structure of the universe).
In the popular computer game Total Annihilation, the 4,000-year war that eventually culminated in the destruction of the Milky Way galaxy was started over the issue of mind transfer: one faction (the Core) attempted to enforce the complete conversion of humanity into machines, framed as a "public health measure" on the grounds that machines are durable and modular, while the other faction (the Arm) resisted.
In the popular science fiction show Stargate SG-1 the alien race who call themselves the Asgard rely solely on cloning and mind transferring to continue their existence. This was not a choice they made, but a result of the decay of the Asgard genome due to excessive cloning, which also caused the Asgard to lose their ability to reproduce. In the episode "Tin Man", SG-1 encounter Harlan, the last of a race that transferred their minds to robots in order to survive. SG-1 then discover that their minds have also been transferred to robot bodies. Eventually they learn that their minds were copied rather than uploaded and that the "original" SG-1 are still alive.
The Thirteenth Floor is a film made in 1999 directed by Josef Rusnak. In the film, a scientific team discovers a technology to create a fully functioning virtual world which they could experience by taking control of the bodies of simulated characters in the world, all of whom were self-aware. One plot twist was that if the virtual body a person had taken control of was killed in the simulation while they were controlling it, then the mind of the simulated character the body originally belonged to would take over the body of that person in the "real world".
The Matrix is a film released the same year as The Thirteenth Floor and built on the same kind of solipsistic philosophy. In The Matrix, the protagonist Neo discovers that the world he has been living in is nothing but a simulated dreamworld. This is better classified as virtual reality than as mind uploading, however, since Neo's physical brain is still required to host his mind: the mind (the information content of the brain) is never copied into an emulated brain in a computer. Instead, Neo's brain is connected to the Matrix via a brain-machine interface, and only the rest of his physical body is simulated. Neo is disconnected from this dreamworld by human rebels fighting AI-driven machines in what seems to be a never-ending war, and during the course of the movie he and his friends reconnect to the Matrix in order to fight the machine race.
In the series Battlestar Galactica the antagonists of the story are the Cylons, sentient computers created by man which developed to become nearly identical to human beings. When they die they rely on mind transferring to keep on living so that "death becomes a learning experience".
The 1995 movie Strange Days explores the idea of a technology capable of recording a conscious event. In this case, however, the mind itself is not uploaded into the device: the recorded event, whose time frame is limited to that of the recording session, is frozen on a data disc much like today's audio and video. Wearing the "helmet" in playback mode, another person can experience the brain's interpretation of external stimuli, along with the memories, feelings, thoughts, and actions that the original person recorded from his or her life. During playback, the observer temporarily sets aside his own memories and state of consciousness (the real self). In other words, one can "live" a moment in the life of another person, or "live" the same moment of one's own life more than once. In the movie, a direct link to a remote helmet can also be established, allowing another person to experience a live event.
Followers of the Raëlian religion advocate mind uploading in the process of human cloning to achieve eternal life. Living inside a computer is also seen by followers as an imminent possibility.[13]
However, mind uploading is also advocated by a number of secular researchers in neuroscience and artificial intelligence, such as Marvin Minsky. In 1993, Joe Strout created a small web site called the Mind Uploading Home Page, and began advocating the idea in Cryonics circles and elsewhere on the net. That site has not been actively updated in recent years, but it has spawned other sites including MindUploading.org, run by Randal A. Koene, Ph.D., who also moderates a mailing list on the topic. These advocates see mind uploading as a medical procedure which could eventually save countless lives.
Many Transhumanists look forward to the development and deployment of mind uploading technology, with many predicting that it will become possible within the 21st century due to technological trends such as Moore's Law. Many view it as the end phase of the Transhumanist project, which might be said to begin with the genetic engineering of biological humans, continue with the cybernetic enhancement of genetically engineered humans, and finally culminate in the replacement of all remaining biological aspects.
The book Beyond Humanity: CyberEvolution and Future Minds by Gregory S. Paul & Earl D. Cox, is about the eventual (and, to the authors, almost inevitable) evolution of computers into sentient beings, but also deals with human mind transfer.
Raymond Kurzweil, a prominent advocate of transhumanism and the likelihood of a technological singularity, has suggested that the easiest path to human-level artificial intelligence may lie in "reverse-engineering the human brain", which he usually uses to refer to the creation of a new intelligence based on the general "principles of operation" of the brain, but he also sometimes uses the term to refer to the notion of uploading individual human minds based on highly detailed scans and simulations. This idea is discussed on pp. 198-203 of his book The Singularity is Near, for example.
Hans Moravec describes and advocates mind uploading in both his 1988 book Mind Children: The Future of Robot and Human Intelligence and also his 2000 book Robot: Mere Machine to Transcendent Mind. Moravec is referred to by Marvin Minsky in Minsky's essay Will Robots Inherit the Earth?.[14]