Category Archives: Transhuman News
Isaac Asimov, the candy store kid who dreamed up robots – Salon
Posted: March 10, 2020 at 11:42 pm
The year 2020 marks a milestone in the march of robots into popular culture: the 100th anniversary of the birth of science fiction writer Isaac Asimov. Asimov coined the word 'robotics', invented the much-quoted Three Laws governing robot behavior, and passed on many myths and misconceptions that affect the way we feel about robots today.
A compulsive writer and homebody (possibly an agoraphobic), Asimov hated to travel: ironically, for a writer who specialized in fantastic tales often set on distant worlds, he hadn't been in an airplane since being flown home from Hawaii by the US Army after being released from service just before a test blast of the atomic bomb on Bikini Atoll. (Asimov once grimly observed that this stroke of luck probably saved his life by preventing him from getting leukemia, one of the side effects that afflicted many servicemen who were close to the blast.)
By 1956, Asimov had completed most of the stories that cemented his reputation as the grand master of science fiction, and set the ground rules for a new field of study called "robotics," a word he made up. Researchers like Marvin Minsky of MIT and William Shockley of Bell Labs had been doing pioneering work in artificial intelligence and robotics since the early 1950s, but they were not well known outside of the scientific and business communities. Asimov, on the other hand, was famous, his books so commercially successful that he quit his job as a tenured biochemistry professor at Boston University to write full-time. Asimov's 1950 short story collection, I, Robot, put forward a vision of the robot as humanity's friend and protector, at a time when many humans were wondering if their own species could be trusted not to self-destruct.
Born in January 1920, or possibly October 1919 (the exact date was uncertain because birth records weren't kept in the little Russian village where he came from), Asimov emigrated to Brooklyn in 1922 with his parents. Making a go of life in America turned out to be tougher than they expected, until his father scraped together enough money to buy a candy store. That decision would have a seismic impact on Isaac's future, and on robotics research and the narratives we tell ourselves about human-robot relationships to this day.
As a kid, Isaac worked long hours in the store, where he became interested in two attractions that pulled in customers: a slot machine that frequently needed to be dismantled for repairs, and pulp fiction magazines featuring death rays and alien worlds. Soon after the first rocket launches in the mid-1920s, scientists announced that space travel was feasible, opening the door to exciting tales of adventure in outer space. Atomic energy (the source of the death rays) was also coming into public consciousness as a potential "super weapon." But both atomic bombs and space travel were still very much in the realm of fiction; few people actually believed they'd see either breakthrough within their lifetimes.
The genre of the stories in the pulps wasn't new. Fantastical tales inspired by science and technology went back to the publication of Mary Shelley's Frankenstein in 1818, which speculated about the use of a revolutionary new energy source, electricity, to reanimate life. Jules Verne, H. P. Lovecraft, H. G. Wells, and Edgar Rice Burroughs all wrote novels touching on everything from time travel, to atomic-powered vehicles, to what we now call genetic engineering. But the actual term "science fiction" wasn't coined by any of them: that distinction goes to Hugo Gernsback, editor of the technical journal Modern Electrics, whose name would eventually be given to the Hugo, the annual award for the best science fiction writing.
Gernsback's interest in the genre started with a field that was still fairly new in his time: electrical engineering. Even in 1911, the nature of electricity was not fully understood, and random electrocutions were not uncommon; electricians weren't just tradesmen, but daredevils, taking their lives in their hands every time they wired a house or lit up a city street. Gernsback, perhaps gripped by the same restless derring-do as his readers, wasn't satisfied with writing articles about induction coils. In 1911, he penned a short story set in the twenty-third century and serialized it over several issues of Modern Electrics, a decision that must have baffled some of the electricians who made up his subscribers. At first, Gernsback called his mash-up of science and fiction "scientifiction," mercifully changing that mouthful to "science fiction." He went on to publish a string of popular magazines, including Science Wonder Stories, Wonder Stories, Science, and Astounding. (Gernsback's rich imagination didn't stretch far enough to come up with more original titles.)
Asimov's father stocked Gernsback's magazines in the candy store because they sold like hotcakes, but he considered them out-and-out junk. Young Isaac was forbidden to waste time reading about things that didn't exist and never would, like space travel and atomic weapons.
Despite (or possibly because of) his father's objections, Isaac began secretly reading every pulp science fiction magazine that appeared in the store, handling each one so carefully that Asimov Senior never knew they had been opened. Isaac finally managed to convince his father that one of Gernsback's magazines, Science Wonder Stories, had educational value (after all, the word "science" was in the title, wasn't it?).
Isaac sold his first short story when he was still an eighteen-year-old student, naively showing up at the offices of Astounding to personally deliver it to the editor, John W. Campbell. Campbell rejected the story (it was eventually published by a rival Gernsback publication, Amazing Stories) but encouraged Isaac to send him more. Over time, Campbell published a slew of stories that established Isaac, while still a university student, as a handsomely paid writer of science fiction.
When you read those early stories today, Asimov's weaknesses as a writer are painfully glaring. With almost no experience of the world outside of his school, the candy store, and his Brooklyn neighborhood, and no exposure to contemporary writers of his time like Hemingway or Fitzgerald, Isaac fell back on the flat, stereotypical characters and clichéd plots of pulp fiction. Isaac did have one big thing going for him, though: a science education.
By the early 1940s, Asimov was a graduate student in chemistry at Columbia University, as well as a member of the many science fiction fan clubs springing up all over Brooklyn, whose members' obsession with the minutiae of fantastical worlds would be familiar to any Comic-Con fan in a Klingon costume today. Asimov wrote stories that appealed to this newly emerging geeky readership, staying close enough to the boundaries of science to be plausible, while still instinctively understanding how to create wondrous fictional worlds.
The working relationship between Asimov and his editor, Campbell, turned into a highly profitable one for both publisher and author. But as Asimov improved his writing and tackled more complex themes, he ran into a roadblock: Campbell insisted that he would only publish human-centered stories. Aliens could appear as stock villains, but humans always had to come out on top. Campbell didn't just believe that people were superior to aliens, but that some people (white Anglo-Saxons) were superior to everyone else. Still a relatively young writer and unwilling to walk away from his lucrative gig with Campbell, Asimov looked for ways to work around his editor's prejudices. The answer: write about robots. Asimov's mechanical beings were created by humans, in their own image, as sidekicks, helpers, proxies, and, eventually, replacements. Endowed with what Asimov dubbed "positronic brains," his imaginary robots were even more cleverly constructed than the slot machine in the candy store.
Never a hands-on guy himself, Asimov was nonetheless interested in how mechanisms worked. Whenever the store's one-armed bandit had to be serviced, Isaac would watch the repairman open the machine and expose its secrets. The slot machine helped him imagine the mechanical beings in his stories.
Although Asimov can be credited with kick-starting a generation's love affair with robots, he was far from their inventor. (Even I, Robot borrowed its title from a 1939 story of the same name written by a pair of brothers under the joint pen name Eando Binder, a name eventually echoed in Bender, the beer-swilling, cigar-smoking robot star of the TV show Futurama.) But in writing his very first robot story, Asimov was both jumping on a new obsession of the 1920s and mining old, deep myths: ancient Jewish tales of the golem, a man made of mud and magically brought to life; stories as diverse as Pygmalion and Pinocchio; and engineering wonders like the eighteenth-century chess-playing Mechanical Turk and other automatons.
Robots have an ancient history, and a surprisingly whimsical one. Automatons have been frog-marching, spinet-playing, and minuet-dancing their way out of the human imagination for hundreds, if not thousands, of years, but it wasn't until the machine age of the early twentieth century that robots appeared as thinking, reasoning substitute humans. The word robot (Czech for "mechanical worker") wasn't coined in a patent office or on a technical blueprint, but as the title of a fantastical play by Karel Čapek, Rossum's Universal Robots, which was first performed in 1920, the reputed year of Isaac Asimov's birth. In adopting robots as his main characters, and the challenges and ethics of human life in a robotic world as one of his central themes, Asimov found his voice as a writer. His robots are more sympathetic and three-dimensional than his human characters. In exploring the dynamics of human-robot partnerships, as he would do particularly well in detective/robot "buddy" stories such as his 1954 novel The Caves of Steel, he invented a subgenre within the broader world of science fiction.
Asimov's humanoid robots were governed by the Three Laws of Robotics. More whimsical than scientific, they established ground rules for an imaginary world where humans and mechanical beings coexisted. Eventually, the Three Laws were quoted by researchers in two academic fields that were still unnamed in the 1940s: artificial intelligence and robotics.
First published in Astounding magazine in 1942 as part of Asimov's fourth robot story, "Runaround," the Three Laws state:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
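Rendered in modern terms, the Laws read almost like a specification with a strict precedence ordering: the First Law overrides the Second, which overrides the Third. As a purely illustrative aside (not Asimov's own formalism, and no real robotics standard), here is how that precedence might be sketched in code; every predicate below is an invented placeholder for a judgment no actual machine can reliably make:

```python
# Purely illustrative: the Three Laws as a priority-ordered action filter.
# The predicates below are invented placeholders; making such judgments
# reliably is far beyond any real robot, which is part of Asimov's point.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False        # would the action injure a human?
    inaction_harm: bool = False      # would NOT acting allow harm to a human?
    ordered_by_human: bool = False   # was the action ordered by a human?
    destroys_robot: bool = False     # would the action destroy the robot?

def permissible(action: Action) -> bool:
    # First Law: no injuring humans, and no standing by while they're harmed.
    if action.harms_human:
        return False
    if action.inaction_harm and action.name == "do_nothing":
        return False
    # Second Law: obey human orders unless that conflicts with the First Law.
    # Third Law: self-preservation, unless it conflicts with the First or Second.
    if action.destroys_robot and not (action.ordered_by_human or action.inaction_harm):
        return False
    return True

print(permissible(Action("pull_human_from_fire", destroys_robot=True, inaction_harm=True)))  # True
print(permissible(Action("do_nothing", inaction_harm=True)))  # False
```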
According to Asimov's biographer Michael White in Isaac Asimov: A Life of the Grand Master of Science Fiction (New York: Carroll & Graf, 2005), "Asimov was flattered that he had established a set of pseudoscientific laws. Despite the fact that in the early 1940s the science of robotics was a purely fictional thing, he somehow knew that one day they would provide the foundation for a real set of laws."
The Three Laws would continue to appear not only in the world of robot-driven books and films (like Aliens (1986), where the laws are synopsized by the synthetic human Bishop when trying to reassure the robot-phobic heroine Ellen Ripley) but also in the work of some real-world roboticists and AI researchers, who are now considering how to develop a moral code for machines that may one day have to make independent, life-or-death decisions.
Excerpt from:
Isaac Asimov, the candy store kid who dreamed up robots - Salon
The Robots Are Coming – Boston Review
Posted: at 11:42 pm
In the overhyped age of deep learning, rumors of thinking robots are greatly exaggerated. Still, we cannot afford to leave decisions about the development of even this sort of AI in the hands of those who stand to reap vast profits from its use.
Editors' Note: The philosopher Kenneth A. Taylor passed away suddenly this winter. Boston Review is proud to publish this essay, which grows out of talks Ken gave throughout 2019, in collaboration with his estate. Preceding it is an introductory note by Ken's colleague, John Perry.
In memoriam Ken Taylor
On December 2, 2019, a few weeks after his sixty-fifth birthday, Ken Taylor announced to all of his Facebook friends that the book he had been working on for years, Referring to the World, finally existed in an almost complete draft. That same day, while at home in the evening, Ken died suddenly and unexpectedly. He is survived by his wife, Claire Yoshida; son, Kiyoshi Taylor; parents, Sam and Seretha Taylor; brother, Daniel; and sister, Diane.
Ken was an extraordinary individual. He truly was larger than life. Whatever the task at hand, whether it was explaining some point in the philosophy of language, coaching Kiyoshi's little league team, chairing the Stanford Philosophy department and its Symbolic Systems Program, debating at Stanford's Academic Senate, or serving as president of the Pacific Division of the American Philosophical Association (APA), Ken went at it with ferocious energy. He put incredible effort into teaching. He was one of the last Stanford professors to always wear a tie when he taught, to show his respect for the students who make it possible for philosophers to earn a living doing what we like to do. His death leaves a huge gap in the lives of his family, his friends, his colleagues, and the Stanford community.
Ken went to college at Notre Dame. He entered the School of Engineering, but it didn't quite satisfy his interests, so he shifted to the Program of Liberal Studies and became its first African American graduate. Ken came from a religious family, and never lost interest in the questions with which religion deals. But by the time he graduated he had become a naturalistic philosopher; his senior essay was on Kant and Darwin.
Ken was clearly very much the same person at Notre Dame that we knew much later. Here is a memory from Katherine Tillman, a professor in the Liberal Studies Program:
This is how I remember our beloved and brilliant Ken Taylor: always with his hand up in class, always with that curious, questioning look on his face. He would shift a little in his chair and make a stab at what was on his mind to say. Then he would formulate it several more times in questions, one after the other, until he felt he got it just right. And he would listen hard, to his classmates, to his teachers, to whoever could shed some light on what it was he wanted to know. He wouldn't give up, though he might lean back in his chair, fold his arms, and continue with that perplexed look on his face. He would ask questions about everything. Requiescat in pace.
From Notre Dame Taylor went to the University of Chicago; there his interests solidified in the philosophy of language. His dissertation was on reference, the theory of how words refer to things in the world; his advisor was the philosopher of language Leonard Linsky. We managed to lure Taylor to Stanford in 1995, after stops at Middlebury, the University of North Carolina, Wesleyan, the University of Maryland, and Rutgers.
In 2004 Taylor and I launched the public radio program Philosophy Talk, billed as "the program that questions everything, except your intelligence." The theme song is "Nice Work if You Can Get It," which expresses the way Ken and I both felt about philosophy. The program dealt with all sorts of topics. We found ourselves reading up on every philosopher we discussed, from Plato to Sartre to Rawls, and on every topic with a philosophical dimension, from terrorism and misogyny to democracy and genetic engineering. I grew pretty tired of this after a few years. I had learned all I wanted to know about important philosophers and topics. I couldn't wait after each Sunday's show to get back to my world: the philosophy of language and mind. But Ken seemed to love it more and more with each passing year. He loved to think; he loved forming opinions, theories, hypotheses, and criticisms on every possible topic; and he loved talking about them with the parade of distinguished guests who joined us.
Until the turn of the century, Ken's publications lay pretty solidly in the philosophy of language and mind and closely related areas. But later we begin to find things like "How to Vanquish the Still Lingering Shadow of God" and "How to Hume a Hegel-Kant: A Program for the Naturalization of Normative Consciousness." Normativity (the connection between reason, duty, and life) is a somewhat more basic issue in philosophy than proper names. By the time of his 2017 APA presidential address, "Charting the Landscape of Reason," it seemed to me that Ken had clearly gone far beyond issues of reference, and not only on Sunday morning for Philosophy Talk. He had found a broader and more natural home for his active, searching, and creative mind. He had become a philosopher who had interesting things to say not only about the most basic issues in our field but all sorts of wider concerns. His Facebook page included a steady stream of thoughtful short essays on social, political, and economic issues. As the essay below shows, he could bring philosophy, cognitive science, and common sense to bear on such issues, and wasn't afraid to make radical suggestions.
Some of us are now finishing the references and preparing an index for Referring to the World, to be published by Oxford University Press. His next book was to be The Natural History of Normativity. He died as he was consolidating the results of thirty-five years of exciting, productive thinking on reference, and beginning what should have been many, many more productive and exciting years spent illuminating reason and normativity, interpreting the great philosophers of the past, and using his wisdom to shed light on social issues, from robots to all sorts of other things.
His loss was not just the loss of a family member, friend, mentor, and colleague to those who knew him, but the loss, for the whole world, of what would have been an illuminating and important body of philosophical and practical thinking. His powerful and humane intellect will be sorely missed.
John Perry
Among the works of man, which human life is rightly employed in perfecting and beautifying, the first in importance surely is man himself. Supposing it were possible to get houses built, corn grown, battles fought, causes tried, and even churches erected and prayers said, by machinery (by automatons in human form) it would be a considerable loss to exchange for these automatons even the men and women who at present inhabit the more civilized parts of the world, and who assuredly are but starved specimens of what nature can and will produce. Human nature is not a machine to be built after a model, and set to do exactly the work prescribed for it, but a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing.
John Stuart Mill, On Liberty (1859)
Some believe that we are on the cusp of a new age. The day is coming when practically anything that a human can do (at least anything that the labor market is willing to pay a human being a decent wage to do) will be doable more efficiently and cost-effectively by some AI-driven automated device. If and when that day does arrive, those who own the means of production will feel ever-increasing pressure to discard human workers in favor of an artificially intelligent workforce. They are likely to do so as unhesitatingly as they have always set aside outmoded technology in the past.
To be sure, technology has disrupted labor markets before. But until now, even the most far-reaching of those disruptions have been relatively easy to adjust to and manage. That is because new technologies have heretofore tended to displace workers from old jobs that either no longer needed to be done, or at least no longer needed to be done by humans, into either entirely new jobs created by the new technology, or old jobs for which the new technology, directly or indirectly, caused increased demand.
This time things may be radically different. Thanks primarily to AI's presumed potential to equal or surpass every human cognitive achievement or capacity, it may be that many humans will be driven out of the labor market altogether.
Yet it is not necessarily time to panic. Skepticism about the impact of AI is surely warranted on inductive grounds alone. Way back in 1956, at the Dartmouth Summer Research Project on Artificial Intelligence, an event that launched the first AI revolution, the assembled gaggle of AI pioneers (all ten of them) breathlessly anticipated that the mystery of fully general artificial intelligence could be solved within a couple of decades at most. In 1961, Minsky, for example, was confidently proclaiming, "We are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines." Well over a half century later, we are still waiting for the revolution to be fully achieved.
AI has come a long way since those early days: it is now a very big deal. It is a major focus of academic research, and not just among computer scientists. Linguists, psychologists, the legal establishment, the medical establishment, and a whole host of others have gotten into the act in a very big way. AI may soon be talking to us in flawless and idiomatic English, counseling us on fundamental life choices, deciding who gets imprisoned for how long, and diagnosing our most debilitating diseases. AI is also big business. The worldwide investment in AI technology, which stood at something like $12 billion in 2018, will top $200 billion by 2025. Governments are hopping on the AI bandwagon. The Chinese envision the development of a trillion-dollar domestic AI industry in the relatively near term. They clearly believe that the nation that dominates AI will dominate the world. And yet, a sober look at the current state of AI suggests that its promise and potential may still be a tad oversold.
Excessive hype is not confined to the distant past. One reason for my own skepticism is the fact that in recent years the AI landscape has come to be progressively more dominated by AI of the newfangled deep learning variety, rather than by AI of the more or less passé logic-based symbolic-processing variety, affectionately known in some quarters, and derisively in others, as GOFAI (Good Old-Fashioned Artificial Intelligence).
It was mostly logic-based, symbolic-processing GOFAI that so fired the imaginations of the founders of AI back in 1956. Admittedly, to the extent that you measure success by where time, money, and intellectual energy are currently being invested, GOFAI looks to be something of a dead letter. I don't want to rehash the once-hot theoretical and philosophical debates over which approach to AI (logic-based symbolic processing, or neural nets and deep learning) is the more intellectually satisfying. Especially back in the '80s and '90s, those debates raged with what passes in the academic domain as white-hot intensity. They no longer do, but not because they were decisively settled in favor of deep learning and neural nets more generally. It's more that machine learning approaches, mostly in the form of deep learning, have recently achieved many impressive results. Of course, these successes may not be due entirely to the anti-GOFAI character of these approaches. Even GOFAI has gotten into the machine learning act with, for example, Bayesian networks. The more relevant divide may be between probabilistic approaches of various sorts and logic-based approaches.
However exactly you divide up the AI landscape, it is important to distinguish what I call AI-as-engineering from what I call AI-as-cognitive-science. AI-as-engineering isn't particularly concerned with mimicking the precise way in which the human mind-brain does distinctively human things. The strategy of engineering machines that do things that are in some sense intelligent, even if they do what they do in their own way, is a perfectly fine way to pursue artificial intelligence. AI-as-cognitive-science, on the other hand, takes as its primary goal that of understanding and perhaps reverse engineering the human mind. AI pretty much began its life in this business, perhaps because human intelligence was the only robust model of intelligence it had to work with. But these days, AI-as-engineering is where the real money turns out to be.
Though there is certainly value in AI-as-engineering, I confess I still have a hankering for AI-as-cognitive-science. And that explains why I myself still feel the pull of the old logic-based symbolic-processing approach. Whatever its failings, GOFAI had as one among its primary goals that of reverse engineering the human mind. Many decades later, though we have definitely made some progress, we still haven't gotten all that far with that particular endeavor. When it comes to that daunting task, just about all the newfangled probability- and statistics-based approaches to AI (most especially deep learning, but even approaches that have more in common with GOFAI, like Bayesian nets) strike me as, if not exactly nonstarters, then at best only a very small part of the truth. Probably the complete answer will involve some synthesis of older approaches and newer approaches, and perhaps even approaches we haven't yet thought of. Unfortunately, although there are a few voices starting to sing such an ecumenical tune, neither ecumenicalism nor intellectual modesty is exactly the rage these days.
Back when the competition between AI paradigms was still a matter of intense theoretical and philosophical dispute, one of the advantages often claimed on behalf of artificial neural nets over logic-based symbolic approaches was that the former, but not the latter, were directly neuronally inspired. By directly modeling its computational atoms and computational networks on neurons and their interconnections, the thought went, artificial neural nets were bound to be truer to how the actual human brain does its computing than their logic-based symbolic-processing competitors could ever hope to be.
This is not the occasion to debate such claims at length. My own hunch is that there is little reason to believe that deep learning actually holds the key to finally unlocking the mystery of general-purpose, humanlike intelligence. Despite being neuronally inspired, many of the most notable successes of the deep learning paradigm depend crucially on the ability of deep learning architectures to do something that the human brain isn't all that good at: extracting highly predictive, though not necessarily deeply explanatory, patterns on the basis of being trained, via either supervised or unsupervised learning, on huge data sets consisting, from the machine-eye point of view, of a plethora of weakly correlated feature bundles, without the aid of any top-down direction or built-in worldly knowledge. That is an extraordinarily valuable and computationally powerful technique for AI-as-engineering. And it is perfectly suited to the age of massive data, since the successes of deep learning wouldn't be possible without big data.
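By way of illustration, here is a minimal, invented sketch of the kind of pattern extraction at issue: a tiny logistic-regression learner that, given enough labeled examples, finds a highly predictive boundary over weakly correlated features without any built-in knowledge of what those features mean. All data and parameters below are made up for the example:

```python
# Minimal sketch of supervised pattern extraction (illustrative only).
# A logistic-regression "learner" finds a predictive decision boundary
# from labeled examples alone, with no built-in knowledge of the domain.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 1,000 examples, 20 weakly correlated features.
# Only a hidden linear pattern relates features to labels.
X = rng.normal(size=(1000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w + rng.normal(scale=2.0, size=1000) > 0).astype(float)

# Train by gradient descent on the logistic loss.
w = np.zeros(20)
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= lr * X.T @ (p - y) / len(y)     # gradient step

accuracy = np.mean(((1.0 / (1.0 + np.exp(-(X @ w)))) > 0.5) == (y == 1))
print(f"training accuracy: {accuracy:.2f}")  # high, yet the model "understands" nothing
```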
It's not that we humans are pikers at pattern extraction. As a species, we do remarkably well at it, in fact. But I doubt that the capacity for statistical analysis of huge data sets is the core competence on which all other aspects of human cognition are ultimately built. But here's the thing. Once you've invented a really cool new hammer (which deep learning very much is) it's a very natural human tendency to start looking for nails to hammer everywhere. Once you are on the lookout for nails everywhere, you can expect to find a lot more of them than you might have at first thought, and you are apt to find some of them in some pretty surprising places.
But if it's really AI-as-cognitive-science that you are interested in, it's important not to lose sight of the fact that it may take a bit more than our cool new deep learning hammer to build a humanlike mind. You can't let your obsession with your cool new hammer make you lose sight of the fact that in some domains, the human mind seems to deploy quite a different trick from the main sorts of tricks at the core not only of deep learning but also of other statistical paradigms (some of which, again, are card-carrying members of the GOFAI family). In particular, the human mind is often able to learn quite a lot from relatively little and comparatively impoverished data. This remarkable fact has led some to conjecture that the human mind must come antecedently equipped with a great deal of endogenous, special-purpose, task-specific cognitive structure and content. If true, that alone would suffice to make the human mind rather unlike your typical deep learning architecture.
Indeed, deep learning takes quite the opposite approach. A deep learning network may be trained to represent words, say, as points in a micro-featural vector space of, say, three hundred dimensions, and on the basis of such representations it might learn, after many epochs of training on a really huge data set, to make the sort of pragmatic inferences (from, say, "John ate some of the cake" to "John did not eat all of the cake") that humans make quickly, easily, and naturally, without the focused training required by deep learning and similar approaches. The point is that deep learning can learn to do various cool things (things that one might once have thought only human beings could do) and although it can do some of those things quite well, it still seems highly unlikely that it does those cool things in precisely the way that we humans do.
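For readers unfamiliar with such vector-space representations, here is a toy illustration of the basic idea that geometric proximity stands in for semantic relatedness. The three-dimensional vectors below are invented for the example; real systems learn hundreds of dimensions from huge corpora:

```python
# Toy sketch of words as points in a vector space (values invented).
# Real embeddings have hundreds of learned dimensions; these have three.
import numpy as np

embedding = {
    "cake":    np.array([0.9, 0.1, 0.0]),
    "pie":     np.array([0.8, 0.2, 0.1]),
    "justice": np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    # Cosine similarity: near 1.0 means similar direction (related words),
    # near 0.0 means unrelated.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embedding["cake"], embedding["pie"]))      # high: related words
print(cosine(embedding["cake"], embedding["justice"]))  # low: unrelated words
```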
I stress again, though, that if you are not primarily interested in AI-as-cognitive-science, but solely in AI-as-engineering, you are free to care not one whit whether deep learning architectures and their cousins hold the ultimate key to understanding human cognition in all its manifestations. You are free to embrace and exploit the fact that such architectures are not just good, but extraordinarily good, at what they do, at least when they are given large enough data sets to work with. Still, in thinking about the future of AI, especially in light of both our darkest dystopian nightmares and our brightest utopian dreams, it really does matter whether we are envisioning a future shaped by AI-as-engineering or by AI-as-cognitive-science. If I am right that there are many mysteries about the human mind that currently dominant approaches to AI are ill-equipped to help us solve, then to the extent that such approaches continue to dominate AI into the future, we are very unlikely to be inundated anytime soon with a race of thinking robots, at least not if we mean by "thinking" that peculiar thing that we humans do, done in precisely the way that we humans do it.
Deep learning and its cousins may do what they do better than we could possibly do what they do. But that doesn't imply that they do what we do better than we do what we do. If so, then at the very least we needn't fear, at least not yet, that AI will radically outpace humans in our most characteristically human modes of cognition. Nor should we expect the imminent arrival of the so-called singularity, in which human intelligence and machine intelligence somehow merge to create a superintelligence that surpasses the limits of each. Given that we still haven't managed to understand the full bag of tricks our amazing minds deploy, we haven't the slightest clue as to what such a merger would even plausibly consist in.
Nonetheless, it would still be a major mistake to lapse into a false sense of security about the potential impact of AI on the human world. Even if current AI is far from being the holy grail of a science of mind that finally allows us to reverse engineer it, it will still allow us to engineer extraordinarily powerful cognitive networks, as I will call them, in which human intelligence and artificial intelligence of some kind or other play quite distinctive roles. Even if we never achieve a single further breakthrough in AI-as-cognitive-science, from this day forward, for as long as our species endures, the task of managing what I will call the division of cognitive labor between human and artificial intelligence within engineered cognitive networks will be with us to stay. And it will almost certainly be a rather fraught and urgent matter. And this will be thanks in large measure to the power of AI-as-engineering rather than to the power of AI-as-cognitive-science.
Indeed, there is a distinct possibility that AI-as-engineering may eventually reduce the role of human cognitive labor within future cognitive networks to the bare minimum. It is that possibility (not the possibility of the so-called singularity, or the possibility that we will soon be surrounded by a race of free, autonomous, creative, or conscious robots chafing at our undeserved dominance over them) that should now and for the foreseeable future worry us most. Long before the singularity looms even on some distant horizon, the sort of AI technology that AI-as-engineering is likely to give us already has the potential to wreak considerable havoc on the human world. It will not necessarily do so by superseding human intelligence, but simply by displacing a great deal of it within various engineered cognitive networks. And if that's right, it simply won't take the arrival of anything close to full-scale super AI, as we might call it, to radically disrupt, for good or for ill, the built cognitive world.
Start with the fact that much of the cognitive work that humans are currently tasked to do within extant cognitive networks doesn't come close to requiring the full range of human cognitive capacities to begin with. A human mind is an awesome cognitive instrument, one of the most powerful instruments that nature has seen fit to evolve. (At least on our own lovely little planet! Who knows what sorts of minds evolution has managed to design on the millions upon millions of mind-infested planets that must be out there somewhere?) But stop and ask yourself: how much of the cognitive power of her amazing human mind does a coffeehouse barista, say, really use in her daily work?
Not much, I would wager. And precisely for that reason, it's not hard to imagine coffeehouses of the future in which more and more of the cognitive labor that needs doing is done by AI finely tuned to the cognitive loads it will need to carry within such networks. More generally, it is abundantly clear that much of the cognitive labor that needs doing within our total cognitive economy, and that now happens to be performed by humans, is cognitive labor for which we humans are often vastly overqualified. It would be hard to lament the offloading of such cognitive labor onto AI technology.
But there is also a flip side. The twenty-first-century economy is already a highly data-driven economy. It is likely to become a great deal more so, thanks, among other things, to the emergence of the internet of things. The built environment will soon be even more replete with so-called smart devices. And these smart devices will constantly be collecting, analyzing, and sharing reams and reams of data on every human being who interacts with them. It will not be just the usual suspects, like our computers, smartphones, or smartwatches, that are so engaged. It will be our cars, our refrigerators, indeed every system or appliance in every building in the world. There will be data-collecting monitors of every sort: heart monitors, sleep monitors, baby monitors. There will be smart roads, smart train tracks. There will be smart bridges that constantly monitor their own state and automatically alert the transportation department when they need repair. Perhaps they will shut themselves down and spontaneously reroute traffic while they are waiting for the repair crews to arrive. It will require an extraordinary amount of cognitive labor to keep such a built environment running smoothly. And for much of that cognitive labor, we humans are vastly underqualified. Try, for example, running a data mining operation using nothing but human brain power. You'll see pretty quickly that human brains are not at all the right tool for the job, I would wager.
Perhaps what should really worry us, I am suggesting, is the possibility that the combination of our overqualification for certain cognitive labor and underqualification for other cognitive labor will leave us open to something of an AI pincer attack. AI-as-engineering may give us the power to design cognitive networks in which each node is exquisitely fine-tuned to the cognitive load it is tasked to carry. Since distinctively human intelligence will often be either too much or too little for the task at hand, future cognitive networks may assign very little cognitive labor to humans. And that is precisely how it might come about that the demand for human cognitive labor within the overall economy is substantially diminished. How should we think about the advance of AI in light of its capacity to allow us to re-imagine and re-engineer our cognitive networks in this way? That is the question I address in the remainder of this essay.
There may be lessons to be learned from the ways that we have coped with disruptive technological innovations of the past. So perhaps we should begin by looking backward rather than forward. The first thing to say is that many innovations of the past are now widely seen as good things, at least on balance. They often spared humans work that paid dead-end wages, or work that was dirty and dangerous, or work that was the source of mind-numbing drudgery.
But we should be careful not to overstate the case for the liberating power of new technology, lest that lure us into a misguided complacency about what is to come. Even looking backward, we can see that new and disruptive technologies have sometimes been the culprit in increasing rather than decreasing the drudgery and oppressiveness of work. They have also served to rob work of a sense of meaning and purpose. The assembly line is perhaps the prime example. The rise of the assembly line doubtless played a vital role in making the mass production and distribution of all manner of goods possible. It made the factory worker vastly more productive than, say, the craftsman of old. In so doing, it increased the market for mass-produced goods, while simultaneously diminishing the market for the craftsman's handcrafted goods. As such, it played a major role in increasing living standards for many. But it also had the downside effect of turning many human agents into mere appendages within a vast, impersonal, and relentless mechanism of production.
All things considered, it would be hard to deny that trading in skilled craftsmanship for unskilled or semiskilled factory labor was a good thing. I do not intend to relitigate that choice here. But it is worth asking whether all things really were considered, and considered not just by those who owned the means of production but collectively, by all the relevant stakeholders. I am no historian of political economy, but I venture the conjecture that the answer is a resounding no. More likely than not, disruptive technological change was simply foisted on society as a whole, primarily by those who owned and controlled the means of production, and primarily to serve their own profit, with little if any intentionality or democratic deliberation and participation on the part of a broader range of stakeholders.
Given the disruptive potential even of AI-as-engineering, we cannot afford to leave decisions about the future development and deployment of even this sort of AI solely in the hands of those who stand to make vast profits from its use. This time around, we have to find a way to ensure that all relevant stakeholders are involved and that we are more intentional and deliberative in our decision making than we were about the disruptive technologies of the past.
I am not necessarily advocating the sort of socialism that would require the means of production to be collectively owned or regulated. But even if we aren't willing to go so far as collectively seizing the machines, as it were, we must get past the point of treating not just AI but all technology as a thing unto itself, with a life of its own, whose development and deployment is entirely independent of our collective will. Technology is never self-developing or self-deploying. Technology is always and only developed and deployed by humans, in various political, social, and economic contexts. Ultimately, it is and must be entirely up to us, and up to us collectively, whether, how, and to what end it is developed and deployed. As soon as we lose sight of the fact that it is up to us collectively to determine whether AI is developed and deployed in a way that enhances the human world rather than diminishes it, it becomes all too easy to give in to either utopian cheerleading or dystopian fear-mongering. We need to discipline ourselves not to give in to either prematurely. Only such discipline will afford us the space to consider various tradeoffs deliberatively, reflectively, and intentionally.
Utopian cheerleaders for AI often blithely insist that it is more likely to decrease than increase the amount of dirt, danger, or drudgery to which human workers are subject. As long as AI is not turned against us (and why should we think that it would be?) it will not eliminate the work for which we humans are best suited, but only the work that would be better left to machines in the first place.
I do not mean to dismiss this as an entirely unreasonable thought. Think of coal mining. Time was when coal mining was extraordinarily dangerous and dirty work. Over 100,000 coal miners died in mining accidents in the U.S. alone during the twentieth century, not to mention the amount of black lung disease they suffered. Thanks largely to automation and computer technology, including robotics and AI technology, your average twenty-first-century coal industry worker relies a lot more on his or her brains than on mere brawn, and is subject to a lot less danger and dirt than earlier generations of coal miners were. Moreover, it takes a lot fewer coal miners to extract more coal than the coal miners of old could possibly hope to extract.
To be sure, thanks to certain other forces having nothing to do with the AI revolution, the number of people dedicated to extracting coal from the earth will likely diminish even further in the relatively near term. But that just goes to show that even if we could manage to tame AI's effect on the future of human work, we've still got plenty of other disruptive challenges to face as we begin to re-imagine and re-engineer the made human world. And that gives us even more reason to be intentional, reflective, and deliberative in thinking about the development and deployment of new technologies. Whatever one technology can do on its own to disrupt the human world, the interactive effects of multiple, apparently independent technologies can greatly amplify the total level of disruption to which we may be subject.
I suppose that, if we had to choose, utopian cheerleading would at least feel more satisfying and uplifting than dystopian fear-mongering. But we shouldn't be blind to the fact that any utopian buzz we may fall into while contemplating the future may serve to blind us to the fact that AI is very likely to transform, perhaps radically, our collective intuitive sense of where the boundary between work better consigned to machines and work best left to us humans should fall in the first place. The point is that that boundary is likely to be drawn, erased, and redrawn by the progress of AI. And as our conception of the proper boundary evolves, our conception of what we humans are here for is likely to evolve right along with it.
The upshot is clear. If it is only relative to our sense of where the boundary is properly drawn that we could possibly know whether to embrace or recoil from the future, then we are currently in no position to judge on behalf of our future selves which outcomes are to be embraced and which are to be feared. Nor, perhaps, are we entitled to insist that our current sense of where the boundary should be drawn should remain fixed for all time and circumstances.
To drive this last point home, it will help to consider three different cognitive networks in which AI already plays, or soon can be expected to play, a significant role: the air traffic control system, the medical diagnostic and treatment system, and what I'll call the ground traffic control system. My goal in so doing is to examine some subtle ways in which our sense of proper boundaries may shift.
Begin with the air traffic control system, one of the more developed systems in which brain power and computer power have been jointly engineered to cooperate in systematically discharging a variety of complex cognitive burdens. The system has steadily evolved over many decades into one in which a surprising amount of cognitive work is done by software rather than humans. To be sure, there are still many humans involved. Human pilots sit in every cockpit, and human brains monitor every air traffic control panel. But it is fair to say that humans, especially human pilots, no longer really fly airplanes on their own within this vast cognitive network. It's really the system as a whole that does the flying. Indeed, it's only on certain occasions, and on an as-needed basis, that the human beings within the system are called upon to do anything at all. Otherwise, they are mostly along for the ride.
This particular human-computer cognitive network works extremely well for the most part. It is extraordinarily safe in comparison with travel by automobile. And it is getting safer all the time. Its ever-increasing safety would seem to be in large measure due to the fact that more and more of the cognitive labor done within the system is being offloaded onto machine intelligence and taken away from human intelligence. Indeed, I would hazard the guess that almost no increases in safety have resulted from taking burdens away from algorithms and machines and giving them to humans instead.
To be sure, this trend started long before AI had reached anything like its current level of sophistication. But with the coming of age of AI-as-engineering, you can expect the trend only to accelerate. For example, starting in the 1970s, decades of effort went into building human-designed rules meant to provide guidance to pilots as to which maneuvers, executed in which order, would enable them to avoid any possible or pending mid-air collision. In more recent years, engineers have been using AI techniques to help design a new collision avoidance system that will make possible a significant increase in air safety. The secret to the new system is that instead of leaving the discovery of optimal rules of the airways to human ingenuity, the problem has been turned over to the machines. The new system uses computational techniques to derive an optimized decision logic that better deals with various sources of uncertainty and better balances competing system objectives than anything we humans would be likely to think up on our own. The new system, called Airborne Collision Avoidance System X (ACAS X), promises to pay considerable dividends by reducing both the risk of mid-air collision and the need for alerts that call for corrective maneuvers in the first place.
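ACAS X is commonly described as deriving its alerting logic offline by optimizing over a probabilistic model of aircraft encounters. The following is a drastically simplified, invented illustration of that general technique (value iteration over a toy Markov decision process), not the actual ACAS X model; all states, probabilities, and costs below are made up:

```python
# Toy value iteration over an invented 3-state encounter model.
# Illustrates deriving a decision logic by optimization rather than
# hand-written rules; this is NOT the real ACAS X model.
STATES = ["separated", "closing", "near_collision"]
ACTIONS = ["stay_level", "climb"]

# P[state][action] -> list of (next_state, probability). Invented numbers.
P = {
    "separated":      {"stay_level": [("separated", 0.95), ("closing", 0.05)],
                       "climb":      [("separated", 1.0)]},
    "closing":        {"stay_level": [("near_collision", 0.4), ("closing", 0.6)],
                       "climb":      [("separated", 0.8), ("closing", 0.2)]},
    "near_collision": {"stay_level": [("near_collision", 1.0)],
                       "climb":      [("closing", 0.7), ("near_collision", 0.3)]},
}

# Costs balance competing objectives: near-collisions are catastrophic,
# but maneuvers are not free (we also want few spurious climb alerts).
COST = {"near_collision": 100.0, "climb_action": 1.0}

def expected_cost(s, a, V, gamma):
    return (COST["near_collision"] * (s == "near_collision")
            + COST["climb_action"] * (a == "climb")
            + gamma * sum(p * V[ns] for ns, p in P[s][a]))

def value_iteration(gamma=0.95, sweeps=200):
    V = {s: 0.0 for s in STATES}
    for _ in range(sweeps):
        for s in STATES:
            V[s] = min(expected_cost(s, a, V, gamma) for a in ACTIONS)
    # Extract the optimized decision logic: the cheapest action per state.
    return {s: min(ACTIONS, key=lambda a: expected_cost(s, a, V, gamma))
            for s in STATES}

print(value_iteration())  # e.g. climb when closing, stay level when separated
```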
In all likelihood, the system will not be foolproof (probably no system ever will be). But in comparison with automobile travel, air travel is already extraordinarily safe. It's not because the physics makes flying inherently safer than driving. Indeed, there was a time when flying was much riskier than it currently is. What makes air travel so much safer is primarily the differences between the cognitive networks within which each operates. In the ground traffic control system, almost none of the cognitive labor has been offloaded onto intelligent machines. Within the air traffic control system, a great deal of it has.
To be sure, every now and then the flight system will call on a human pilot to execute a certain maneuver. When it does, the system typically isn't asking for anything like expert opinion from the human. Though it may sometimes need to do that, in the course of its routine, day-to-day operations the system relies hardly at all on the ingenuity or intuition of human beings, including human pilots. When the system does need a human pilot to do something, it usually just needs the human to expertly execute a particular sequence of maneuvers. Mostly things go right. Mostly the humans do what they are asked to do, when they are asked to do it. But it should come as no surprise that when things do go wrong, it is quite often the humans and not the machines that are at fault. Humans too often fail to respond, or they respond with the wrong maneuver, or they execute the needed maneuver, but in an untimely fashion.
I have focused on the air traffic control system because it is a relatively mature and stable cognitive network in which a robust balance between human and machine cognitive labor has been achieved over time. Given its robustness and stability and the degree of safety it provides, it's pretty hard to imagine anyone feeling any degree of nostalgia for the days when the task of navigating the airways fell more squarely on the shoulders of human beings and less squarely on machines. On the other hand, it is not at all hard to imagine a future in which the cognitive role of humans is reduced even further, if not entirely eliminated. No one would now dream of traveling on an airplane that wasn't furnished with the latest radar system or the latest collision avoidance software. Perhaps the day will soon come when no one would dream of traveling on an airplane piloted by, of all things, a human being rather than by a robotic AI pilot.
I suspect that what is true of the air traffic control system may eventually be true of many of the cognitive networks in which human and machine intelligence systematically interact. We may find that the cognitive labor once assigned to the human nodes has been given over to intelligent machines for narrow economic reasons alone, especially if we fail to engage in collective decision making that is intentional, deliberative, and reflective, and thereby leave ourselves at the mercy of the short-term economic interests of those who currently own and control the means of production.
We may comfort ourselves that even in such an eventuality, what is left to us humans will be cognitive work of very high value, finely suited to the distinctive capacities of human beings. But I do not know what would now assure us of the inevitability of such an outcome. Indeed, it may turn out that there isn't really all that much that needs doing within such networks that is best done by human brains at all. It may be, for example, that within most engineered cognitive networks, the human brains that still have a place will mostly be along for the ride. Both possibilities are, I think, genuinely live options. And if I had to place a bet, I would bet that for the foreseeable future the total landscape of engineered cognitive networks will increasingly contain networks of both kinds.
In fact, the two systems I mentioned earlier, the medical diagnostic and treatment system and the ground transportation system, already provide evidence for my conjecture. Start with the medical diagnostic and treatment system. Note that a great deal of medical diagnosis involves expertise at interpreting the results of various forms of medical imaging. As things currently stand, it is mostly human beings who do the interpreting. But an impressive variety of machine learning algorithms that can do at least as well as humans are being developed at a rapid pace. For example, CheXNet, developed at Stanford, promises to equal or exceed the performance of human radiologists in the diagnosis of a wide variety of different diseases from X-ray scans. Partly because of the success of CheXNet and other machine learning algorithms, Geoffrey Hinton, the founding father of deep learning, has come to regard radiologists as an endangered species. On his view, medical schools ought to stop training radiologists, beginning right now.
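To make the kind of system being described concrete, here is a minimal sketch of a CheXNet-style classifier. CheXNet's published outline pairs a DenseNet-121 backbone with a fourteen-output, multi-label head (one score per pathology in the ChestX-ray14 dataset); everything below, from the ImageNet starting weights to the placeholder file name, is an illustrative reconstruction of that outline, not the Stanford group's released code.

```python
# Illustrative sketch of a CheXNet-style chest X-ray classifier.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

NUM_PATHOLOGIES = 14  # one output per ChestX-ray14 finding (pneumonia, cardiomegaly, ...)

# DenseNet-121 backbone, as in CheXNet; a real system would load weights
# fine-tuned on chest X-rays rather than plain ImageNet weights.
model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, NUM_PATHOLOGIES)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def predict(path: str) -> torch.Tensor:
    """Return one probability per pathology for a single chest X-ray."""
    img = Image.open(path).convert("RGB")  # X-rays are grayscale; replicate to 3 channels
    batch = preprocess(img).unsqueeze(0)   # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    # Sigmoid, not softmax: findings are not mutually exclusive.
    return torch.sigmoid(logits).squeeze(0)

# probs = predict("chest_xray.png")  # hypothetical input file
```

The sigmoid rather than softmax output is the telling design choice: a single scan can show several findings at once, so each pathology is scored independently.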
Even if Hinton is right, that doesn't mean that all the cognitive work done by the medical diagnostic and treatment system will soon be done by intelligent machines. Though human-centered radiology may soon come to seem quaint and outmoded, there is, I think, no plausible short- to medium-term future in which human doctors are completely written out of the medical diagnostic and treatment system. For one thing, though the machines beat humans at diagnosis, we still outperform the machines when it comes to treatment, perhaps because humans are much better at things like empathy than any AI system is now or is likely to be anytime soon. Still, even if human doctors are never fully eliminated from the diagnostic and treatment cognitive network, it is likely that their enduring roles within such networks will evolve so much that the human doctors of tomorrow will bear little resemblance to the human doctors of today.
By contrast, there is a quite plausible near- to medium-term future in which human beings within the ground traffic control system are gradually reduced to the status of passengers. Someday in the not terribly distant future, our automobiles, buses, trucks, and trains will likely be part of a highly interconnected ground transportation system in which much of the cognitive labor is done by intelligent machines rather than human brains. The system will involve smart vehicles in many different configurations, each loaded with advanced sensors that allow them to collect, analyze, and act on huge stores of data, in coordination with one another, with the smart roadways on which they travel, and perhaps with some centralized information hub that constantly monitors the whole. Within this system, our vehicles will navigate the roadways and railways safely and smoothly with very little guidance from humans. Humans will be able to direct the system to get this or that cargo or passenger from here to there, but the details will be left to the system to work out without much, if any, human intervention.
Such a development, if and when it comes to full fruition, will no doubt be accompanied by quantum leaps in safety and efficiency. But it would also no doubt produce a steep, and possibly permanent, decrease in the net demand for human labor of the sort we referred to at the outset. All around the world, many millions of human beings make their living by driving things from one place to another. Labor of this sort has traditionally been rather secure, since it cannot be outsourced to foreign competitors: you cannot transport beer from Colorado to Ohio by hiring a low-wage driver operating a truck in Beijing. But it may soon be the case that we can outsource such work after all, not to foreign laborers but to intelligent machines, right here in our midst!
I end where I began. The robots are coming. Eventually, they may come for every one of us. Walls will not contain them. We cannot outrun them. Nor will running faster than the next human being suffice to save us from them. Not in the long run. They are relentless, never breaking pace, never stopping to savor their latest prey before moving on to the next.
If we cannot stop or reverse the robot invasion of the built human world, we must turn and face them. We must confront hard questions about what will and should become of both them and us as we welcome ever more of them into our midst. Should we seek to regulate their development and deployment? Should we accept the inevitability that we will lose much work to them? If so, perhaps we should rethink the very basis of our economy. Nor is it merely questions of money that we must face. There are also questions of meaning. What exactly will we do with ourselves if there is no longer any economic demand for human cognitive labor? How shall we find meaning and purpose in a world without work?
These are the sorts of questions that the robot invasion will force us to confront. It should be striking that these are also the questions presaged in my prescient epigraph from Mill. Over a century before the rise of AI, Mill realized that the most urgent question raised by the rise of automation would not be whether automata could perform certain tasks faster or cheaper or more reliably than human beings might. Instead, the most urgent question is what we humans would become in the process of substituting machine labor for human labor. Would such a substitution enhance us or diminish us? That has, in fact, always been the most urgent question raised by disruptive technologies, though we have seldom recognized it.
This time around, may we face the urgent question head on. And may we do so collectively, deliberatively, reflectively, and intentionally.
Read more:
The Robots Are Coming - Boston Review
Posted in Genetic Engineering
Comments Off on The Robots Are Coming – Boston Review
Atopic dermatitis (eczema) – Symptoms and causes – Mayo Clinic
Posted: March 9, 2020 at 1:45 pm
Overview
Atopic dermatitis (eczema) is a condition that makes your skin red and itchy. It's common in children but can occur at any age. Atopic dermatitis is long lasting (chronic) and tends to flare periodically. It may be accompanied by asthma or hay fever.
No cure has been found for atopic dermatitis. But treatments and self-care measures can relieve itching and prevent new outbreaks. For example, it helps to avoid harsh soaps, moisturize your skin regularly, and apply medicated creams or ointments.
Atopic dermatitis (eczema) signs and symptoms vary widely from person to person.
Atopic dermatitis most often begins before age 5 and may persist into adolescence and adulthood. For some people, it flares periodically and then clears up for a time, even for several years.
See a doctor if you or your child:
Seek immediate medical attention for your child if the rash looks infected and he or she has a fever.
Healthy skin helps retain moisture and protects you from bacteria, irritants and allergens. Eczema is related to a gene variation that affects the skin's ability to provide this protection. This allows your skin to be affected by environmental factors, irritants and allergens.
In some children, food allergies may play a role in causing eczema.
The primary risk factor for atopic dermatitis is having a personal or family history of eczema, allergies, hay fever or asthma.
Complications of atopic dermatitis (eczema) may include:
The following tips may help prevent bouts of dermatitis (flares) and minimize the drying effects of bathing:
- Try to identify and avoid triggers that worsen the condition. Things that can worsen the skin reaction include sweat, stress, obesity, soaps, detergents, dust and pollen. Reduce your exposure to your triggers. Infants and children may experience flares from eating certain foods, including eggs, milk, soy and wheat; talk with your child's doctor about identifying potential food allergies.
- Take a bleach bath. The American Academy of Dermatology recommends considering a bleach bath to help prevent flares; a diluted-bleach bath decreases bacteria on the skin and related infections. Add 1/2 cup (118 milliliters) of household bleach, not concentrated bleach, to a 40-gallon (151-liter) bathtub filled with warm water. (Measures are for a U.S.-standard-sized tub filled to the overflow drainage holes.) Soak from the neck down, or just the affected areas of skin, for about 10 minutes. Do not submerge the head. Take a bleach bath no more than twice a week. A quick dilution check follows below.
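For the curious, the arithmetic behind those measurements can be checked in a few lines. The sketch below assumes household bleach is roughly 6% sodium hypochlorite, a figure the Mayo Clinic text does not state, so treat the output as an estimate rather than clinical guidance.

```python
# Back-of-the-envelope check of the bleach-bath dilution above.
# Assumes household bleach is ~6% sodium hypochlorite by volume;
# the article itself does not state a concentration.
BLEACH_ML = 118               # 1/2 cup of household bleach
TUB_L = 151                   # 40-gallon tub filled with warm water
HYPOCHLORITE_FRACTION = 0.06  # assumed strength of household bleach

bath_fraction = BLEACH_ML / (TUB_L * 1000)  # bleach as a share of bath volume
naocl_percent = bath_fraction * HYPOCHLORITE_FRACTION * 100

print(f"Bleach is {bath_fraction:.4%} of the bath volume")
print(f"Sodium hypochlorite concentration: ~{naocl_percent:.4f}%")
# Roughly 0.005% sodium hypochlorite: an extremely dilute bath,
# which is why the recipe insists on measured amounts of ordinary,
# not concentrated, bleach.
```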
See the original post here:
Atopic dermatitis (eczema) - Symptoms and causes - Mayo Clinic
Posted in Eczema
Comments Off on Atopic dermatitis (eczema) – Symptoms and causes – Mayo Clinic
Mum says son with eczema was able to have his first bubble bath thanks to new product – Manchester Evening News
Posted: at 1:45 pm
A mum says her little boy has been able to enjoy his first ever bubble bath thanks to a new product.
Annie Burton's two-year-old son George has eczema and she found even sensitive brands would cause a flare up.
But she says the new Once Upon a Foxx range has worked wonders and the youngster can now have a bath filled with bubbles.
Annie said: "After using the bubble bath for the first time I was anxious to see if he had any sort of reaction, I was amazed that there was no sign of any rash or redness.
"He has now used it numerous times with no problems. I have one very happy, excited child at bath time every night; the pictures speak for themselves."
The organic range has been launched by Alderley Edge-based mum-of-two Angela Foxx, who spent 18 months developing the items after her daughter Harper, now two, suffered with eczema and sensitive skin.
"Nothing on the market was working so I set out to create an all organic product with only the best natural ingredients," said Angela, also mum to four-year-old Aston.
And Harper and George aren't the only children whose eczema has cleared since using the products.
Three-year-old Carter Fiorini suffered terribly on his arms and body and his mum Gemma shared photos showing before and after he started using the range.
She said: "Once Upon aFoxxproducts were life changing for us. We tried every product on the market and everything irritated his skin, even products that were supposed to be organic.
"Its so nice to have some beautiful smelling, foaming products rather than medicated fragrance free yucky ones."
The range includes a foaming shampoo, foaming body wash, bubble bath, body lotion and detangling spray and is made up of completely natural, organic ingredients.
The range, which launched in November, is available online and will soon be available in two big stores, but the details of those can't yet be revealed.
Have you found something that works particularly well for your child's eczema without using a prescribed cream or ointment? Let us know in the comments or on our Manchester Family Facebook page.
See the original post:
Mum says son with eczema was able to have his first bubble bath thanks to new product - Manchester Evening News
Posted in Eczema
Comments Off on Mum says son with eczema was able to have his first bubble bath thanks to new product – Manchester Evening News
View: Can daily application of an emollient from birth prevent the development of atopic eczema? – Hospital Healthcare Europe
Posted: at 1:45 pm
Rod Tucker BPharm PhD, 5 March 2020
But could regular emollient use from birth actually prevent the development of the condition?
Atopic eczema is a highly pruritic, inflammatory skin condition which affects 20% of children.1 The condition develops during infancy and classically leads on to food allergies, asthma and allergic rhinitis in what has been termed the atopic march.2 A family history of atopic disease is an important risk factor for the development of atopic eczema. In addition, the presence of atopic eczema increases the risk of IgE-mediated food allergies: infants with the condition are, for example, six times more likely to develop egg allergies.3 While the precise cause of eczema remains uncertain, the condition is characterised by a defective skin barrier, and there is evidence that genetically determined loss-of-function mutations in the gene that codes for filaggrin, a protein with an important role in skin barrier function, may contribute to eczema development during infancy.4 A defective skin barrier offers an entry route for allergens, and this has been proposed as a route to sensitisation and the subsequent development of peanut allergy.5
Emollients are the cornerstone of eczema management and are recommended for all patients with the condition.6 An emollient provides a water-impermeable barrier over the surface of the skin which serves both to prevent water loss and to block the ingression of potential allergens and irritants. Given this dual role, is it possible that treatment with emollients soon after birth could actually prevent the development of atopic eczema and the ensuing atopic march? This was the question posed in the barrier enhancement for eczema prevention (BEEP) study published in the Lancet.7 The study was based on the observations of a pilot study by the same group, which found a lower incidence of atopic eczema (22% vs 43%) among 124 infants treated with the daily application of an emollient from birth.8 Nor was this simply blue-sky thinking: several lines of evidence had pointed to a role for emollients in preventing inflammation, alongside work illustrating that the barrier dysfunction in atopic eczema appears to be a secondary phenomenon, arising after the subclinical inflammation present in dry atopic skin.9
The BEEP study recruited 1394 babies at high risk (that is, with at least one first-degree relative who had eczema, allergic rhinitis or asthma), randomised to either once-daily application of an emollient (Diprobase or Doublebase gel) to the whole body excluding the scalp, or best-practice skin-care advice (the control group). The latter group was advised to use mild cleansers and shampoos specifically formulated for infants and to avoid soaps, bubble bath and baby wipes.
The primary outcome measure was a diagnosis of eczema at 2 years of age. The results showed no difference: eczema was present at 2 years of age in 23% of infants assigned to daily emollient use and in 25% of the control group. There were also no significant differences in the incidence of food allergies or other allergic diseases, and the authors were at a loss to explain their findings.
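To see why 23% versus 25% counts as no difference, a rough two-proportion z-test can be run on the headline numbers. This is a sketch only: it assumes the 1394 infants were split into two roughly equal arms, and the paper's exact counts and analysis will differ.

```python
# Rough two-proportion z-test on the BEEP headline result: 23% vs 25%
# eczema at two years. Assumes approximately equal arms of the 1394
# randomised infants, an approximation of the published design.
from math import sqrt
from statistics import NormalDist

n1 = n2 = 1394 // 2                       # ~697 infants per arm (assumed)
p1, p2 = 0.23, 0.25                       # eczema incidence in each arm
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)  # pooled incidence under the null
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(z))   # two-sided p-value
print(f"z = {z:.2f}, p = {p_value:.2f}")  # roughly z = 0.87, p = 0.38
```

A p-value of about 0.4 on these rounded figures is entirely consistent with the authors' conclusion that daily emollient use made no detectable difference.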
An alternative strategy for reducing food allergies is early exposure to potentially allergenic foods, allowing tolerance to develop; this was the subject of the preventing atopic dermatitis and allergies (PreventADALL) study, published in the same issue of the Lancet. In PreventADALL, Norwegian researchers explored the dual approach of daily emollient use and the early introduction of potentially allergenic foodstuffs such as peanut butter, wheat porridge and eggs,10 with the incidence of eczema recorded after 12 months. The study had four arms: control (no advice), skin emollients, early feeding, and combined emollient and early feeding. The incidence of eczema at 12 months was 8% (control), 11% (emollient), 9% (food) and 5% (combined), and these differences were not significant. In other words, neither daily emollient use nor the early introduction of potentially allergenic foods, alone or in combination, reduced the development of atopic eczema at 12 months.
The results of both studies, although disappointing, reveal the limited value of these primary prevention strategies. They do not, however, undermine the importance of regular emollient use in the management of established atopic eczema.
Whether changing the composition of an emollient makes any difference remains to be seen and is the subject of the on-going PEBBLES study.11
References
See original here:
View: Can daily application of an emollient from birth prevent the development of atopic eczema? - Hospital Healthcare Europe
Posted in Eczema
Comments Off on View: Can daily application of an emollient from birth prevent the development of atopic eczema? – Hospital Healthcare Europe
Heart attack symptoms: The painful skin condition that could increase risk by 50 percent – Express
Posted: at 1:45 pm
Heart attacks happen when the flow of blood to the heart is blocked, a mechanism commonly triggered by a build-up of fat, cholesterol and other substances that form a plaque in the arteries feeding the heart. This process does not happen overnight; rather, it is the accumulation of unhealthy lifestyle decisions taken over a period of time. As a result, there is ample opportunity to avert the risk of having a heart attack.
It is vital to heed the warning signs that foreshadow a heart attack so you can take steps to prevent it.
While most people would associate heart attack signs with chest pain, there are a surprising number of symptoms that show up in different places in the body.
For example, people who suffer from severe eczema may be at a greater risk of having a heart attack, according to a study published in the BMJ.
Eczema is a common skin condition that is characterised by itchy and inflamed patches of skin.
Each person with eczema in the study was matched with up to five people of similar age and sex who didn't have eczema.
After a five-year follow-up, the researchers found that people with severe eczema had a 40 percent to 50 percent increased risk of heart attack, atrial fibrillation, and death from heart disease, as well as a 20 percent higher risk of stroke.
The risks remained even after the researchers accounted for confounding factors such as weight, smoking, and alcohol use.
In light of the findings, the study authors suggested that people with severe eczema should be screened for risk factors for heart disease, such as high blood pressure and elevated cholesterol.
Read more from the original source:
Heart attack symptoms: The painful skin condition that could increase risk by 50 percent - Express
Posted in Eczema
Comments Off on Heart attack symptoms: The painful skin condition that could increase risk by 50 percent – Express
‘The best emollient is one the patient uses’ – prescribing emollients in primary care – Pulse
Posted: at 1:45 pm
The current situation
In 2018, the NHS Clinical Commissioners, an independent collective voice for CCGs, published their recommendations on conditions for which over-the-counter items should not routinely be prescribed in primary care.1 This included advice that treatment should not normally be offered for mild irritant dermatitis or mild dry skin.
Then, in June 2019, the NHS Clinical Commissioners published further guidance to CCGs.2 This included a short section on the use of bath additives and shower preparations for dry and pruritic conditions.
Around the country, many CCGs took these two directives as a green light to discourage the prescription of emollients generally, and many patients with significant and serious skin conditions have subsequently complained that they are being denied effective treatment. Surely that was never the intention of this advice?
In my experience, most eczema will not settle unless emollients are used in place of soaps and other harsh detergents, and leave-on emollients are used regularly. Poorly managed eczema is not only distressing and disabling, but also results in greater use of topical steroids and of antibiotics, both topical and systemic. That can be improved by using quality emollients.3
There are several issues that should be considered when choosing an emollient. Cost is one of them but, contrary to the advice I see from several CCGs around the country, it should not be the overriding factor.
First and foremost is patient preference: there is no point recommending an emollient that won't be used! Greasier emollients have greater barrier-protecting effects but are unacceptable to many people while they are up and about; however, they are often tolerated as a leave-on at bedtime. Most ointments are markedly water-repellent, but Hydromol and Epaderm ointments are water-miscible and can be used as soap substitutes. Emollient creams can also be used as soap substitutes, though that would be extravagant with the more expensive, sophisticated emollients.
The constituents of an emollient are clearly critical. Sodium lauryl sulfate (SLS) should never be put on the skin, as it aggravates eczema4,5 and damages normal skin.6 Emulsifying ointment contains 3% SLS, so it shouldn't go anywhere near the skin, and aqueous cream (which is a dilution of emulsifying ointment in water) is also dangerous.
All emollients are combustible, but those containing paraffin are highly flammable.7 Those with high concentrations are especially dangerous, and they include all the ointments. It is essential that patients are warned about this risk: even pyjamas that have been washed after being worn by someone using a paraffin-based emollient remain highly combustible.8 The only emollient I know of that doesn't have any paraffin is Aproderm colloidal oat cream.
Some emollients prevent water loss from the skin for prolonged periods of time (this can be measured as trans-epidermal water loss, or TEWL). Emollients with long-lasting TEWL protection are not only convenient but also more cost-effective, as they do not need to be applied throughout the day. Balneum and CeraVe, for example, offer TEWL protection of 24 hours, and emollients containing povidone (such as Doublebase Dayleve or Oilatum) offer at least 12 hours.
Most quality emollients contain humectants, which are hygroscopic molecules that hold water in the stratum corneum. They complement the body's natural moisturising factors, enhancing hydration of dry skin conditions, supporting the skin barrier and extending the TEWL protection. Typical humectants include urea, glycerol, lactic acid and sodium pyrrolidone carboxylate.
There are several emollients on the market now with colloidal oatmeal. This has been shown to have soothing, anti-inflammatory effects, which are particularly useful in inflammatory skin conditions such as atopic eczema.
Adex gel is a relatively new emollient. It is essentially Doublebase gel with the addition of nicotinamide and is indicated for dry or inflamed skin. Nicotinamide has direct anti-itch and anti-inflammatory properties and I have found it particularly useful in patients with active inflammatory skin diseases including eczema, psoriasis and even rosacea. Lipikar AP+ Baume contains niacinamide, also has a humectant (7% glycerine) and is licensed from birth. Uniquely, it contains the prebiotic aqua posae filiformis, which I really like as it supports the normal healthy microbiome.
CeraVe is a new emollient, extremely popular in the USA, that contains three essential ceramides - oily molecules that fill the intercellular space between epidermocytes, sealing the skin barrier. It also has two powerful humectants (7.5% glycerine and hyaluronic acid), so it is powerfully hydrating. It is an interesting emollient with a very long TEWL protection, complemented by its unique Multivesicular Emulsion Technology, which results in the slow release of its ingredients over 24 hours.
As well as being an excellent humectant, 5% urea has mild anti-pruritic effects, and at higher concentrations urea can help to thin thickened skin. Most high-urea emollients are very expensive, but I think Flexitol is reasonably priced and a useful option for hyperkeratotic skin conditions. CeraVe SA Soothing Cream is a new emollient that contains 10% urea as well as a gentle concentration of salicylic acid (0.5%); this combination makes it an ideal choice for dry, scaly conditions such as keratosis pilaris. The CeraVe range doesn't have an NHS tariff yet, but these are superb emollients for patients who are prepared to buy a quality emollient themselves.
Some pump dispensers are wasteful (for example, Aveeno's). Others are weak, and the delay in refilling can be frustrating (for example, Imuderm's, and all of the Exma range). Most pump dispensers cost around £2.50, which is a huge element of the total cost of a 500g bottle of emollient! However, a quality pump dispenser, such as the Rieke pump used in the Aproderm range, achieves 98% efficiency with minimal wastage and delivers a consistent 4g per actuation. It also does not allow air back into the pump, which means fewer preservatives are necessary. Similarly, the Doublebase and Adex pumps and bottle design are lovely, with 98% efficiency.
There is no excuse for dispensing a cream in a tub and so I cannot endorse those (e.g. Aquamax).
Epimax comes in a mayonnaise-style tube, which makes it cheap and ideal as a soap substitute. However, it does draw air back into the tube, risking contamination. Because of this, Epimax colloidal needs a strong preservative, and some patients complain that this stings their inflamed skin. Furthermore, some patients tell me they have problems squeezing the tube, which can also become slippery.
Ointments are so thick that they are always dispensed in a pot. However, patients must be instructed to take some out with a spatula or spoon, as otherwise the ointment could become contaminated within a week.9
Once a patient has confirmed they like an emollient, it should be prescribed in adequate quantities. NICE recommended 250g/week for children with atopic eczema,10 and I would suggest adults need double that: around 500g a week, or 2kg a month!
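As a quick sanity check on those figures (a sketch: the weekly quantities come from NICE and the author above, while the weeks-per-month rounding is my own):

```python
# Monthly emollient quantities implied by the figures above:
# NICE's 250 g/week for children and the author's doubled figure
# for adults. The weeks-per-month conversion is an approximation.
WEEKS_PER_MONTH = 4.3

def monthly_grams(weekly_grams: float) -> float:
    """Convert a weekly emollient quantity to an approximate monthly one."""
    return weekly_grams * WEEKS_PER_MONTH

child_weekly = 250               # g/week, NICE recommendation for children
adult_weekly = 2 * child_weekly  # g/week, the author's suggestion for adults
print(f"Child: ~{monthly_grams(child_weekly):,.0f} g/month")  # ~1,075 g
print(f"Adult: ~{monthly_grams(adult_weekly):,.0f} g/month")  # ~2,150 g, i.e. the '2kg a month'
```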
Patients should be instructed on how to apply their emollient: dabbing it onto the skin and stroking it down the body and limbs in the same direction as the hairs. The whole skin in eczema is abnormal, so the whole body needs to be treated with a leave-on emollient. The best time to apply an emollient is straight after washing, when the skin is fully accessible and moist; that way, the emollient will help to trap some moisture in the skin. It is important for patients to understand that dry skin conditions, such as eczema and psoriasis, need long-term treatment and that emollients should not be stopped once a flare has settled.
Finally, don't forget to warn your patients not only about the risk of flammability, but also that the shower tray or bath could be rendered dangerously slippery and the drains may clog up unless they are regularly cleared.
More:
'The best emollient is one the patient uses' - prescribing emollients in primary care - Pulse
Posted in Eczema
Comments Off on ‘The best emollient is one the patient uses’ – prescribing emollients in primary care – Pulse
Woman tells of how 17 beauty product massively improved her eczema – RSVP Live
Posted: at 1:44 pm
A woman has told of how a 17 beauty product massively improved her eczema.
Lydia Finnegan has suffered from the crippling skin condition for years, but through trial and error with various beauty products she has been able to manage her eczema, and she now has smooth, clear skin thanks to a coffee-based scrub.
Writing in a blog post on her site Itchie Scratchie, Lydia explained that just over four years ago she was on the verge of giving up on controlling her eczema.
She said: "I scratched and scratched and bled and scratched more. I wasn't sleeping. Every part of my body was covered in red raw, itchy, flaky, bleeding, blistered hell, and I mean EVERY part.
"Eczema hit me like a ton of bricks. I had never suffered like this before, but it spiralled to the point where I had no control. I was blaming everyone for everything."
But blogger Lydia found that the coffee scrub, which she saw on TV show Dragon's Den, completely transformed her skin.
She said on her blog post that Grounded Coffee Body scrub worked absolute miracles on her skin and cleared it up completely.
The product costs 17 and you can pick it up online here.
"I've been using the scrub for a few years now. This scrub was on Dragon's Den - a guy made this to treat his girlfriend's eczema and it worked," she wrote.
"I use the grapefruit one - it is the softest texture and smells amazing. It is full of vitamins and moisturising ingredients. The coffee/salt exfoliates any dry skin; if I leave it on for more than minutes, I notice instantly smoother skin.
"Be super gentle when you put it on though. It feels GREAT to itch with, so you can irritate your skin by over-doing it."
Lydia also credits Aveeno Dermexa Emollient Cream as a winning product, as when used with the body scrub she noticed a huge difference in how soft her skin felt.
Not only did the condition of her skin improve; Lydia noticed that her mental health did too, thanks to her clearer skin.
She said: "I used them together; the Aveeno strengthened my skin barrier again and the scrub helped with exfoliation and hydration, plus my general mental health and state of mind improved."
Visit link:
Woman tells of how 17 beauty product massively improved her eczema - RSVP Live
Posted in Eczema
Comments Off on Woman tells of how 17 beauty product massively improved her eczema – RSVP Live
How to Heal Dry, Cracked Hands From Washing Your Hands So Damn Much – Self
Posted: at 1:44 pm
With news of more and more possible cases of the new coronavirus in the U.S., now is the time to get your handwashing game on point. But all that washing might also have you worrying about (or dealing with) an incredibly common skin issue: dry, itchy, red, painful skin on your hands.
So we spoke to experts about how to manage and heal those dry, cracked hands in the safest way possible.
Dry skin happens on your hands for basically the same reasons it happens elsewhere on your body, Shari Marchbein, M.D., dermatologist and clinical assistant professor of dermatology at NYU School of Medicine, tells SELF.
The outer protective layer of your skin, the stratum corneum, helps seal hydration into the skin. It's made up of skin cells, which act like bricks, and lipids (fats), which act like mortar. So, if there's something wrong with the skin barrier (you're losing lipids, for instance), then moisture will be more likely to escape from the skin.
When you wash your hands, you're literally drawing moisture out of the skin and stripping it of the natural healthy fats that are supposed to be there, Dr. Marchbein says. And things like using hot water, using harsh antibacterial soaps, and not moisturizing afterward can make all of that worse.
On the milder end, you might feel like the skin on your hands is red, dry, tight, or a little itchy. But on the more severe end, you can experience a lot of irritation, intense itchiness, and even cracks in the skin, which can actually increase your risk for infection. People who are prone to eczema may even need prescription topical treatments to manage symptoms like these.
So, yes, it's great that you're being diligent about washing your hands. But if you don't also take some precautions, your hands will not be happy with you.
Here are some easy, expert-approved ways to keep your hands clean and moisturized.
1. Use gentle hand soaps.
Hand washes with antibacterial ingredients as well as alcohol-based hand sanitizers can be especially harsh and drying on your skin, Dr. Marchbein says.
Plus, you don't really need to use those types of soaps to get rid of germs: the friction created by the mechanical act of washing your hands, together with the surfactant cleaning ingredients in the soap, is what actually removes the microbes from your hands. Although our understanding of how the new coronavirus spreads and how to protect ourselves from it is still developing, regularly washing your hands with soap and water (especially before touching your face and before/after eating) is one of a few tried-and-true public health strategies the CDC is recommending right now to prevent the spread of this particular virus.
So, yes, that does mean that actually washing your hands correctly (and for at least 20 seconds) is absolutely necessary.
2. Wash with lukewarm water.
Washing your hands with water that's excessively hot or cold is, simply, uncomfortable. Plus, using hot water is an easy way to dry out your skin even more, Dr. Marchbein says. That's why she recommends using a comfortable lukewarm water temperature.
3. Put hand cream on slightly damp hands.
After washing, dry your hands, but not fully. When they're still a little bit damp, that's the perfect time to use your hand cream, Dr. Marchbein says, because you'll be helping to seal that water into the skin.
However, try not to use communal hand creams if you can help it, James D. Cherry, M.D., M.Sc., distinguished research professor of pediatrics at the David Geffen School of Medicine at UCLA and attending physician for pediatric infectious diseases at Mattel Children's Hospital UCLA, tells SELF, because these can easily become contaminated. Instead, it's worth buying and keeping your own personal hand cream with you or at your desk, he says. (Personally, this writer prefers these K-beauty hand creams for their portability, absorption, and lovely scents.)
Read more:
How to Heal Dry, Cracked Hands From Washing Your Hands So Damn Much - Self
Posted in Eczema
Comments Off on How to Heal Dry, Cracked Hands From Washing Your Hands So Damn Much – Self
Eczema sufferer reveals that Dragon’s Den product dramatically improved her skin – FM104
Posted: at 1:44 pm
Lydia Finnegan has suffered from eczema for years, but now she has revealed that a product that she saw on Dragon's Den has played a massive part in improving her skin.
Introducing her Itchie Scratchie blog, Lydia wrote: "I have tried and tested many skin products and I will post numerous blogs on what caused my eczema, how I initially tried to deal with it (and failed), how I finally managed to get rid of it and how I manage it now."
In her blog, she explains how just over four years ago she had almost given up hope of finding a way to bring her eczema under control. "I scratched and scratched and bled and scratched more. I wasn't sleeping. Every part of my body was covered in red raw, itchy, flaky, bleeding, blistered hell, and I mean EVERY part," she said.
She continued: "Eczema hit me like a ton of bricks, I had never suffered like this before, but it spiralled to the point where I had no control. I was blaming everyone for everything."
However, the 27-year-old blogger now claims that a coffee scrub she saw on Dragon's Den has played a role in the dramatic improvement of her skin.
She revealed that Grounded's Coffee Body Scrub has made her skin noticeably better and that it works as a great exfoliator.
"I've been using the scrub for a few years now. This scrub was on Dragon's Den - a guy made this to treat his girlfriend's eczema and it worked!"
She continued: "I use the grapefruit one - it is the softest texture and smells amazing. It is full of vitamins and moisturising ingredients. The coffee/salt exfoliates any dry skin if I leave it on for more than minutes, I notice instantly smoother skin."
Grounded's Coffee Body Scrub in Grapefruit, 17
"Be super gentle when you put it on though" she explained.It feels GREAT to itch with, so you can irritate your skin by over-doing it (which I have done MANY times because my self-control is ZERO when it comes to itching."
Lydia then spoke to Tyla and told them: "It made my skin feel so soft - like noticeably better and a great exfoliator. Full of good ingredients that I know moisturise my skin.
"It can sometimes tingle when putting on sore skin - and I wouldn't put on oozy or cracked skin - because it'd be too harsh/painful and could irritate skin more. Just once a week, gently, works wonders."
She also revealed that she swears by the Aveeno moisturiser to complement her use of the body scrub.
"The difference in terms of how soft your skin is is genuinely instant - but over a number of months of weekly scrubs and daily use of Aveeno emollient cream I saw a difference," she told Tyla.
"I used them together, the Aveeno strengthened my skin barrier again and the scrub helped with exfoliation and hydration plus my general mental health and state of mind improved."
You can check out the collection of Grounded's Coffee Body Scrubs on their website; the range is priced between 11 and 17.
Read the original here:
Eczema sufferer reveals that Dragon's Den product dramatically improved her skin - FM104
Posted in Eczema
Comments Off on Eczema sufferer reveals that Dragon’s Den product dramatically improved her skin – FM104