The Prometheus League
Breaking News and Updates
- Abolition Of Work
- AI
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- CBD Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- SpaceX
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard de Chardin
- Terraforming Mars
- The Singularity
- TMS
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- WW3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Genetic Engineering
The Robots Are Coming – Boston Review
Posted: March 10, 2020 at 11:42 pm
In the overhyped age of deep learning, rumors of thinking robots are greatly exaggerated. Still, we cannot afford to leave decisions about the development of even this sort of AI in the hands of those who stand to reap vast profits from its use.
Editors' Note: The philosopher Kenneth A. Taylor passed away suddenly this winter. Boston Review is proud to publish this essay, which grows out of talks Ken gave throughout 2019, in collaboration with his estate. Preceding it is an introductory note by Ken's colleague, John Perry.
In memoriam Ken Taylor
On December 2, 2019, a few weeks after his sixty-fifth birthday, Ken Taylor announced to all of his Facebook friends that the book he had been working on for years, Referring to the World, finally existed in an almost complete draft. That same day, while at home in the evening, Ken died suddenly and unexpectedly. He is survived by his wife, Claire Yoshida; son, Kiyoshi Taylor; parents, Sam and Seretha Taylor; brother, Daniel; and sister, Diane.
Ken was an extraordinary individual. He truly was larger than life. Whatever the task at hand, whether it was explaining some point in the philosophy of language, coaching Kiyoshi's Little League team, chairing the Stanford Philosophy department and its Symbolic Systems Program, debating at Stanford's Academic Senate, or serving as president of the Pacific Division of the American Philosophical Association (APA), Ken went at it with ferocious energy. He put incredible effort into teaching. He was one of the last Stanford professors to always wear a tie when he taught, to show his respect for the students who make it possible for philosophers to earn a living doing what we like to do. His death leaves a huge gap in the lives of his family, his friends, his colleagues, and the Stanford community.
Ken went to college at Notre Dame. He entered the School of Engineering, but it didn't quite satisfy his interests, so he shifted to the Program of Liberal Studies and became its first African American graduate. Ken came from a religious family, and never lost interest in the questions with which religion deals. But by the time he graduated he had become a naturalistic philosopher; his senior essay was on Kant and Darwin.
Ken was clearly very much the same person at Notre Dame that we knew much later. Here is a memory from Katherine Tillman, a professor in the Liberal Studies Program:
This is how I remember our beloved and brilliant Ken Taylor: always with his hand up in class, always with that curious, questioning look on his face. He would shift a little in his chair and make a stab at what was on his mind to say. Then he would formulate it several more times in questions, one after the other, until he felt he got it just right. And he would listen hard, to his classmates, to his teachers, to whoever could shed some light on what it was he wanted to know. He wouldn't give up, though he might lean back in his chair, fold his arms, and continue with that perplexed look on his face. He would ask questions about everything. Requiescat in pace.
From Notre Dame Taylor went to the University of Chicago; there his interests solidified in the philosophy of language. His dissertation was on reference, the theory of how words refer to things in the world; his advisor was the philosopher of language Leonard Linsky. We managed to lure Taylor to Stanford in 1995, after stops at Middlebury, the University of North Carolina, Wesleyan, the University of Maryland, and Rutgers.
In 2004 Taylor and I launched the public radio program Philosophy Talk, billed as "the program that questions everything, except your intelligence." The theme song is "Nice Work if You Can Get It," which expresses the way Ken and I both felt about philosophy. The program dealt with all sorts of topics. We found ourselves reading up on every philosopher we discussed, from Plato to Sartre to Rawls, and on every topic with a philosophical dimension, from terrorism and misogyny to democracy and genetic engineering. I grew pretty tired of this after a few years. I had learned all I wanted to know about important philosophers and topics. I couldn't wait after each Sunday's show to get back to my world: the philosophy of language and mind. But Ken seemed to love it more and more with each passing year. He loved to think; he loved forming opinions, theories, hypotheses, and criticisms on every possible topic; and he loved talking about them with the parade of distinguished guests that joined us.
Until the turn of the century Ken's publications lay pretty solidly in the philosophy of language and mind and closely related areas. But later we begin to find things like "How to Vanquish the Still Lingering Shadow of God" and "How to Hume a Hegel-Kant: A Program for the Naturalization of Normative Consciousness." Normativity, the connection between reason, duty, and life, is a somewhat more basic issue in philosophy than proper names. By the time of his 2017 APA presidential address, "Charting the Landscape of Reason," it seemed to me that Ken had clearly gone far beyond issues of reference, and not only on Sunday morning for Philosophy Talk. He had found a broader and more natural home for his active, searching, and creative mind. He had become a philosopher who had interesting things to say not only about the most basic issues in our field but all sorts of wider concerns. His Facebook page included a steady stream of thoughtful short essays on social, political, and economic issues. As the essay below shows, he could bring philosophy, cognitive science, and common sense to bear on such issues, and wasn't afraid to make radical suggestions.
Some of us are now finishing the references and preparing an index for Referring to the World, to be published by Oxford University Press. His next book was to be The Natural History of Normativity. He died as he was consolidating the results of thirty-five years of exciting, productive thinking on reference, and beginning what should have been many, many more productive and exciting years spent illuminating reason and normativity, interpreting the great philosophers of the past, and using his wisdom to shed light on social issues, from robots to all sorts of other things.
His loss was not just the loss of a family member, friend, mentor, and colleague to those who knew him, but the loss, for the whole world, of what would have been an illuminating and important body of philosophical and practical thinking. His powerful and humane intellect will be sorely missed.
John Perry
Among the works of man, which human life is rightly employed in perfecting and beautifying, the first in importance surely is man himself. Supposing it were possible to get houses built, corn grown, battles fought, causes tried, and even churches erected and prayers said, by machinery, by automatons in human form, it would be a considerable loss to exchange for these automatons even the men and women who at present inhabit the more civilized parts of the world, and who assuredly are but starved specimens of what nature can and will produce. Human nature is not a machine to be built after a model, and set to do exactly the work prescribed for it, but a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing.
John Stuart Mill, On Liberty (1859)
Some believe that we are on the cusp of a new age. The day is coming when practically anything that a human can do (at least anything that the labor market is willing to pay a human being a decent wage to do) will soon be doable more efficiently and cost effectively by some AI-driven automated device. If and when that day does arrive, those who own the means of production will feel ever increasing pressure to discard human workers in favor of an artificially intelligent work force. They are likely to do so as unhesitatingly as they have always set aside outmoded technology in the past.
To be sure, technology has disrupted labor markets before. But until now, even the most far-reaching of those disruptions have been relatively easy to adjust to and manage. That is because new technologies have heretofore tended to displace workers from old jobs that either no longer needed to be done, or at least no longer needed to be done by humans, into either entirely new jobs that were created by the new technology, or into old jobs for which the new technology, directly or indirectly, caused increased demand.
This time things may be radically different. Thanks primarily to AI's presumed potential to equal or surpass every human cognitive achievement or capacity, it may be that many humans will be driven out of the labor market altogether.
Yet it is not necessarily time to panic. Skepticism about the impact of AI is surely warranted on inductive grounds alone. Way back in 1956, at the Dartmouth Summer Research Project on Artificial Intelligence, an event that launched the first AI revolution, the assembled gaggle of AI pioneers (all ten of them) breathlessly anticipated that the mystery of fully general artificial intelligence could be solved within a couple of decades at most. In 1961 Minsky, for example, was confidently proclaiming, "We are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines." Well over a half century later, we are still waiting for the revolution to be fully achieved.
AI has come a long way since those early days: it is now a very big deal. It is a major focus of academic research, and not just among computer scientists. Linguists, psychologists, the legal establishment, the medical establishment, and a whole host of others have gotten into the act in a very big way. AI may soon be talking to us in flawless and idiomatic English, counseling us on fundamental life choices, deciding who gets imprisoned for how long, and diagnosing our most debilitating diseases. AI is also big business. The worldwide investment in AI technology, which stood at something like $12 billion in 2018, will top $200 billion by 2025. Governments are hopping on the AI bandwagon. The Chinese envision the development of a trillion-dollar domestic AI industry in the relatively near term. They clearly believe that the nation that dominates AI will dominate the world. And yet, a sober look at the current state of AI suggests that its promise and potential may still be a tad oversold.
Excessive hype is not confined to the distant past. One reason for my own skepticism is the fact that in recent years the AI landscape has come to be progressively more dominated by AI of the newfangled deep learning variety, rather than by AI of the more or less passé logic-based symbolic processing variety, affectionately known in some quarters, and derisively in others, as GOFAI (Good Old Fashioned Artificial Intelligence).
It was mostly logic-based, symbolic processing GOFAI that so fired the imaginations of the founders of AI back in 1956. Admittedly, to the extent that you measure success by where time, money, and intellectual energy are currently being invested, GOFAI looks to be something of a dead letter. I don't want to rehash the once-hot theoretical and philosophical debates over which approach to AI, logic-based symbolic processing or neural nets and deep learning, is the more intellectually satisfying. Especially back in the '80s and '90s, those debates raged with what passes in the academic domain as white-hot intensity. They no longer do, but not because they were decisively settled in favor of deep learning and neural nets more generally. It's more that machine learning approaches, mostly in the form of deep learning, have recently achieved many impressive results. Of course, these successes may not be due entirely to the anti-GOFAI character of these approaches. Even GOFAI has gotten into the machine learning act with, for example, Bayesian networks. The more relevant divide may be between probabilistic approaches of various sorts and logic-based approaches.
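To make the probabilistic style concrete, here is a minimal sketch of the kind of inference a single node of a Bayesian network performs: Bayes' rule applied to a toy diagnosis problem. All of the probabilities below are invented for illustration.

```python
# Toy Bayesian inference: P(disease | positive test) via Bayes' rule.
# Every probability here is an invented, illustrative number.
p_disease = 0.01            # prior P(D)
p_pos_given_disease = 0.95  # likelihood P(+ | D)
p_pos_given_healthy = 0.05  # false-positive rate P(+ | not D)

p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive) = {p_disease_given_pos:.3f}")  # ~0.161
```

The point of the toy example is that the answer is computed from explicit probabilities rather than derived from logical rules, which is the divide the paragraph above describes.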
However exactly you divide up the AI landscape, it is important to distinguish what I call AI-as-engineering from what I call AI-as-cognitive-science. AI-as-engineering isn't particularly concerned with mimicking the precise way in which the human mind-brain does distinctively human things. The strategy of engineering machines that do things that are in some sense intelligent, even if they do what they do in their own way, is a perfectly fine way to pursue artificial intelligence. AI-as-cognitive-science, on the other hand, takes as its primary goal that of understanding and perhaps reverse engineering the human mind. AI pretty much began its life by being in this business, perhaps because human intelligence was the only robust model of intelligence it had to work with. But these days, AI-as-engineering is where the real money turns out to be.
Though there is certainly value in AI-as-engineering, I confess to still having a hankering for AI-as-cognitive-science. And that explains why I myself still feel the pull of the old logic-based symbolic processing approach. Whatever its failings, GOFAI had as one of its primary goals that of reverse engineering the human mind. Many decades later, though we have definitely made some progress, we still haven't gotten all that far with that particular endeavor. When it comes to that daunting task, just about all the newfangled probability- and statistics-based approaches to AI (most especially deep learning, but even approaches that have more in common with GOFAI, like Bayesian nets) strike me as, if not exactly nonstarters, then at best only a very small part of the truth. Probably the complete answer will involve some synthesis of older approaches and newer approaches, and perhaps even approaches we haven't even thought of yet. Unfortunately, although a few voices are starting to sing such an ecumenical tune, neither ecumenism nor intellectual modesty is exactly the rage these days.
Back when the competition over competing AI paradigms was still a matter of intense theoretical and philosophical dispute, one of the advantages often claimed on behalf of artificial neural nets over logic-based symbolic approaches was that the former but not the latter were directly neuronally inspired. By directly modeling its computational atoms and computational networks on neurons and their interconnections, the thought went, artificial neural nets were bound to be truer to how the actual human brain does its computing than its logic-based symbolic processing competitor could ever hope to be.
This is not the occasion to debate such claims at length. My own hunch is that there is little reason to believe that deep learning actually holds the key to finally unlocking the mystery of general-purpose, humanlike intelligence. Despite being neuronally inspired, many of the most notable successes of the deep learning paradigm depend crucially on the ability of deep learning architectures to do something that the human brain isn't all that good at: extracting highly predictive, though not necessarily deeply explanatory, patterns on the basis of being trained up, via either supervised or unsupervised learning, on huge data sets consisting, from the machine's-eye point of view, of a plethora of weakly correlated feature bundles, without the aid of any top-down direction or built-in worldly knowledge. That is an extraordinarily valuable and computationally powerful technique for AI-as-engineering. And it is perfectly suited to the age of massive data, since the successes of deep learning wouldn't be possible without big data.
It's not that we humans are pikers at pattern extraction. As a species, we do remarkably well at it, in fact. But I doubt that the capacity for statistical analysis of huge data sets is the core competence on which all other aspects of human cognition are ultimately built. And here's the thing. Once you've invented a really cool new hammer (which deep learning very much is) it's a very natural human tendency to start looking for nails to hammer everywhere. Once you are on the lookout for nails everywhere, you can expect to find a lot more of them than you might have at first thought, and you are apt to find some of them in some pretty surprising places.
But if it's really AI-as-cognitive-science that you are interested in, it's important not to lose sight of the fact that it may take a bit more than our cool new deep learning hammer to build a humanlike mind. You can't let your obsession with your cool new hammer make you lose sight of the fact that in some domains, the human mind seems to deploy quite a different trick from the main sorts of tricks at the core not only of deep learning but also of other statistical paradigms (some of which, again, are card-carrying members of the GOFAI family). In particular, the human mind is often able to learn quite a lot from relatively little and comparatively impoverished data. This remarkable fact has led some to conjecture that the human mind must come antecedently equipped with a great deal of endogenous, special-purpose, task-specific cognitive structure and content. If true, that alone would suffice to make the human mind rather unlike your typical deep learning architecture.
Indeed, deep learning takes quite the opposite approach. A deep learning network may be trained up to represent words, say, as points in a micro-featural vector space of, say, three hundred dimensions, and on the basis of such representations, it might learn, after many epochs of training on a really huge data set, to make the sort of pragmatic inferences (from, say, "John ate some of the cake" to "John did not eat all of the cake") that humans make quickly, easily, and naturally, without a lot of focused training of the sort required by deep learning and similar such approaches. The point is that deep learning can learn to do various cool things, things that one might once have thought only human beings can do, and although such networks can do some of those things quite well, it still seems highly unlikely that they do them in precisely the way that we humans do.
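The vector-space representation mentioned above is easy to make concrete. Below is a minimal, hypothetical sketch: tiny hand-written "embeddings" (real systems learn around three hundred dimensions automatically from billions of words) compared by cosine similarity, the standard measure of closeness in such spaces.

```python
import numpy as np

# Hypothetical 4-dimensional "embeddings"; real models learn ~300 dimensions
# of this kind from huge corpora rather than having them written by hand.
vectors = {
    "some": np.array([0.9, 0.1, 0.3, 0.0]),
    "all":  np.array([0.8, 0.2, 0.4, 0.1]),
    "cake": np.array([0.0, 0.9, 0.1, 0.7]),
}

def cosine(a, b):
    """Cosine similarity: near 1.0 means similar direction, near 0 unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["some"], vectors["all"]))   # high: related quantifiers
print(cosine(vectors["some"], vectors["cake"]))  # low: unrelated words
```

A network that draws the some-to-not-all inference does so by manipulating geometric relationships of this kind, which is quite unlike whatever quick, data-light trick human speakers use.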
I stress again, though, that if you are not primarily interested in AI-as-cognitive-science, but solely in AI-as-engineering, you are free to care not one whit whether deep learning architectures and their cousins hold the ultimate key to understanding human cognition in all its manifestations. You are free to embrace and exploit the fact that such architectures are not just good, but extraordinarily good, at what they do, at least when they are given large enough data sets to work with. Still, in thinking about the future of AI, especially in light of both our darkest dystopian nightmares and our brightest utopian dreams, it really does matter whether we are envisioning a future shaped by AI-as-engineering or AI-as-cognitive-science. If I am right that there are many mysteries about the human mind that currently dominant approaches to AI are ill-equipped to help us solve, then to the extent that such approaches continue to dominate AI into the future, we are very unlikely to be inundated anytime soon with a race of thinking robots, at least not if we mean by thinking that peculiar thing that we humans do, done in precisely the way that we humans do it.
Deep learning and its cousins may do what they do better than we could possibly do what they do. But that doesn't imply that they do what we do better than we do what we do. If so, then, at the very least, we needn't fear, at least not yet, that AI will radically outpace humans in our most characteristically human modes of cognition. Nor should we expect the imminent arrival of the so-called singularity, in which human intelligence and machine intelligence somehow merge to create a super intelligence that surpasses the limits of each. Given that we still haven't managed to understand the full bag of tricks our amazing minds deploy, we haven't the slightest clue as to what such a merger would even plausibly consist in.
Nonetheless, it would still be a major mistake to lapse into a false sense of security about the potential impact of AI on the human world. Even if current AI is far from being the holy grail of a science of mind that finally allows us to reverse engineer it, it will still allow us to engineer extraordinarily powerful cognitive networks, as I will call them, in which human intelligence and artificial intelligence of some kind or other play quite distinctive roles. Even if we never achieve a single further breakthrough in AI-as-cognitive-science, from this day forward, for as long as our species endures, the task of managing what I will call the division of cognitive labor between human and artificial intelligence within engineered cognitive networks will be with us to stay. And it will almost certainly be a rather fraught and urgent matter. This will be thanks in large measure to the power of AI-as-engineering rather than to the power of AI-as-cognitive-science.
Indeed, there is a distinct possibility that AI-as-engineering may eventually reduce the role of human cognitive labor within future cognitive networks to the bare minimum. It is that possibility, not the possibility of the so-called singularity or the possibility that we will soon be surrounded by a race of free, autonomous, creative, or conscious robots chafing at our undeserved dominance over them, that should now and for the foreseeable future worry us most. Long before the singularity looms even on some distant horizon, the sort of AI technology that AI-as-engineering is likely to give us already has the potential to wreak considerable havoc on the human world. It will not necessarily do so by superseding human intelligence, but simply by displacing a great deal of it within various engineered cognitive networks. And if that's right, it simply won't take the arrival of anything close to full-scale super AI, as we might call it, to radically disrupt, for good or for ill, the built cognitive world.
Start with the fact that much of the cognitive work that humans are currently tasked to do within extant cognitive networks doesn't come close to requiring the full range of human cognitive capacities to begin with. A human mind is an awesome cognitive instrument, one of the most powerful instruments that nature has seen fit to evolve. (At least on our own lovely little planet! Who knows what sorts of minds evolution has managed to design on the millions upon millions of mind-infested planets that must be out there somewhere?) But stop and ask yourself: how much of the cognitive power of that amazing human mind does a coffee-house barista, say, really use in her daily work?
Not much, I would wager. And precisely for that reason, it's not hard to imagine coffee houses of the future in which more and more of the cognitive labor that needs doing is done by AI finely tuned to the cognitive loads it will need to carry within such cognitive networks. More generally, it is abundantly clear that much of the cognitive labor that needs doing within our total cognitive economy, and that now happens to be performed by humans, is cognitive labor for which we humans are often vastly overqualified. It would be hard to lament the offloading of such cognitive labor onto AI technology.
But there is also a flip side. The twenty-first-century economy is already a highly data-driven economy. It is likely to become a great deal more so, thanks, among other things, to the emergence of the internet of things. The built environment will soon be even more replete with so-called smart devices. And these smart devices will constantly be collecting, analyzing, and sharing reams and reams of data on every human being who interacts with them. It will not be just the usual suspects, like our computers, smart phones, or smart watches, that are so engaged. It will be our cars, our refrigerators, indeed every system or appliance in every building in the world. There will be data-collecting monitors of every sort: heart monitors, sleep monitors, baby monitors. There will be smart roads and smart train tracks. There will be smart bridges that constantly monitor their own state and automatically alert the transportation department when they need repair. Perhaps they will shut themselves down and spontaneously reroute traffic while they are waiting for the repair crews to arrive. It will require an extraordinary amount of cognitive labor to keep such a built environment running smoothly. And for much of that cognitive labor, we humans are vastly underqualified. Try, for example, running a data mining operation using nothing but human brain power. You'll see pretty quickly that human brains are not at all the right tool for the job, I would wager.
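As a concrete illustration of the machine-side cognitive labor described above, here is a minimal, hypothetical sketch of a self-monitoring "smart bridge" node. The sensor names, units, and threshold are all invented, and a real structural-health system would involve far more than a simple threshold check.

```python
from dataclasses import dataclass

# Hypothetical strain-gauge threshold in microstrain; the value is invented.
STRAIN_ALERT_THRESHOLD = 450.0

@dataclass
class SensorReading:
    sensor_id: str
    microstrain: float

def check_bridge(readings: list[SensorReading]) -> list[str]:
    """Return alert messages for any sensor exceeding the strain threshold."""
    return [
        f"ALERT {r.sensor_id}: strain {r.microstrain:.0f} exceeds limit"
        for r in readings
        if r.microstrain > STRAIN_ALERT_THRESHOLD
    ]

readings = [SensorReading("span-3-north", 470.2),
            SensorReading("span-1-south", 120.5)]
for alert in check_bridge(readings):
    print(alert)  # in a deployed system, routed to the transportation department
```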
Perhaps what should really worry us, I am suggesting, is the possibility that the combination of our overqualification for certain cognitive labor and underqualification for other cognitive labor will leave us open to something of an AI pincer attack. AI-as-engineering may give us the power to design cognitive networks in which each node is exquisitely fine-tuned to the cognitive load it is tasked to carry. Since distinctively human intelligence will often be either too much or too little for the task at hand, future cognitive networks may assign very little cognitive labor to humans. And that is precisely how it might come about that the demand for human cognitive labor within the overall economy may be substantially diminished. How should we think about the advance of AI in light of its capacity to allow us to re-imagine and re-engineer our cognitive networks in this way? That is the question I address in the remainder of this essay.
There may be lessons to be learned from the ways that we have coped with disruptive technological innovations of the past. So perhaps we should begin by looking backward rather than forward. The first thing to say is that many innovations of the past are now widely seen as good things, at least on balance. They often spared humans work that paid dead-end wages, or work that was dirty and dangerous, or work that was the source of mind-numbing drudgery.
But we should be careful not to overstate the case for the liberating power of new technology, lest that lure us into a misguided complacency about what is to come. Even looking backward, we can see that new and disruptive technologies have sometimes been the culprit in increasing rather than decreasing the drudgery and oppressiveness of work. They have also served to rob work of a sense of meaning and purpose. The assembly line is perhaps the prime example. The rise of the assembly line doubtless played a vital role in making the mass production and distribution of all manner of goods possible. It made the factory worker vastly more productive than, say, the craftsman of old. In so doing, it increased the market for mass-produced goods, while simultaneously diminishing the market for the craftsman's handcrafted goods. As such, it played a major role in increasing living standards for many. But it also had the downside effect of turning many human agents into mere appendages within a vast, impersonal, and relentless mechanism of production.
All things considered, it would be hard to deny that trading in skilled craftsmanship for unskilled or semiskilled factory labor was a good thing. I do not intend to relitigate that choice here. But it is worth asking whether all things really were considered, and considered not just by those who owned the means of production but collectively by all the relevant stakeholders. I am no historian of political economy. But I venture the conjecture that the answer to that question is a resounding no. More likely than not, disruptive technological change was simply foisted on society as a whole, primarily by those who owned and controlled the means of production, and primarily to serve their own profit, with little, if any, intentionality or democratic deliberation and participation on the part of a broader range of stakeholders.
Given the disruptive potential even of AI-as-engineering, we cannot afford to leave decisions about the future development and deployment of even this sort of AI solely in the hands of those who stand to make vast profits from its use. This time around, we have to find a way to ensure that all relevant stakeholders are involved and that we are more intentional and deliberative in our decision making than we were about the disruptive technologies of the past.
I am not necessarily advocating the sort of socialism that would require the means of production to be collectively owned or regulated. But even if we aren't willing to go so far as collectively seizing the machines, as it were, we must get past the point of treating not just AI but all technology as a thing unto itself, with a life of its own, whose development and deployment is entirely independent of our collective will. Technology is never self-developing or self-deploying. Technology is always and only developed and deployed by humans, in various political, social, and economic contexts. Ultimately, it is and must be entirely up to us, and up to us collectively, whether, how, and to what end it is developed and deployed. As soon as we lose sight of the fact that it is up to us collectively to determine whether AI is to be developed and deployed in a way that enhances the human world rather than diminishes it, it is all too easy to give in to either utopian cheerleading or dystopian fear mongering. We need to discipline ourselves not to give in to either prematurely. Only such discipline will afford us the space to consider various tradeoffs deliberatively, reflectively, and intentionally.
Utopian cheerleaders for AI often blithely insist that it is more likely to decrease rather than increase the amount of dirt, danger, or drudgery to which human workers are subject. As long as AI is not turned against us (and why should we think that it would be?) it will not eliminate the work for which we humans are best suited, but only the work that would be better left to machines in the first place.
I do not mean to dismiss this as an entirely unreasonable thought. Think of coal mining. Time was when coal mining was extraordinarily dangerous and dirty work. Over 100,000 coal miners died in mining accidents in the U.S. alone during the twentieth century, not to mention the amount of black lung disease they suffered. Thanks largely to automation and computer technology, including robotics and AI technology, your average twenty-first-century coal industry worker relies a lot more on his or her brains than on mere brawn and is subject to a lot less danger and dirt than earlier generations of coal miners were. Moreover, it takes a lot fewer coal miners to extract more coal than the coal miners of old could possibly hope to extract.
To be sure, thanks to certain other forces having nothing to do with the AI revolution, the number of people dedicated to extracting coal from the earth will likely diminish even further in the relatively near term. But that just goes to show that even if we could manage to tame AI's effect on the future of human work, we've still got plenty of other disruptive challenges to face as we begin to re-imagine and re-engineer the made human world. And that just gives us even more reason to be intentional, reflective, and deliberative in thinking about the development and deployment of new technologies. Whatever one technology can do on its own to disrupt the human world, the interactive effects of multiple apparently independent technologies can greatly amplify the total level of disruption to which we may be subject.
I suppose that, if we had to choose, utopian cheerleading would at least feel more satisfying and uplifting than dystopian fear mongering. But we shouldn't be blind to the fact that any utopian buzz we may fall into while contemplating the future may serve to obscure the fact that AI is very likely to transform, perhaps radically, our collective intuitive sense of where the boundary between work better consigned to machines and work best left to us humans should fall in the first place. The point is that that boundary is likely to be drawn, erased, and redrawn by the progress of AI. And as our conception of the proper boundary evolves, our conception of what we humans are here for is likely to evolve right along with it.
The upshot is clear. If it is only relative to our sense of where the boundary is properly drawn that we could possibly know whether to embrace or recoil from the future, then we are currently in no position to judge on behalf of our future selves which outcomes are to be embraced and which are to be feared. Nor, perhaps, are we entitled to insist that our current sense of where the boundary should be drawn should remain fixed for all time and circumstances.
To drive this last point home, it will help to consider three different cognitive networks in which AI already plays, or soon can be expected to play, a significant role: the air traffic control system, the medical diagnostic and treatment system, and what I'll call the ground traffic control system. My goal in so doing is to examine some subtle ways in which our sense of proper boundaries may shift.
Begin with the air traffic control system, one of the more developed systems in which brain power and computer power have been jointly engineered to cooperate in systematically discharging a variety of complex cognitive burdens. The system has steadily evolved over many decades into one in which a surprising amount of cognitive work is done by software rather than humans. To be sure, there are still many humans involved. Human pilots sit in every cockpit and human brains monitor every air traffic control panel. But it is fair to say that humans, especially human pilots, no longer really fly airplanes on their own within this vast cognitive network. It's really the system as a whole that does the flying. Indeed, it's only on certain occasions, and on an as-needed basis, that the human beings within the system are called upon to do anything at all. Otherwise, they are mostly along for the ride.
This particular human-computer cognitive network works extremely well for the most part. It is extraordinarily safe in comparison with travel by automobile. And it is getting safer all the time. Its ever-increasing safety would seem to be in large measure due to the fact that more and more of the cognitive labor done within the system is being offloaded onto machine intelligence and taken away from human intelligence. Indeed, I would hazard the guess that almost no increases in safety have resulted from taking burdens away from algorithms and machines and giving them to humans instead.
To be sure, this trend started long before AI had reached anything like its current level of sophistication. But with the coming of age of AI-as-engineering you can expect that the trend will only accelerate. For example, starting in the 1970s, decades of effort went into building human-designed rules meant to provide guidance to pilots as to which maneuvers executed in which order would enable them to avoid any possible or pending mid-air collision. In more recent years, engineers have been using AI techniques to help design a new collision avoidance system that will make possible a significant increase in air safety. The secret to the new system is that instead of leaving the discovery of optimal rules of the airways to human ingenuity, the problem has been turned over to the machines. The new system uses computational techniques to derive an optimized decision logic that better deals with various sources of uncertainty and better balances competing system objectives than anything that we humans would be likely to think up on our own. The new system, called Airborne Collision Avoidance System (ACAS) X, promises to pay considerable dividends by reducing both the risks of mid-air collision and the need for alerts that call for corrective maneuvers in the first place.
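To give a flavor of how such decision logic can be derived computationally rather than hand-written, here is a heavily simplified, hypothetical sketch: value iteration over a toy Markov decision process with invented states, transition probabilities, and costs. ACAS X is reported to be built on far richer optimization models of this general kind; nothing below is the actual ACAS X logic.

```python
# Toy collision-avoidance MDP solved by value iteration. Every state,
# action, probability, and cost below is invented for illustration.
states = ["safe", "close", "collision"]
actions = ["level", "climb"]

# transitions[s][a] -> list of (next_state, probability)
transitions = {
    "safe":      {"level": [("safe", 0.9), ("close", 0.1)],
                  "climb": [("safe", 1.0)]},
    "close":     {"level": [("close", 0.5), ("collision", 0.5)],
                  "climb": [("safe", 0.8), ("close", 0.2)]},
    "collision": {"level": [("collision", 1.0)],
                  "climb": [("collision", 1.0)]},
}
action_cost = {"level": 0.0, "climb": 1.0}  # mild penalty for needless alerts
state_cost = {"safe": 0.0, "close": 0.0, "collision": 1000.0}
GAMMA = 0.95  # discount factor

def q(s, a, V):
    """Expected discounted cost of taking action a in state s."""
    return action_cost[a] + sum(
        p * (state_cost[s2] + GAMMA * V[s2]) for s2, p in transitions[s][a])

V = {s: 0.0 for s in states}
for _ in range(500):  # value iteration to near convergence
    V = {s: min(q(s, a, V) for a in actions) for s in states}

policy = {s: min(actions, key=lambda a: q(s, a, V)) for s in states}
print(policy)  # e.g. {'safe': 'level', 'close': 'climb', 'collision': 'level'}
```

The optimized policy emerges from the numbers (advise a climb only when the collision risk outweighs the cost of an unnecessary alert) rather than from a rule a human engineer wrote down, which is the shift the paragraph above describes.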
In all likelihood, the system will not be foolproof; probably no system ever will be. But in comparison with automobile travel, air travel is already extraordinarily safe. It's not because the physics makes flying inherently safer than driving. Indeed, there was a time when flying was much riskier than it currently is. What makes air travel so much safer is primarily the differences between the cognitive networks within which each operates. In the ground traffic control system, almost none of the cognitive labor has been offloaded onto intelligent machines. Within the air traffic control system, a great deal of it has.
To be sure, every now and then the flight system will call on a human pilot to execute a certain maneuver. When it does, the system typically isn't asking for anything like expert opinion from the human. Though it may sometimes need to do that, in the course of its routine, day-to-day operations the system relies hardly at all on the ingenuity or intuition of human beings, including human pilots. When the system does need a human pilot to do something, it usually just needs the human to expertly execute a particular sequence of maneuvers. Mostly things go right. Mostly the humans do what they are asked to do, when they are asked to do it. But it should come as no surprise that when things do go wrong, it is quite often the humans and not the machines that are at fault. Humans too often fail to respond, or they respond with the wrong maneuver, or they execute the needed maneuver in an untimely fashion.
I have focused on the air traffic control system because it is a relatively mature and stable cognitive network in which a robust balance between human and machine cognitive labor has been achieved over time. Given its robustness and stability and the degree of safety it provides, it's pretty hard to imagine anyone feeling any degree of nostalgia for the days when the task of navigating the airways fell more squarely on the shoulders of human beings and less squarely on machines. On the other hand, it is not at all hard to imagine a future in which the cognitive role of humans is reduced even further, if not entirely eliminated. No one would now dream of traveling on an airplane that wasn't furnished with the latest radar system or the latest collision avoidance software. Perhaps the day will soon come when no one would dream of traveling on an airplane piloted by, of all things, a human being rather than by a robotic AI pilot.
I suspect that what is true of the air traffic control system may eventually be true of many of the cognitive networks in which human and machine intelligence systematically interact. We may find that the cognitive labor that was once assigned to the human nodes has been given over to intelligent machines for narrow economic reasons alone, especially if we fail to engage in collective decision making that is intentional, deliberative, and reflective, and thereby leave ourselves at the mercy of the short-term economic interests of those who currently own and control the means of production.
We may comfort ourselves that even in such an eventuality, that which is left to us humans will be cognitive work of very high value, finely suited to the distinctive capacities of human beings. But I do not know what would now assure us of the inevitability of such an outcome. Indeed, it may turn out that there isn't really all that much that needs doing within such networks that is best done by human brains at all. It may be, for example, that within most engineered cognitive networks, the human brains that still have a place within them will mostly be along for the ride. Both possibilities are, I think, genuinely live options. And if I had to place a bet, I would bet that for the foreseeable future the total landscape of engineered cognitive networks will increasingly contain engineered networks of both kinds.
In fact, the two systems I mentioned earlier, the medical diagnostic and treatment system and the ground traffic control system, already provide evidence for my conjecture. Start with the medical diagnostic and treatment system. Note that a great deal of medical diagnosis involves expertise at interpreting the results of various forms of medical imaging. As things currently stand, it is mostly human beings who do the interpreting. But an impressive variety of machine learning algorithms that can do at least as well as humans are being developed at a rapid pace. For example, CheXNet, developed at Stanford, promises to equal or exceed the performance of human radiologists in the diagnosis of a wide variety of different diseases from X-ray scans. Partly because of the success of CheXNet and other machine learning algorithms, Geoffrey Hinton, the founding father of deep learning, has come to regard radiologists as an endangered species. On his view, medical schools ought to stop training radiologists beginning right now.
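For a sense of what a CheXNet-style model looks like in code, here is a minimal, hypothetical sketch: a DenseNet-121 backbone (the architecture CheXNet used) with a fresh 14-label output head. The weights below are random and untrained; this shows the general recipe, not the released CheXNet model.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_FINDINGS = 14  # CheXNet predicted 14 thoracic disease labels

# DenseNet-121 backbone with a new multi-label classification head.
# Weights are random here; CheXNet was trained on ~100,000 labeled X-rays.
model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, NUM_FINDINGS)
model.eval()

xray_batch = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed scan
probs = torch.sigmoid(model(xray_batch))  # independent per-disease probabilities
print(probs.shape)  # torch.Size([1, 14])
```

The sigmoid head reflects a multi-label setup: a single scan can show several findings at once, so each disease gets its own independent probability.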
Even if Hinton is right, that doesn't mean that all the cognitive work done by the medical diagnostic and treatment system will soon be done by intelligent machines. Though human-centered radiology may soon come to seem quaint and outmoded, there is, I think, no plausible short- to medium-term future in which human doctors are completely written out of the medical treatment and diagnostic system. For one thing, though the machines beat humans at diagnosis, we still outperform the machines when it comes to treatment, perhaps because humans are much better at things like empathy than any AI system is now or is likely to be anytime soon. Still, even if the human doctors are never fully eliminated from the diagnostic and treatment cognitive network, it is likely that their enduring roles within such networks will evolve so much that the human doctors of tomorrow will bear little resemblance to the human doctors of today.
By contrast, there is a quite plausible near- to medium-term future in which human beings within the ground traffic control system are gradually reduced to the status of passengers. Someday in the not terribly distant future, our automobiles, buses, trucks, and trains will likely be part of a highly interconnected ground transportation system in which much of the cognitive labor is done by intelligent machines rather than human brains. The system will involve smart vehicles in many different configurations, each loaded with advanced sensors that allow them to collect, analyze, and act on huge stores of data, in coordination with each other, the smart roadways on which they travel, and perhaps some centralized information hub that is constantly monitoring the whole. Within this system, our vehicles will navigate the roadways and railways safely and smoothly with very little guidance from humans. Humans will be able to direct the system to get this or that cargo or passenger from here to there. But the details will be left to the system to work out without much, if any, human intervention.
Such a development, if and when it comes to full fruition, will no doubt be accompanied by quantum leaps in safety and efficiency. But no doubt it would be a major source of a possibly permanent and steep decrease in the net demand for human labor of the sort that we referred to at the outset. All around the world, many millions of human beings make their living by driving things from one place to another. Labor of this sort has traditionally been rather secure. It cannot possibly be outsourced to foreign competitors. That is, you cannot transport beer, for example, from Colorado to Ohio by hiring a low-wage driver operating a truck in Beijing. But it may soon be the case that we can outsource such work after all. Not to foreign laborers but to intelligent machines, right here in our midst!
I end where I began. The robots are coming. Eventually, they may come for every one of us. Walls will not contain them. We cannot outrun them. Nor will running faster than the next human being suffice to save us from them. Not in the long run. They are relentless, never breaking pace, never stopping to savor their latest prey before moving on to the next.
If we cannot stop or reverse the robot invasion of the built human world, we must turn and face them. We must confront hard questions about what will and should become of both them and us as we welcome ever more of them into our midst. Should we seek to regulate their development and deployment? Should we accept the inevitability that we will lose much work to them? If so, perhaps we should rethink the very basis of our economy. Nor is it merely questions of money that we must face. There are also questions of meaning. What exactly will we do with ourselves if there is no longer any economic demand for human cognitive labor? How shall we find meaning and purpose in a world without work?
These are the sort of questions that the robot invasion will force us to confront. It should be striking that these are also the questions presaged in my prescient epigraph from Mill. Over a century before the rise of AI, Mill realized that the most urgent question raised by the rise of automation would not be whether automata could perform certain tasks faster or cheaper or more reliably than human beings might. Instead, the most urgent question is what we humans would become in the process of substituting machine labor for human labor. Would such a substitution enhance us or diminish us? That has, in fact, always been the most urgent question raised by disruptive technologies, though we have seldom recognized it.
This time around, may we face the urgent question head on. And may we do so collectively, deliberatively, reflectively, and intentionally.
Read more: The Robots Are Coming – Boston Review
New ‘Feed Your Mind’ Initiative Launches to Increase Consumer Understanding of Genetically Engineered Foods – FDA.gov
Posted: March 5, 2020 at 6:22 pm
For Immediate Release: March 04, 2020
The U.S. Food and Drug Administration, in collaboration with the U.S. Environmental Protection Agency and the U.S. Department of Agriculture, today launched a new initiative to help consumers better understand foods created through genetic engineering, commonly called GMOs or genetically modified organisms.
The initiative, Feed Your Mind, aims to answer the most common questions that consumers have about GMOs, including what GMOs are, how and why they are made, and how they are regulated, as well as to address health and safety questions that consumers may have about these products.
"While foods from genetically engineered plants have been available to consumers since the early 1990s and are a common part of today's food supply, there are a lot of misconceptions about them," said FDA Commissioner Stephen M. Hahn, M.D. "This initiative is intended to help people better understand what these products are and how they are made. Genetic engineering has created new plants that are resistant to insects and diseases, led to products with improved nutritional profiles, as well as certain produce that don't brown or bruise as easily."
"Farmers and ranchers are committed to producing foods in ways that meet or exceed consumer expectations for freshness, nutritional content, safety, sustainability and more. I look forward to partnering with FDA and EPA to ensure that consumers understand the value of tools like genetic engineering in meeting those expectations," said Greg Ibach, Under Secretary for Marketing and Regulatory Programs at USDA.
"As EPA celebrates its 50th anniversary, we are proud to partner with FDA and USDA to push agricultural innovation forward so that Americans can continue to enjoy a protected environment and a safe, abundant and affordable food supply," said EPA Office of Chemical Safety and Pollution Prevention Assistant Administrator Alexandra Dapolito Dunn.
The Feed Your Mind initiative is launching in phases. The materials released today include a new website, as well as a selection of fact sheets, infographics, and videos. Additional materials, including a supplementary science curriculum for high schools, resources for health professionals, and additional consumer materials, will be released later in 2020 and 2021.
To guide development of the Feed Your Mind initiative, the three government agencies formed a steering committee and several working groups consisting of agency leaders and subject matter experts; sought input from stakeholders through two public meetings; opened a docket to receive public comments; examined the latest science and research related to consumer understanding of genetically engineered foods; and conducted extensive formative research. Funding for Feed Your Mind was provided by Congress in the Consolidated Appropriations Act of 2017 as the Agricultural Biotechnology Education and Outreach Initiative.
The FDA, an agency within the U.S. Department of Health and Human Services, protects the public health by assuring the safety, effectiveness, and security of human and veterinary drugs, vaccines and other biological products for human use, and medical devices. The agency also is responsible for the safety and security of our nation's food supply, cosmetics, dietary supplements, products that give off electronic radiation, and for regulating tobacco products.
Read the original here: New 'Feed Your Mind' Initiative Launches to Increase Consumer Understanding of Genetically Engineered Foods – FDA.gov
Building ‘better’ astronauts through genetic engineering could be key to colonizing other planets – Genetic Literacy Project
Posted: at 6:22 pm
Space exploration has long been a source of fascination. Since the stars first captured our attention, we have obsessed over that vast curtain of darkness that lies beyond our atmosphere. But to what end? What ultimate goal does mankind strive towards, if not the ability to visit and colonize other worlds?
Before we can take our first steps out into the universe, we have to answer a critical question: Do we have the ability to adapt to other environments very different from what we have on Earth, not only to survive, but to thrive? Instead of focusing on how we might terraform other planets to suit us, perhaps we should consider how we might use genetic engineering to alter our own bodies to suit those other planets.
As a jumping-off point, let's consider the feasibility of using the popular gene-editing tool CRISPR to alter human physiology to tolerate parameters outside of Earth's norms. If we take a look at common factors that are significant to human health, gleaned from our experience with space exploration, the most obvious choices for our attention are variations in gravity, atmospheric pressure and gas ratios, and solar radiation levels.
If we consider Mars as our template, because of its relative suitability for colonization, then we must compensate for two-thirds less gravity than Earth's. A lack of gravity results in a number of ill effects on human health, including a decrease in bone mass and density over time, particularly in the large bones of the lower extremities, as well as the spine. While we do not have research showing the impact of living on a planet with one-third Earth's gravity, we do know that we can expect losses in bone density somewhere under 1-2 percent per month, the amount lost in the microgravity environment of space.
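To put those monthly rates in perspective, here is a quick back-of-the-envelope calculation. The steady 1.5 percent monthly loss and the nine-month transit time are illustrative assumptions, not mission parameters.

```python
# Hypothetical compounding of bone-density loss on a long transit.
# Both numbers below are illustrative assumptions.
monthly_loss = 0.015   # assumed 1.5% loss per month in microgravity
months = 9             # assumed transit time to Mars

remaining = (1 - monthly_loss) ** months
print(f"Bone density remaining after {months} months: {remaining:.1%}")
# -> about 87.3%, i.e. a cumulative loss of roughly 12.7%
```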
For comparison, the elderly lose 1-1.5 percent per year in Earth gravity. Atmospheric pressure that is either too high or too low also results in complications; low atmospheric pressure means less available oxygen, causing altitude sickness and possibly death. Radiation levels from the sun are another variable well known to have upper and lower thresholds for optimal human health: low levels can lead to vitamin D deficiency, while high levels increase cell death and cancer risk.
It stands to reason that the human body has minimum environmental thresholds for healthy physiology as it grows, develops and lives. To colonize other planets successfully, we must consider solutions for overcoming these thresholds: for example, prostheses, domed colonies recreating an ideal or near-ideal environment, or, as this author suggests, the permanent genetic alteration of humanity as a species. This applies to our four chosen variables of gravitational force, atmospheric pressure, atmospheric gas ratios, and solar radiation levels. While science fiction might have us consider surgical and biomedical prostheses, or the more far-fetched use of animal DNA to change ourselves, the key to human adaptation for other planets lies in our own genetics; and it may well be CRISPR, which uses the enzyme Cas9 to introduce altered DNA sequences into existing cells and change how those cells function, that will make this possible.
Human genetic variation provides a veritable treasure trove of adaptations if one looks at the less common but heritable variations that on Earth may seem irrelevant, nonessential, or even maladaptive, but on another planet could be essential to survival. One example of a gene that, with engineering, could help humanity adapt to higher or lower gravity is LRP5. Recent research shows that mutations of the LRP5 gene are responsible for both low bone density and elevated bone density; in the latter case, from increased bone formation. A family of individuals in Nebraska carrying the mutation for elevated bone density have never experienced broken bones, even well into old age. A whole colony of such individuals, or of people engineered to enhance this mutation further, could be expected to fare much better during prolonged space travel in zero gravity, as well as in the low-gravity environment of a planet like Mars.
While an atmospheric pressure and gas makeup very similar to Earth's would be required for humans to survive and thrive outside of a spacesuit, Nepal's Sherpas, high-altitude dwellers in Ethiopia, and the Collas people of the Central Andes, as well as the deep-sea-diving Bajau, may provide a solution to living on planets with differences in atmospheric pressure and oxygen availability. The three groups of high-altitude dwellers appear to have separate adaptations for thriving in low-oxygen environments, and recent research indicates that there are genetic mutations in each of these groups. Sherpas' mutations allow for more efficient use of available oxygen and resistance to the ill effects of hypoxia.
Sherpas experience less of an increase in red blood cells than others and therefore avoid the ill effects that increase causes, such as edema and brain swelling. Instead, Sherpas have mitochondria that make more efficient use of the available oxygen, as well as more efficient anaerobic metabolism in the absence of oxygen. The Collas show differences in genes that control heart morphology, as well as cerebral vascular flow, as a means of withstanding the elevated hematocrit that comes with high-altitude living. The Amhara people living at high altitude in Ethiopia, unlike the Sherpas, do have lower oxygen saturation and higher hemoglobin levels compared to lowland dwellers in the region.
Research has yet to determine what adaptation favors the Amhara, but several genes that may play a role have been isolated. Another group, the sea-nomadic Bajau of Southeast Asia, may have complementary genetic variations that help them resist hypoxia and survive the high pressures of deep-sea diving. Researchers found them to have 50% larger spleens, as well as a gene, PDE10A, that controls a thyroid hormone thought to affect spleen size. Capitalizing on any of these genetic features would improve our ability to survive in a lower-oxygen atmosphere, perhaps on a newly terraformed Mars or under domes with oxygen rationing.
While we cannot yet determine how closely an atmosphere created on Mars could match Earth's, it stands to reason that an exact replica would be difficult to achieve. An atmosphere that lets in less radiation could impede our production of vitamin D, while a thinner atmosphere would admit an excess of radiation. Vitamin D deficiency could perhaps be handled by supplementation, or instead addressed by increasing our cells' response to ultraviolet light to boost vitamin D synthesis. On the other side of the coin, a thinner atmosphere opens us up to higher UVR, which would result in higher rates of skin cancer.
While skin pigmentation has high cultural and historical significance, it could also make our species more suitable for colonizing high-radiation planets: darker skin, with larger melanocytes that react proactively to UVA and UVB radiation through tanning and offer higher antioxidant and free-radical counteraction, would be protective and provide an advantage if we are to branch out into our solar system and beyond. At the same time, this solution poses the problem of vitamin D production.
The answer could lie in isolating and using the genes responsible for East Asian populations' lower skin pigmentation coupled with lower skin cancer rates than European populations. A study headed by the University of Pennsylvania isolated gene mutations responsible for skin pigmentation differences, in SLC24A5, MFSD12, OCA2, and HERC2, by studying African, South Asian Indian, and Australo-Melanesian populations; some of these mutations are associated with vitiligo and a form of albinism common in African populations. The mutations that confer higher vitamin D production on Europeans are not present in East Asians, indicating that a different mutation is responsible, and, while both populations have higher vitamin D production than African populations, Europeans have a 10-20 percent higher rate of skin cancer than both Africans and East Asians. Further research into these genes could provide targets for CRISPR to boost the protective factors in colonists' skin without sacrificing their vitamin D production.
The question remains: is CRISPR a feasible route to including some of these adaptations to create a new, more suitable colonist? To answer this question we look at the current status of CRISPR research.
While some experiments using CRISPR gene editing were conducted in the technology's infancy, including the controversial creation of twin girls in China designed to be resistant to HIV, we are still quite a bit of research away from using CRISPR with high success rates and full confidence, especially considering the repercussions of rushing into human trials; both the death of trial participants and long-term side effects such as cancer have occurred in gene-therapy trials.
According to information revealed by the FDA and NIH, 691 serious adverse events among trial volunteers had gone largely unreported in gene-therapy trials prior to the tragic and high-profile death of Jesse Gelsinger in a 1999 trial to treat his ornithine transcarbamylase deficiency (OTCD), a rare metabolic disorder. The death was blamed on ethical oversights and a rush to make gene editing pan out before it was ready. The result was a long period of fear and heightened oversight in gene editing, but also, in the case of James Wilson, the director of the University of Pennsylvania's Institute for Human Gene Therapy responsible for the trial that led to Gelsinger's death, greater caution in research methodology. He has since put safety at the forefront of his research and asserts that gene editing with CRISPR and other methods still carries enough risk to justify human trials only for diseases severe and debilitating enough that patients will accept those risks.
What does all this mean for our hypothetical future of using CRISPR to edit the DNA of human colonists for space colonization? Is the technology too far off to serve our purpose, or fraught with too much risk? Is it beyond our knowledge and skill to accomplish? The answer to each of these questions is, undoubtedly, no.
We've had too much success in treating complex genetic conditions, like the creation of an immune system for Ashanthi DeSilva, born with severe combined immunodeficiency (SCID). We've unlocked too many keys to making gene therapy safer and more effective to discount the possibility of its future use in advancing our species into harsher environments. While subsequent uses of gene therapy for SCID resulted in the development of leukemia years later, further advancements in the research have revealed the need to find the best delivery system for each body system. Adeno-associated viruses and lentiviruses are being looked at in place of the more aggressive adenoviruses and retroviruses for the delivery of DNA segments; both are less likely to provoke an immune response and less likely to trigger cell death, by way of the B35 gene, in healthy cells, and later cancer.
Regardless of the work ahead and the bumpy road that gene therapy has traveled, vast potential remains at our fingertips, whether through CRISPR or future gene-therapy tools. It is all but certain that we will one day have these skills at the ready to spread our species to other worlds, well equipped to survive and thrive in harsher environments.
Cherrie Newman is a writer and student of human reproduction and biological sciences. She is the author of a science fiction novel series entitled Progeny, under the pseudonym CL Fors. Follow her on her blog or on Twitter @clfors
Read this article:
Building 'better' astronauts through genetic engineering could be key to colonizing other planets - Genetic Literacy Project
Posted in Genetic Engineering
Comments Off on Building ‘better’ astronauts through genetic engineering could be key to colonizing other planets – Genetic Literacy Project
The Pacific Declaration: 20 Years Later – Earth Island Journal
Posted: at 6:22 pm
When my father, Marc Lappé, died in 2005 at the age of 62 from glioblastoma, he left behind a wife, five children, two stepchildren, and an unfinished manuscript. In the wake of his death, while struggling to make sense of a world without him, I holed up in a writers' retreat on the rocky coastline of Provincetown, Massachusetts, to see if I could transform his rough ideas into something presentable and publish what would have been his 15th book.
I never succeeded. But the central idea of his book has stayed with me all these years. Drafted at the dawn of the age of genetic engineering, long before the development of CRISPR technologies and new ways to alter life as we know it, the book's message was simple: We've developed frameworks within and across nation states for protecting environmental integrity for future generations (think of the US Endangered Species Act). Now, as we attempt to alter the genetic makeup of living beings, we need new strategies and frameworks for protecting the planet's genetic integrity for future generations. He was writing as a scientist, an ethicist and a parent.
I have been reflecting on his insight from so many years ago on the 20th anniversary of The Pacific Declaration, a statement of the ethical principles for this era of genetic engineering that my father and two dozen scientists, ethicists, and authors crafted on another rocky coastline in Bolinas, California and published in October 1999.
The Declaration states: "In recognition of the fundamental importance of our planet's natural genetic heritage and diversity, and in acknowledgment of the power of genetic engineering to transform this heritage, [we] believe that the proponents and practitioners of genetic technologies must adhere to the principles of prudence, transparency, and accountability."
The document was fundamentally a call to apply the precautionary principle to our collective approach to genetic engineering. The authors noted that the burden of proof must be on those promoting genetic engineering to show that these technologies contribute to the general welfare of consumers, farmers, and society. And that they do so, importantly, without compromising the viability of traditional agricultural practices, including organic farming.
The Declaration was also a call to bring democratic deliberation to decisions about regulation and research priorities: "In democratic societies, any decision to deploy powerful new technologies must be made with full public participation and accountability," the Declaration states. And it was a demand for food sovereignty, the concept developed in the 1990s by the global peasant movement La Via Campesina, which calls for farmer and community power over what food is grown, where, and how.
The month after my father and others gathered to write this Declaration, I found myself getting tear-gassed in the streets of Seattle. At the time, I was a graduate student at Columbia University, studying trade policy and globalization. Participating in the global action against the World Trade Organization (the so-called Battle of Seattle) felt like an appropriate extracurricular activity.
The Seattle action was also intimately tied to the work of my father and his colleagues. For the massive demonstrations in November 1999 against the new global trade regime were also about the future of food and how genetic engineering would affect farmers and eaters all around the world. In the streets, I heard as much from the Teamsters and environmentalists as I heard from Mexican farmers calling for protections of their corn markets in the face of American genetically engineered corn imports.
Since the Pacific Declaration was penned in 1999, commodity agriculture in the US has been remade by genetic engineering. The majority of US corn and soy grown today has been genetically engineered most of it to be resistant to the toxic herbicide Roundup. And the impacts of genetic engineering can now be felt in communities around the world burdened with exposure to toxic pesticides used in concert with these crops, including the tens of thousands suffering from cancers thought to be linked to the weedkiller Roundup with lawsuits pending against its producer, Bayer (which bought Monsanto in 2018). Today, despite the urging of scientists like those who penned the Pacific Declaration, there are no precautionary principles in the US regulatory system for these technologies.
When my dad and his colleagues wrote the Pacific Declaration, it was a call for all of us to ask big questions of this new genetic age: Who benefits? Who is harmed? How do these decisions affect future generations? Twenty years later, these questions are just as pressing.
See the article here:
The Pacific Declaration: 20 Years Later - Earth Island Journal
Posted in Genetic Engineering
Comments Off on The Pacific Declaration: 20 Years Later – Earth Island Journal
Zebrafish are the tropical minnows advancing genetics and molecular biology – TMC News – Texas Medical Center News
Posted: at 6:22 pm
Iridescent blue-striped zebrafish dart back and forth in tiny tanks stacked floor-to-ceiling in the basement of the Baylor College of Medicine. The freshwater minnows, some 13,000 strong in their watery studio apartments, play an integral role in innovative biomedical research.
They are part of the Gorelick Lab, one of more than 3,250 sites in 100 different countries using zebrafish to advance medicine and better understand human diseases. Led by Daniel Gorelick, Ph.D., assistant professor in the department of cellular and molecular biology at Baylor, the lab studies zebrafish to learn how certain hormones and chemicals affect the development and function of the human heart and brain, as well as other tissues.
Although science and technology are constantly evolving, zebrafish have remained relevant research tools for almost 50 years. Today, scientists are harnessing the power of CRISPR-Cas9 technology, which can edit segments of the genome by deleting, inserting or altering sections of the DNA, to generate specific mutations in zebrafish.
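As a rough illustration of what "targeting" means here: Cas9 is steered to a site by a short guide sequence and, in the common SpCas9 system, cuts only where the target is immediately followed by an "NGG" protospacer-adjacent motif (PAM). The Python sketch below is a toy scan over a made-up sequence, not a lab-grade guide-design tool; the sequence and parameters are purely illustrative:

```python
# Toy illustration of CRISPR-Cas9 target selection: list every site on one
# strand where a 20-base guide sequence would sit immediately 5' of an
# "NGG" PAM. Real guide design also scores off-targets, GC content, etc.

import re

def find_guide_sites(dna: str, guide_len: int = 20):
    """Yield (position, guide, pam) for each NGG PAM with room for a guide."""
    dna = dna.upper()
    for m in re.finditer(r"(?=[ACGT]GG)", dna):  # zero-width scan for PAMs
        pam_start = m.start()
        if pam_start >= guide_len:
            guide = dna[pam_start - guide_len:pam_start]
            yield pam_start - guide_len, guide, dna[pam_start:pam_start + 3]

# Hypothetical sequence, for demonstration only
sequence = "ATGCGTACCGGTTAGCTAGCATCGATCGGAGCTTACGGATCGTTACGCTAGGCATCGA"
for pos, guide, pam in find_guide_sites(sequence):
    print(f"guide at {pos}: {guide} | PAM: {pam}")
```

The mutant zebrafish strains described here come from injecting such guide RNAs together with Cas9 into embryos, so each candidate site corresponds to a place the genome can be cut and edited.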
"This has been a huge advance because it allows us to create mutant strains of zebrafish that have the same mutations as are found in a human disease," said Gorelick, whose lab is housed in Baylor's Center for Precision Environmental Health and is currently undergoing an expansion to accommodate as many as 30,000 fish.
In addition, scientists have long sought to map the cell-by-cell progression of animals, in pursuit of understanding how a single cell develops into the trillions of cells that make up an intricate biological system of organs. With single-cell RNA sequencing, a technology named Science magazine's 2018 Breakthrough of the Year, scientists are able to track the intricate stages of embryo development in unprecedented detail, allowing researchers like Gorelick to study cascading effects at the cellular level.
"There's just so much evidence now that a lot of the drugs that are effective in humans are also effective in [zebrafish], so people are now starting to use fish to discover drugs," Gorelick said. "You want to know, if you're taking a drug or you're exposed to some pollutant, does that cause birth defects? How does that affect the life of humans? We can use [zebrafish] as research tools to understand how the chemicals normally work in a normal embryo."
Regenerative heart
Zebrafish are named for the colorful horizontal stripes on their bodies, and can grow from 1.5 to 2 inches in length. The tropical fish are native to South Asia.
On the surface, zebrafish appear nothing like humans, but 70 percent of the genes in humans are found in zebrafish and 84 percent of human genes associated with human disease have a zebrafish counterpart, studies show.
George Streisinger, an American molecular biologist and aquarium enthusiast, pioneered the use of zebrafish in biomedicine at the University of Oregon in 1972. His breadth of knowledge about zebrafish laid the groundwork for research methodologies, including developing breeding and care standards and creating tools for genetic engineering and analysis. He performed one of the first genetic screens of zebrafish by using gamma rays to randomly mutate the DNA of certain zebrafish and identify offspring that had notable phenotypes, such as pigmentation defects.
"That caused a big explosion in the field and then that's when things really took off," Gorelick said.
Zebrafish are now used as a genetic model for the development of human diseases, including cancer, cardiovascular diseases, infectious diseases and neurodegenerative diseases, to name a few. Down the street from Gorelick's lab, John Cooke, M.D., Ph.D., is using zebrafish to study atherosclerosis, the major cause of heart disease in the country. Although zebrafish have only one ventricle to pump blood, whereas humans have two (a left and a right ventricle), their vasculature is very similar to ours.
"The zebrafish can help us in understanding the cardiovascular system, in achieving those basic insights, and in translating those basic insights towards something that's potentially useful for people," said Cooke, director of the Center for Cardiovascular Regeneration at Houston Methodist Research Institute.
Cooke hopes that studying the regenerative capabilities of the zebrafish heart will lead to new discoveries that help human patients.
"You can remove 20 percent of their heart, and they can regenerate it," Cooke explained. "Why is that? We want to know. There are groups that are studying that amazing regenerative capacity of the [zebrafish] heart, and those insights obtained from that work may lead us to new therapies for people to regenerate the human heart or, at least, improve the healing after a heart attack."
Watching cells migrate
Although mice are genetically closer to humans than zebrafish, sharing 85 percent of the same genome, zebrafish have a few key advantages for researchers.
On average, zebrafish produce between 50 and 300 eggs, all at once, every 10 days. Their rapid breeding allows scientists to quickly test the effects of genetic modifications (such as gene knockouts and gene knock-ins) on current fish, as well as on ensuing generations, as the quick calculation below illustrates.
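Those clutch numbers compound quickly. A trivial back-of-the-envelope calculation (ignoring survival rates and breeding pauses; the point is only the scale that makes large genetic screens practical):

```python
# Rough scaling of the clutch numbers quoted above: 50-300 eggs every
# 10 days from a single breeding female. Survival and breeding pauses are
# ignored; this just shows the raw throughput available for genetic screens.

EGGS_LOW, EGGS_HIGH = 50, 300   # eggs per clutch (figures from the article)
DAYS_PER_CLUTCH = 10

clutches_per_year = 365 // DAYS_PER_CLUTCH  # 36 clutches
print(f"clutches per year: {clutches_per_year}")
print(f"embryos per female per year: {EGGS_LOW * clutches_per_year:,}"
      f" to {EGGS_HIGH * clutches_per_year:,}")
```

Even at the low end, a single female can supply on the order of 1,800 embryos a year, which is why a basement facility of 13,000 fish can feed so many parallel experiments.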
In addition, zebrafish eggs are fertilized and develop externally, meaning the sperm meets the egg in the water. This allows scientists to access the embryos more easily than mouse embryos, which develop inside the womb. In one of his research projects, Gorelick simply adds drugs to the water to see how the zebrafish are affected.
"Most drugs in the water will get taken up by the embryo," Gorelick said. "We add it into the water and it gets taken up the next day when they're just one day old. All of that discovery happened in zebrafish because you can literally watch it live."
Not only do zebrafish embryos develop quickly, they are also transparent. Within two to four days, a zebrafish will develop all its major organs, including eyes, heart, liver, stomach, skin and fins.
"We can literally watch these cells migrate from different parts of the embryo, form the tube, constrict, form the hourglass, loop on itself, beat regularly and see blood flow all at the same time," Gorelick said. "When there's a belly and a uterus, you don't have access. You can use things like ultrasound, like we do with humans, but you can't get down to single-cell resolution like we can with the fish."
Ultimately, zebrafish have proven to be a powerful resource for researchers. Although all zebrafish studies are confirmed in rats and mice, followed by human tissue, they constitute a significant stepping stone.
"You wouldn't want to build a house only using a hammer and a screwdriver. I want a power drill and I want a band saw," Gorelick said. "Fish are part of that. They're not a cure-all. They're not the only tool, but they're an important tool."
Here is the original post:
Zebrafish are the tropical minnows advancing genetics and molecular biology - TMC News - Texas Medical Center News
Posted in Genetic Engineering
Comments Off on Zebrafish are the tropical minnows advancing genetics and molecular biology – TMC News – Texas Medical Center News
35 drugs in the race for a coronavirus treatment – Genetic Literacy Project
Posted: at 6:22 pm
As the world scrambles to monitor and contain the COVID-19 outbreak, drug companies are racing to develop or repurpose treatments to combat the potential pandemic. The death toll continues to climb.
A new survey by Genetic Engineering & Biotechnology News (GEN) reveals 35 active drug development programs in North America, Europe, and China. Those 35 include the treatments that have received the greatest public attention in recent days, being developed by companies that range from pharma giants like GlaxoSmithKline and Sanofi to small and large biotechs such as Moderna and Gilead Sciences. Gilead has begun clinical trials in China after peer-reviewed journals showed its antiviral candidate, remdesivir, having positive results in a case involving an American patient and in Chinese in vitro tests.
China's status as the center of the SARS-CoV-2 outbreak is reinforced by a statistic tucked at the bottom of a report published February 28 by the state-run Xinhua news agency: Of 234 clinical trials registered with the Chinese Clinical Trial Registry, nearly half (105) focus on treatments for COVID-19.
This list is certain to multiply in coming weeks as global health agencies, governments, and drug developers step up efforts against the SARS-CoV-2 virus.
Read the original post
Read this article:
35 drugs in the race for a coronavirus treatment - Genetic Literacy Project
Posted in Genetic Engineering
Comments Off on 35 drugs in the race for a coronavirus treatment – Genetic Literacy Project
First-Year Lab Experience Gave This Student the Confidence to Aim for a Ph.D. – UVA Today
Posted: at 6:22 pm
A University of Virginia biomedical engineering student is trying to tackle the world's No. 1 cause of death on a genetic level.
Rita Anane-Wae, from Ghana by way of Glendale, Arizona, and a third-year biomedical engineering student, is using a 2019 Harrison Undergraduate Research grant to seek a genetic solution to atherosclerosis, or the build-up of plaque in one's arteries, which impedes blood flow.
"There are cells that will try to fix this problem by covering them and basically pushing the plaque down to allow blood flow," she said. "These cells will try to reduce that plaque so that there is correct blood flow. In very serious cases, the plaque can harden and break off. Once it breaks, it can get lodged somewhere and cause a stroke or a heart attack."
Created through a gift from the late David A. Harrison III and his family, the Harrison Undergraduate Research Awards fund outstanding undergraduate research projects. Selected by a faculty review committee, awardees receive as much as $4,000 apiece to pursue their research interests, under the direction of a faculty mentor.
Anane-Wae started working in a laboratory run by Mete Civelek, an assistant professor of biomedical engineering, as a second-year student.
Civelek had already altered her life. Anane-Wae came to UVA to be a chemical engineer. She met Civelek when she signed up as a first-year student for a program that offered faculty mentoring.
"At the time I was a chemical engineering major with an interest in biomedical engineering," Anane-Wae said. "After talking with him, he was able to assuage my fears about biomedical engineering."
"Biomedical engineering is a relatively new field and as such, I did not believe there were many jobs out there, and my parents were worried for the same reason," she said. "Mete has a chemical engineering undergrad degree and a master's and Ph.D. in biomedical engineering, so he was the perfect person for me to talk to. He explained the two fields in a unique way, unlike what I had read and seen on YouTube."
"Honestly, I love biomedical engineering. When I switched into biomedical engineering, literally in my first class, I thought, 'Oh, my God, this is home.' I am learning about anatomy, physiology, genes and cells, and it is still all really exciting for me."
Civelek also suggested Anane-Wae participate in the research trip to Uganda through the UVA Minority Health & Health Disparities International Research Training program to perform research on congestive heart failure. While in Uganda, Anane-Wae made rounds with a doctor at a local hospital and met a 17-year-old girl suffering from congestive heart failure.
"Her legs were all swollen," Anane-Wae said. "She had edema and her stomach was filled with fluid. I was looking at her and thinking, 'This girl can't lay down because of all the swelling and she can't even be at rest.' And I was thinking, 'She is about my age and I am fortunate enough to be traveling the world and she is here stuck in this hospital bed.'"
Her encounter with the girl became part inspiration and part reminder that congestive heart failure does not strike only older patients.
"I have a hard time accepting what I am capable of doing," Anane-Wae said. "Being here, being in Uganda, working in the lab, it has taught me that I am basically capable of making change. I know what I am supposed to be doing with my time and my future and I know that doing it makes me happy and will make other people better."
In her lab work, Anane-Wae studies a specific gene, melanoma inhibitor activity 3, or MIA3, that affects smooth muscle cells.
"Smooth muscle cells are able to basically cover the plaque in that disease state," Anane-Wae said. "We are running experiments to see how us modulating MIA3 affects the disease."
She said she and members of the research team in the lab also performed experiments knocking out the MIA3 gene from the cells, which led to a more serious disease state.
"I think experiments like these are really important because we are not yet at the stage where we can do gene therapy on a person," Anane-Wae said. "If you knock out specific genes, it will affect things that we don't understand yet."
Anane-Wae is working on a small section of a large field, but she thinks there is promise in the work she is doing.
"The genome-wide association studies show that 161 different genes so far have been associated with coronary artery disease," she said. "And we are studying just one. There is so much further that we have to go."
"The path is really long, but we are trying to understand the mechanism by which one gene affects the disease and if we actually figure out that mechanism, we can try to apply it to the other genes and maybe understand the bigger picture."
Research can lead her down many blind alleys, which she understands. Anane-Wae is also very conscious of the law of unintended consequences, and how something that solves one problem can create other problems in the process.
"We can say that about everything," she said. "I think that is the way with all new development. You fix problems and new ones will arise, and then you fix those, too. So we can only do so much. But I think what I have learned is that I have found something about which I am passionate. I have found something that I enjoy and here at UVA, I have found a community of people who will help me develop my skills."
Included in that community, Anane-Wae cited Civelek and Redouane Aherrahrou, an American Heart Association Postdoctoral Fellow with whom she works.
Aherrahrou has known Anane-Wae since she joined the lab in 2018. "When she first joined our lab, Rita knew only the fundamental lab skills and methods," he said. "After a short amount of training, she learned rapidly and became very familiar with the cell culture techniques and appropriate lab handling. She performed the experiments independently. Her interactions with other lab members are both professional and friendly."
He described Anane-Wae as a diligent researcher, a gifted student, an inspiring person, and enjoyable to be around.
"She has a great personality, is open to guidance and responds well to criticism," he said. "She wants to apply to Ph.D. programs after she graduates, and I predict a great future in her career as a research scientist."
Civelek said he enjoys having Anane-Wae as part of his team.
"She is hard-working, curious and eager to make a scientific impact," he said. "I can see the joy in her face when she learns something new. She gets along well with everyone in the lab and is a role model to those who are junior to her. She has a bright future and I am very proud of her accomplishments."
Civelek said Anane-Wae was recently awarded a German Academic Exchange Research Internship in Science and Engineering, which is presented to only 300 students from the U.S. and Canada.
"Redouane and Mete both have high standards for me and motivate me to do my very best," Anane-Wae said. "They have instilled a confidence in me that I did not have prior to joining the lab, and they continuously push me to achieve great things. I am so fortunate to have these two individuals as mentors, in addition to all of the other members in the laboratory."
A Blue Ridge Scholarship recipient, Anane-Wae is a member of the National Society of Black Engineers and the Society of Women Engineers. She has also received a Hugh Bache Scholarship.
Anane-Wae said she is looking at doing big things, such as gene therapy, but realizes that she has to take small steps at first, and that her friends in the lab will help her out when things go wrong.
She has also learned that research is a team effort, not a solo pursuit.
"You can't do research by yourself," she said. "You won't be able to get anything done. You will have to depend on other people and you have to be able to share what you have learned. You won't get anything done in any amount of time if you don't trust other people and work together."
Read this article:
First-Year Lab Experience Gave This Student the Confidence to Aim for a Ph.D. - UVA Today
Posted in Genetic Engineering
Comments Off on First-Year Lab Experience Gave This Student the Confidence to Aim for a Ph.D. – UVA Today
Microorganisms And The Indian Patents Scenario – Intellectual Property – India – Mondaq News Alerts
Posted: at 6:22 pm
A microorganism is a microscopic organism, known to be among the earliest life forms on earth. Viruses, fungi, bacteria, archaea, protozoa and algae are the six major forms of microorganisms, exploited expeditiously by biotechnologists and microbiologists for research purposes. From beer brewing and bread making to the mass production of antibiotics, microorganisms are used by scientists in all such processes to reach the desired results. Genetic engineering techniques, DNA typing and the like have further paved the way for genetically modified organisms, such as the genetically modified bacterium at issue in the US Supreme Court case of Diamond v. Chakrabarty.
The case of Diamond v. Chakrabarty1 in 1980 opened the gates for the patentability of microorganisms: the claim of microbiologist Dr. Ananda Chakrabarty for the grant of a patent on a live, human-made, genetically engineered bacterium capable of breaking down the components of crude oil was accepted by the US Supreme Court. The Commissioner of Patents of the United States had denied the claim for patenting the bacterium per se, stating that microorganisms are products of nature and hence non-patentable under the US patents regime; that decision was reversed by the United States Court of Customs and Patent Appeals. Aggrieved by the decision of the Court of Customs and Patent Appeals, Sidney A. Diamond, the Commissioner of Patents and Trademarks, appealed to the US Supreme Court2, which again ruled in favour of Chakrabarty, establishing that a human-made, genetically engineered bacterium capable of treating oil spills was an invention possessing novelty, usefulness, non-obviousness and industrial applicability3, qualities that a naturally occurring microorganism lacked.
Before the US Supreme Court's decision in Diamond v. Chakrabarty, patent protection was not granted to microorganisms as product claims, but only to process claims in which microorganisms were used as a medium in inventions.4
Article 27(3)(b) of TRIPS 1994 further established that microorganisms and non-biological and microbiological processes are patentable, by stating: "Members may also exclude from patentability, plants and animals other than micro-organisms, and essentially biological processes for the production of plants or animals other than non-biological and microbiological processes."
Microorganisms per se can thus be patented; however, it should be noted that a patent is granted not for a discovery but for an invention that is novel, non-obvious, useful and capable of industrial application. Therefore, a patent can only be granted for a microorganism when there is human intervention to create a new, non-obvious and useful microorganism by way of genetic modification/engineering, cell fusion, gene therapy or other microbiological or non-biological techniques.5
Further, since full disclosure in a written description is not possible for inventions involving microorganisms, the Budapest Treaty provides a mechanism for depositing the microorganism with an "International Depositary Authority" for the purposes of patent procedure before the national patent offices of all the contracting states.
The Indian Patents Act, 1970 brought microorganisms within the purview of patentability through the Patents (Amendment) Act, 2002, in compliance with TRIPS.
According to Section 3(j) of the Patents Act, 1970, plants, animals, seeds and essentially biological processes, apart from microorganisms, are not patentable. Section 3(j) therefore allows the patentability of microorganisms.
The landmark judgment of the Calcutta High Court in Dimminaco A.G. v. Controller of Patents & Designs, delivered on 15 January 2001, prior to the 2002 amendment of the Patents Act, 1970, established a benchmark in the field of microbiological research. In this case, an appeal was filed against the Assistant Controller of Patents & Designs, who had refused a patent for a process for the preparation of infectious Bursitis vaccine on the grounds that a process for preparing a vaccine containing a living virus cannot be considered manufacture, and that a vaccine comprising a living virus cannot be considered a substance or inanimate object. The court reversed the decision of the Assistant Controller and held that the process of preparing a vendible commodity containing a living substance is not excluded from the purview of the word 'manufacture', and that the Controller had erred in denying patent protection to the vaccine just because it contained a live virus. Furthermore, the end product was novel, capable of industrial application and useful for protecting poultry against contagious Bursitis infection, thus making the process an invention. The court allowed the appeal and directed that the petitioner's patent application be reconsidered within two months of the delivery of the judgment.
In the Supreme Court's recent judgment in Monsanto Technology Pvt. Ltd. v. Nuziveedu Seeds6, the plaintiff claimed that its patent in the man-made chemical product called NAS (nucleic acid sequence), containing the Bacillus thuringiensis (Bt) gene and capable of killing bollworms when inserted in cotton, was not excluded by Section 3(j) of the Patents Act, 1970, contrary to what the Division Bench of the Delhi High Court had held. Nuziveedu's claim was that NAS was merely a chemical composition incapable of reproduction, not a man-made inventive microorganism capable of industrial application7. The Supreme Court set aside the order of the Division Bench, restored the order of the Single Bench, and remanded the matter to the Single Bench of the Delhi High Court to be decided on the basis of expert advice and evidence; the Single Bench had held that the claims on NAS were rightly entertained by the Indian Patent Office and that the parties would remain bound by their sub-licence agreement.
Thus, the current scenario in India with respect to patents on microorganisms is still at an infant stage and needs to progress.
Microorganisms produced with human intervention and possessing novelty, utility and industrial applicability are patentable. Technological advancements in microbiology, genetics and related fields have complicated the issues relating to patents on microorganisms; the scientific aspects and legal drafting of an invention should therefore be handled with due precaution and consideration. Further, even though the issues involved in the Monsanto case were highly technical, the Supreme Court missed its opportunity to decide upon the facts in issue8.
Footnotes
1 447 US 303 (1980)
2 Ramkumar Balachandra Nair & Pratap Chandran Ramachandranna, "Patenting of microorganisms: Systems and concerns", Journal of Commercial Biotechnology, volume 16, pages 337-347 (2010). Access from: https://link.springer.com/article/10.1057/jcb.2010.20
3 Dr. B.L. Wadehra, Law Relating to Intellectual Property 66 (Universal-Lexis Nexis, Fifth Edition, Reprint 2018)
4 Id. at ii.
5 Globalization and Access to Drugs: Perspectives on the WTO/TRIPS Agreement, Health Economics and Drugs Series, No. 007 (Revised), Essential Medicines and Health Products Information Portal, a World Health Organization resource. Access from: https://apps.who.int/medicinedocs/en/d/Jwhozip35e/3.4.4.html
6 AIR 2019 SC 559
7 Kluwer Patent Blog, "Monsanto v. Nuziveedu: A Missed Opportunity by the Supreme Court?"
Access from: http://patentblog.kluweriplaw.com/2020/01/27/monsanto-v-nuziveedu-a-missed-opportunity-by-the-supreme-court/
8 Ibid.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
Read more from the original source:
Microorganisms And The Indian Patents Scenario - Intellectual Property - India - Mondaq News Alerts
Posted in Genetic Engineering
Comments Off on Microorganisms And The Indian Patents Scenario – Intellectual Property – India – Mondaq News Alerts
GMO brought us the Impossible Burger but what does it actually mean? – Screen Shot
Posted: at 6:22 pm
What is digital food? Here's everything you need to know
The food industry has been undergoing monumental changes in the past few decades: new technologies were implemented, even into the way we cook, produce and buy food. Climate change pushed more and more people to watch how much meat they consume, which in turn made becoming a vegetarian or vegan extremely trendy. This created a growing need for plant-based meats and non-dairy products.
Along with these shifts, a new term appeared in the culinary world: digital food. It's here, and it doesn't look like it's going to vanish anytime soon, so you had better get used to it. But what exactly is digital food, and what changes will it inspire in the ever-changing industry that is the food sector?
First of all, let's start by clarifying something: digital food and new technologies being used in the daily operations of food companies are two different things. New technologies meant that manufacturing processes were upgraded and started producing more food at a faster pace. But digital food is something else entirely. With social media came the recent boom in online food-based media, which completely changed the way we look at food online and seek out new recipes, restaurants and reviews.
We began craving new flavours from different countries, but it went even further than that. From sharing images of food on Instagram to augmented reality (AR) filters that shape our faces into a peach, a tomato or any food you can think of, the term 'digital food' still has many meanings, and there is therefore no general consensus on its definition. Why is it not clearer? Because digital food is so recent that it is still constantly changing. In other words, digital food is the future, but no one can tell what the future holds.
Forget about the Instagram face; the new trend involves face filters that either allow you to look like your favourite food or make photo-realistic 3D food models appear on your camera. Not only can you look like your favourite kind of bubble tea, but you can also help reduce food waste by playing with food digitally. Because, let's be honest, who hasn't tried the Greggs face filter that lets you know which Greggs product you are?
Screen Shot spoke to Clay Weishaar, also known as @wrld.space on Instagram, the AR artist specialising in food filters, about our new obsession with food, especially on social media, and why his designs mainly focus on digital food: "Food culture has always been a big subject on Instagram. So has fashion. This has really inspired me to explore the idea of food as fashion. I loved the idea of people wearing their favourite food. With augmented reality technology we have the ability to do this."
This can explain the kind of feedback his Instagram filters have received: "I am a huge foodie myself. Combining food, fashion and technology was a sweet spot for me. I think the reason my filters have almost 2 billion impressions is that food is something people identify with. It's a universal subject, and it is what brings people and cultures together."
Some big food chains have already seen the potential in digital food. For example, Domino's created a Snapchat filter that would let users see an AR pizza and offer them the possibility of ordering the pizza online, straight from their Snapchat app. Using AR, brands could show us exactly what a specific meal would look like, making it easier for potential customers to make up their minds about what they'd like to order.
Five years ago, people were writing about food online to complain about the trend of sharing pictures of meals on Instagram. Now, people are looking at, liking and sharing pictures of fake food: digital food.
Among the few who can already see the potential of digital food is Jessica Herrington, who created the Instagram account Fresh Hot Delicious, a completely digital restaurant specialising in digital desserts. She described the concept in OneZero, saying, "Each dessert exists as a freely available AR filter on Instagram. To simulate a real-world restaurant, the desserts sell out when the AR filters reach a specific number of views. Users can play with the desserts for free until they are sold out and become deactivated. In this way, the digital restaurant gives a life span to previously permanent digital objects."
Experiencing digital food through AR is an accessible and innovative way to engage with an audience. Food brands are trying to sell more than a product; they need to sell an experience, and digital food could help them build a connection with potential customers. The future of the food sector is digital, and we've only witnessed a few of the many ways we will consume digital food. As unusual as it may seem for now, digital food will offer us a new approach to traditional eating experiences, and I don't know about you, but all this has made me hungry.
See more here:
GMO brought us the Impossible Burger but what does it actually mean? - Screen Shot
Posted in Genetic Engineering
Comments Off on GMO brought us the Impossible Burger but what does it actually mean? – Screen Shot
What is the Covid-19 coronavirus and how might it continue to affect professional cycling? – Cyclingnews.com
Posted: at 6:22 pm
With the coronavirus now spreading rapidly in Italy, recent concerns that various spring races, most notably this Saturday's Strade Bianche, next week's Tirreno-Adriatico and Milan-San Remo (March 21), could be affected have now become very real, and their cancellation extremely likely, with the Italian government calling for a halt to all sporting and public events for the next month. Additionally, a number of cycling teams have revealed their own concerns in the past 24 hours about racing in Italy and at Paris-Nice in France, and many have already pulled out of the races.
The Italian races now seem most likely to be called off, although RCS Sport, the organiser of Strade Bianche, Tirreno-Adriatico and Milan-San Remo, hopes that its spring events can be rescheduled for a later date, with further announcements expected following meetings on Thursday.
But what exactly is the Covid-19 coronavirus? Cyclingnews dug into the facts, as reported by the Centers for Disease Control and Prevention and other sources, in order to explain more about the virus, who it affects and why it is important to control its spread.
Bottom line? Practising good personal hygiene and helping prevent the spread of the virus is the absolute best way to protect yourself, your loved ones, your neighbours and your community, and the world at large.
Covid-19 is the disease caused by a new type of coronavirus. The group is named after the viruses' appearance, with each virion surrounded by bunches of spiky proteins that look like a halo, or the corona of the sun, under a microscope. The virus that causes Covid-19 is actually named SARS-CoV-2.
Coronaviruses are common in animals, and in rare cases can pass to humans from direct contact. It is not yet known which animal was the source of SARS-CoV-2, although it is suspected it may have come from bats. Other types of coronaviruses that have jumped from animals to humans include Ebola, MERS (Middle East Respiratory Syndrome) and SARS (Severe Acute Respiratory Syndrome). SARS-CoV-2 emerged from the Huanan Seafood Wholesale Market in Wuhan, China.
Myth: The virus came from a genetic-research facility. Scientists have sequenced the entire genome of the virus and found no evidence that it was the result of genetic engineering, which would leave hallmark sequences behind.
The problem with SARS-CoV-2 is that it is new, so humans likely don't have immunity to it. Immunity from previous infection or vaccination can hamper the ability of a virus to spread, which is why vaccinations like the ones for polio, measles and diptheria are so effective at stopping the spread of the diseases they cause.
Viruses vary in their transmission rate; the rate depends on a number of factors, including when and how long a person is infectious, how the virus is transmitted, and how long it can stick around in the air or on surfaces. Scientists have been furiously studying SARS-CoV-2 to determine how it's passed along in order to predict how it will spread.
So far, scientists agree that Covid-19 is spread much like the flu or the common cold, by infected people coughing out tiny droplets of virus-laden moisture that can land on people within a few feet or be left on surfaces that others can pick up on their hands. People then become infected when they touch their eyes or mouth, or inhale the droplets.
As a result, good personal hygiene (hand-washing, and not touching your eyes, nose or mouth) is important. Most professional cyclists already practise this in order to try not to get sick during the season.
Because there is no vaccination yet for SARS-CoV-2, it is important to slow the spread using other methods, like isolating sick patients and quarantining people who might have come into contact with infected people. To be safe, these quarantines are kept in place for two weeks to make sure anyone who is infected with the virus can be identified once they begin to show symptoms. Quarantines can keep the virus from spreading outside the community.
To measure how infectious a disease is, epidemiologists calculate the R0 (R-nought), or the number of people an infected person is likely to infect. Some viruses, like chickenpox or measles, are highly contagious because they are carried on smaller vapour particles and can linger in the air much longer. There is little evidence at this point that Covid-19 is as infectious as that. One person with the measles can infect 12-18 people, while one person with Covid-19 typically infects two to three.
But the R0 isn't static; people can bring the infection rate down. According to Scientific American, the effective R0 of the SARS outbreak went from about three to 0.4 after people took preventative measures. Once the R0 goes below one, the virus will die out.
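A minimal sketch makes that threshold at one concrete. The toy model below is deterministic and highly simplified (real outbreaks are stochastic and R changes over time); it just multiplies each generation of cases by a constant effective R, using the 3.0 and 0.4 figures cited above:

```python
# Toy branching model: each "generation" of infections is the previous one
# multiplied by the effective R. Above 1, cases explode; below 1, they fade.
# The 3.0 and 0.4 values are the pre- and post-intervention SARS figures
# cited in the text; the seed of 10 cases is arbitrary.

def cases_by_generation(r_eff: float, initial_cases: int, generations: int):
    """Expected new cases per generation under a constant effective R."""
    cases = [float(initial_cases)]
    for _ in range(generations):
        cases.append(cases[-1] * r_eff)
    return cases

for r in (3.0, 0.4):
    series = cases_by_generation(r, initial_cases=10, generations=6)
    print(f"R = {r}: " + " -> ".join(f"{c:.1f}" for c in series))
```

Six generations at R = 3 turn 10 cases into more than 7,000; at R = 0.4 the same 10 cases dwindle to a fraction of one, which is exactly why quarantines and hygiene measures aim to push R below one.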
The best way to stop a virus is immunity, which is why scientists are working quickly to find a vaccination, but until then, the best steps to slow the spread are quarantines, such as those being imposed in regions where infections have been confirmed like China, South Korea and Italy, and good personal hygiene.
It's important to take this seriously if you have travelled to an area with active contagion. One woman in South Korea refused to take a test for coronavirus and is thought to have infected more than 30 people at her church.
The outbreak in Italy has now likely caused the cancellation of Milan-San Remo, Strade Bianche and Tirreno-Adriatico, and potentially even the Giro d'Italia (May 9-31) if the virus continues to spread, with RCS Sport's Mauro Vegni initially telling Cyclingnews of his concern for his races at last week's UAE Tour, and saying on Wednesday that he now hopes that Strade Bianche, Tirreno-Adriatico and Milan-San Remo, if and when declared cancelled, could be rescheduled for June or September.
The main symptoms of Covid-19 are fever, a cough and aching muscles. More severe cases can develop into difficulty breathing and pneumonia, while the most critical cases require hospitalisation. In Wuhan, the death rate has been higher than in other cities because the hospitals there became overwhelmed with patients.
If you've been to an area where Covid-19 is present, keep an eye on your temperature and quarantine yourself if you spike a fever, and be sure to cover your mouth when coughing, and wash your hands to reduce the chances of spreading it to those close to you.
Myth: Healthy people need to wear masks to keep from getting the virus. The masks are actually most effective for sick people to keep them from spreading the germs with their coughs. Masks require precise fitting and training to ensure that air doesn't get around them to your nose and mouth, so it's likely they won't protect you from getting sick if you're healthy.
The chances are in favour of you surviving the outbreak. Recent statistics show that 81 per cent of those infected have only mild symptoms. The reason health officials are so concerned is the potential of this virus to spread from human to human in a population without immunity to it. Healthy young people have the least to fear; it's the elderly and other vulnerable groups that are most at risk.
The chance of dying from a virus is known as the 'case fatality rate', or CFR. For the flu, the CFR is typically less than 0.5 per cent. The estimated CFR for Covid-19 is more than 2 per cent, based on numbers from China. That's why it is important to take this outbreak seriously.
Myth: The flu vaccine offers protection against SARS-CoV-2. No, it doesn't. But not getting the flu will help free up medical resources for people who get Covid-19.
The most recent study estimated the death rate of Covid-19 to be 2.3% of those infected in China. It's less deadly than MERS (34.4%) or SARS (9.6%), but more transmissible. But the measurement is still a rough estimate because there may be more people infected without symptoms or with mild symptoms that were not counted.
Unlike the 1918 flu, also known as Spanish flu, which struck healthy young adults and children, Covid-19 appears to be more deadly for the elderly and those with pre-existing medical conditions, such as cancer or cardiovascular disease. There have been no deaths of those aged nine or under, but the CFR for people 80 and older in China was 14.8%.
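The arithmetic behind those percentages is simple division: deaths over confirmed cases. The short sketch below uses approximate counts consistent with the 2.3% Chinese estimate above (the exact figures are assumptions for illustration), and also shows the undercounting caveat the text raises:

```python
# CFR is deaths divided by confirmed cases. If mild infections go uncounted,
# the true denominator is larger and the real fatality rate is lower.

def cfr(deaths: int, confirmed_cases: int) -> float:
    """Case fatality rate as a fraction of confirmed cases."""
    return deaths / confirmed_cases

# Illustrative counts, roughly consistent with the 2.3% estimate from China
reported = cfr(deaths=1_023, confirmed_cases=44_672)
print(f"reported CFR: {reported:.1%}")

# Same deaths, but hypothetically assuming half of all infections were missed
with_undercounting = cfr(deaths=1_023, confirmed_cases=2 * 44_672)
print(f"CFR if half of infections were missed: {with_undercounting:.1%}")
```

This is why early-outbreak CFR figures tend to drift downward as testing widens: the numerator (deaths) is counted fairly reliably, but the denominator grows as milder cases are found.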
The disease becomes critical when the body's immune system goes into overdrive, producing chemicals called cytokines that signal the immune system to produce more white blood cells to fight the pathogen. These immune cells then congregate at the site of infection (the lungs) and cause inflammation and fluid build-up as they try to battle the virus. In some patients, the body goes overboard producing cytokines, resulting in a 'cytokine storm' that can lead to organ failure, sepsis and death.
No. Be encouraged by the fact that 81 per cent of the cases are mild, and that one per cent of people infected don't show any symptoms at all.
Also be encouraged by the fact that scientists are working faster on this virus, and sharing their data more widely, than at any other time, using cutting-edge technology to learn how it works and how to fight it. There are already efforts under way to create a vaccine for Covid-19, and trials for various drugs to treat severe and critical cases.
The key is to buy time for science to come up with a vaccine to prevent the spread of the virus and to find effective treatments for the severe cases. It took less than a year for the SARS virus to be fully contained, and while Covid-19 has infected more people, there's still time to stop it in its tracks.
The biggest risk to professional cycling is travel restrictions because of quarantines. The summer will be unpredictable, however, with officials speculating that if the virus is not contained by May, this summer's Olympic Games in Tokyo, Japan, could be cancelled.
But don't worry: there's always e-racing...
See the rest here:
What is the Covid-19 coronavirus and how might it continue to affect professional cycling? - Cyclingnews.com
Posted in Genetic Engineering
Comments Off on What is the Covid-19 coronavirus and how might it continue to affect professional cycling? – Cyclingnews.com