The Prometheus League
Breaking News and Updates
Category Archives: Transhuman News
The Possible Reality of Artificial Gravity – futurism.com
Posted: July 12, 2016 at 6:16 am
In Brief
The prospect of artificial gravity on space stations is actually possible, but highly expensive and resource-demanding
Working to make astronauts' lives easier and less health-damaging is a pretty big goal for NASA. The real problem is limiting the effects of zero gravity on the human body. Science fiction has posited the solution of artificial gravity. However, as this video shows, that is no easy feat.
Artificial gravity could certainly be a possibility with current technology. Sadly, we are limited by the expense and availability of materials. Through the use of centrifugal force, a spinning space station would be able to generate artificial gravity. However, it would have to spin at a very fast rate; alternatively, it would have to be big enough not to need that speed. The trade-off is between being too big to build and spinning too rapidly to be practical.
Building something as huge as the science-fiction models would certainly be costly. Building the eponymous space stations from the 2013 film Elysium would require 500,000 people contributing $10 million each. On top of that, aluminum would have to be mined from asteroids, as Earth's supply would not be enough.
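The size-versus-spin trade-off follows from the centripetal relation ω²r = g: the required rotation rate falls as the square root of the radius grows. A minimal sketch in Python (the radii below are illustrative, not from the article):

```python
import math

G = 9.81  # target centripetal acceleration at the rim, m/s^2 (~1 g)

def spin_rate_rpm(radius_m: float, g: float = G) -> float:
    """Rotation rate (rpm) such that omega^2 * r equals g at the rim."""
    omega = math.sqrt(g / radius_m)      # angular speed, rad/s
    return omega * 60 / (2 * math.pi)    # convert to revolutions per minute

# A small ring must spin uncomfortably fast; a very large one barely turns.
for radius in (10, 100, 1000):  # metres
    print(f"radius {radius:>4} m -> {spin_rate_rpm(radius):5.2f} rpm")
```

A 100 m station needs roughly 3 rpm, which is why practical designs end up either very large or very fast-spinning.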
Original post:
The Possible Reality of Artificial Gravity - futurism.com
Posted in Futurism
Comments Off on The Possible Reality of Artificial Gravity – futurism.com
Tour of Basic Genetics
Posted: July 10, 2016 at 5:53 pm
Learn how traits pass from parents to offspring.
Explore traits, the characteristics that make us unique.
Get to know DNA, the molecule that holds the universal code of life.
Take a look at genes, the instructions for building a body.
Learn how proteins form the foundation for all living things.
These vehicles of inheritance pack a lot of information.
Funding provided by a gift from the R. Harold Burton Foundation.
APA format: Genetic Science Learning Center (2014, June 22) Tour of Basic Genetics. Learn.Genetics. Retrieved July 10, 2016, from http://learn.genetics.utah.edu/content/basics/
MLA format: Genetic Science Learning Center. "Tour of Basic Genetics." Learn.Genetics 10 July 2016 <http://learn.genetics.utah.edu/content/basics/>
Chicago format: Genetic Science Learning Center, "Tour of Basic Genetics," Learn.Genetics, 22 June 2014, <http://learn.genetics.utah.edu/content/basics/> (10 July 2016)
The rest is here:
Tour of Basic Genetics
Posted in Human Genetics
Comments Off on Tour of Basic Genetics
DNA Tests for Ethnicity & Genealogical DNA testing
Posted: at 5:53 pm
Isabel Rojas
Identity is an interesting concept. For the most part we like to believe that we define our own identity. The truth is that a lot goes into defining our identity, and what it comes down to is what we accept as our own. The more we know about ourselves, our own experiences, our family's past and heritage, and so on, the more our own identity changes and evolves and becomes further defined in our minds and accepted as our own. I have a lot of thoughts and experiences around this topic that have caused my own identity to grow and evolve over time. Here is a snapshot:
I was born in NYC, the youngest of 5 kids. My parents and three older siblings were born in Bogota, Colombia. My family migrated to NYC in the late 70s looking for a better life. After my brother and I were born in the early 80s my parents had begun to realize what a dangerous city it was at that time and decided to head back to Colombia. They worked hard to build a 3 story building where we would live, work, and rent out space. It was a 3 year process. But sadly Colombia at that time was worsening. Bomb threats throughout the city and in front of our new building became too much for my family. We made the trip back to NYC and a year later drove to Salt Lake City where we have lived for about 27 years.
People look at me and often wonder what I am.
People look at me and often wonder what I am. It is often both entertaining and frustrating when people attempt to find out where I am from. My name implies Hispanic/Latino, and considering that is the largest ethnic/minority population in Utah, it's a pretty safe guess. However, when I'm with my Polynesian friends, people think I'm Hawaiian or a mix of Polynesian and something else. In fact, in high school I MC'd a Polynesian dance group because I could pull off the look. When I travel, my friends have told me that they like having me around because I blend in just about anywhere. I recently attended a Nepali church service and had a few people ask me what part of Nepal I was from. It's fun when people assume I am from a different culture/heritage than I am. And I have to admit it's kind of entertaining watching people try to skirt around the inquiry as to where I am from.
I identify myself as Colombian, but the sad thing is that when I go to Colombia some family members consider me North American because I was born in the U.S. However, in the U.S. I am defined as Hispanic/Latino in just about every form of paperwork I fill out, and by associates, friends, and strangers. I often weave in and out of the wonderful experience of growing up straddling two worlds and cultures and the feeling of being neither from here nor there. There is a constant pull between how others identify and define me and how I choose to define and accept myself, my heritage, my culture, and the unknown history that somehow contributes to who I am.
As my dad and I have begun to explore our genealogy over the past 7 years or so, we've found that our family is largely from Spain, which is no big surprise. My mom is white; her mother was also fair-skinned with grayish-blue eyes. Some of her cousins that live in Colombia are blond and blue-eyed. But that isn't rare in Colombia, let alone South/Central America. Colombians have a wide range of ethnicities and, consequently, a lot of racial discrimination. The Spanish influence is very much present, and often people can easily say how many generations back their family came from Spain. My dad also suspects we have German ancestry somewhere back there.
I received an AncestryDNA kit a few years ago for my birthday. My friend knew I had been working on family history and thought I should give it a shot. Since then I've had my mother and my grandmother on my father's side tested as well. What surprised me the most in my results was that I'm 35% Native American, 5% African, and 29% from the Iberian Peninsula. This has drastically broadened the way I think about my identity and heritage. I feel a sense of connectedness with those areas of the world now and am anxious to dig deeper and see how far back our records can go. In a less personal sense, I feel like information like this can have a great influence on how people think about and treat each other. My grandmother, who took pride in being of "pure blood," meaning Spanish, would have completely rejected the notion that I'm 5% African, and likely would have blamed it on my father's side.
There is great power in understanding our deepest heritage and history and in giving ourselves permission to connect with others through that heritage and knowledge. It's liberating in many ways.
Like many who work on their family history, our family had a few lines where we were really struggling to find more information. My 2nd great-grandfather was a mystery ancestor on one of those lines. We could not pin him to a specific census, nor could we find any information about his arrival in the United States. We did, however, believe he was of Jewish descent.
With this DNA cousin match, we've been able to add a generation to our family tree.
Shortly thereafter, we were contacted by another Ancestry member who had used the AncestryDNA kit. He was a descendant of our mystery ancestor and, as it turns out, the 2nd cousin once removed of my father. He was able to point us to the correct 1860 census for the family, where we were able to discover other family members, and we should now be able to trace the family back to France. So with this DNA cousin match, we've been able to add a generation to our family tree, as well as identify several siblings and their spouses. For immigration research, it's so much easier to find a town of origin when you're looking at an entire family who came over rather than just one individual, so I'm really excited about the prospects.
In December of 2012 I received an AncestryDNA kit as a gift from my brother-in-law who was hoping to help me learn more about my roots as I was adopted.
More recently, an Ancestry employee was describing the AncestryDNA test to a potential investor and suggested he take the test to experience it. He did, and when his test results came back he was surprised to discover he was related to me either through a grandfather or great-grandfather. He did not recognize my name and when he shared the results with his father Greg, Greg was inspired to take the test as well. Greg's results indicated that I was a possible first cousin, and so he sent me a message.
This has opened a new chapter in my life, and it is a most welcome "life interruption."
In May of 2014 (less than two years after taking my own test), I received that letter from Greg. We eventually confirmed that we were half-brothers. While Greg's father was my father as well, my birth mother was in her early 20s when she was pregnant with me and had not informed my father. Within days of Greg's letter, I had discovered a half-brother and half-sister I had never met.
Unfortunately, both of my biological parents have since passed away. But I have now connected with my half-siblings Greg and Carole, my half-nephews and niece (Greg's three sons and daughter), and their families. I've had the most heartwarming embrace from my new brother, sister, and their kids. This has opened a new chapter in my life, and it is a most welcome "life interruption." I look forward to meeting my family in person in December 2014.
Posted in DNA
Comments Off on DNA Tests for Ethnicity & Genealogical DNA testing
About NHGRI – Genome.gov | National Human Genome Research …
Posted: at 5:53 pm
Feature video now available: The Genomic Landscape of Breast Cancer in Women of African Ancestry
On Tuesday, June 7, Olufunmilayo I. Olopade, M.D., F.A.C.P., presented The Genomic Landscape of Breast Cancer in Women of African Ancestry, the final lecture in the 2016 Genomics and Health Disparities Lecture Series. Dr. Olopade is director of the Center for Clinical Cancer Genetics at the University of Chicago School of Medicine. Read more | Watch the video
The NHGRI History of Genomics Program closed its six-part seminar series featuring Human Genome Project (HGP) participants who helped launch the HGP with the talk: The Genome is for Life, by David Bentley, D.Phil., on Thursday, May 26th. Dr. Bentley is vice president and chief scientist at Illumina Inc. His long-term research interest is the study of human sequence variation and its impact on health and disease. Read more about the series
In this issue of The Genomics Landscape, we feature the use of model organisms to explore the function of genes implicated in human disease. This month's issue also highlights a recently completed webinar series to help professionals in the health insurance industry understand genetic testing, new funding for training in genomic medicine research, and NHGRI's Genome Statute and Legislation Database. Read more
Cristina Kapusti, M.S., has been named chief of the Policy and Program Analysis Branch (PPAB) at the National Human Genome Research Institute (NHGRI). In her new role, she will oversee policy activities and evaluation as well as program reporting and assessment to support institute priorities. PPAB is a part of the Division of Policy, Communications and Education (DPCE), whose mission is to promote the understanding and application of genomic knowledge to advance human health and society. Read more
Last Updated: July 7, 2016
See the original post:
About NHGRI - Genome.gov | National Human Genome Research ...
Posted in Genome
Comments Off on About NHGRI – Genome.gov | National Human Genome Research …
Genomics – Wikipedia, the free encyclopedia
Posted: at 5:53 pm
Genomics is a discipline in genetics that applies recombinant DNA, DNA sequencing methods, and bioinformatics to sequence, assemble, and analyze the function and structure of genomes (the complete set of DNA within a single cell of an organism).[1][2] Advances in genomics have triggered a revolution in discovery-based research to understand even the most complex biological systems such as the brain.[3] The field includes efforts to determine the entire DNA sequence of organisms and fine-scale genetic mapping. The field also includes studies of intragenomic phenomena such as heterosis, epistasis, pleiotropy and other interactions between loci and alleles within the genome.[4] In contrast, the investigation of the roles and functions of single genes is a primary focus of molecular biology or genetics and is a common topic of modern medical and biological research. Research of single genes does not fall into the definition of genomics unless the aim of this genetic, pathway, and functional information analysis is to elucidate its effect on, place in, and response to the entire genome's networks.[5][6]
From the Greek γενε [7] gen, "gene" (gamma, epsilon, nu, epsilon), meaning "become, create, creation, birth", and subsequent variants: genealogy, genesis, genetics, genic, genomere, genotype, genus etc. While the word genome (from the German Genom, attributed to Hans Winkler) was in use in English as early as 1926,[8] the term genomics was coined by Tom Roderick, a geneticist at the Jackson Laboratory (Bar Harbor, Maine), over beer at a meeting held in Maryland on the mapping of the human genome in 1986.[9]
Following Rosalind Franklin's confirmation of the helical structure of DNA, James D. Watson and Francis Crick's publication of the structure of DNA in 1953 and Fred Sanger's publication of the Amino acid sequence of insulin in 1955, nucleic acid sequencing became a major target of early molecular biologists.[10] In 1964, Robert W. Holley and colleagues published the first nucleic acid sequence ever determined, the ribonucleotide sequence of alanine transfer RNA.[11][12] Extending this work, Marshall Nirenberg and Philip Leder revealed the triplet nature of the genetic code and were able to determine the sequences of 54 out of 64 codons in their experiments.[13] In 1972, Walter Fiers and his team at the Laboratory of Molecular Biology of the University of Ghent (Ghent, Belgium) were the first to determine the sequence of a gene: the gene for Bacteriophage MS2 coat protein.[14] Fiers' group expanded on their MS2 coat protein work, determining the complete nucleotide-sequence of bacteriophage MS2-RNA (whose genome encodes just four genes in 3569 base pairs [bp]) and Simian virus 40 in 1976 and 1978, respectively.[15][16]
In addition to his seminal work on the amino acid sequence of insulin, Frederick Sanger and his colleagues played a key role in the development of DNA sequencing techniques that enabled the establishment of comprehensive genome sequencing projects.[4] In 1975, he and Alan Coulson published a sequencing procedure using DNA polymerase with radiolabelled nucleotides that he called the Plus and Minus technique.[17][18] This involved two closely related methods that generated short oligonucleotides with defined 3' termini. These could be fractionated by electrophoresis on a polyacrylamide gel and visualised using autoradiography. The procedure could sequence up to 80 nucleotides in one go and was a big improvement, but was still very laborious. Nevertheless, in 1977 his group was able to sequence most of the 5,386 nucleotides of the single-stranded bacteriophage φX174, completing the first fully sequenced DNA-based genome.[19] The refinement of the Plus and Minus method resulted in the chain-termination, or Sanger method (see below), which formed the basis of the techniques of DNA sequencing, genome mapping, data storage, and bioinformatic analysis most widely used in the following quarter-century of research.[20][21] In the same year Walter Gilbert and Allan Maxam of Harvard University independently developed the Maxam-Gilbert method (also known as the chemical method) of DNA sequencing, involving the preferential cleavage of DNA at known bases, a less efficient method.[22][23] For their groundbreaking work in the sequencing of nucleic acids, Gilbert and Sanger shared half the 1980 Nobel Prize in chemistry with Paul Berg (recombinant DNA).
The advent of these technologies resulted in a rapid intensification in the scope and speed of completion of genome sequencing projects. The first complete genome sequence of a eukaryotic organelle, the human mitochondrion (16,568 bp, about 16.6 kb [kilobase]), was reported in 1981,[24] and the first chloroplast genomes followed in 1986.[25][26] In 1992, the first eukaryotic chromosome, chromosome III of brewer's yeast Saccharomyces cerevisiae (315 kb) was sequenced.[27] The first free-living organism to be sequenced was that of Haemophilus influenzae (1.8 Mb [megabase]) in 1995.[28] The following year a consortium of researchers from laboratories across North America, Europe, and Japan announced the completion of the first complete genome sequence of a eukaryote, S. cerevisiae (12.1 Mb), and since then genomes have continued being sequenced at an exponentially growing pace.[29] As of October 2011, the complete sequences are available for: 2,719 viruses, 1,115 archaea and bacteria, and 36 eukaryotes, of which about half are fungi.[30][31]
Most of the microorganisms whose genomes have been completely sequenced are problematic pathogens, such as Haemophilus influenzae, which has resulted in a pronounced bias in their phylogenetic distribution compared to the breadth of microbial diversity.[32][33] Of the other sequenced species, most were chosen because they were well-studied model organisms or promised to become good models. Yeast (Saccharomyces cerevisiae) has long been an important model organism for the eukaryotic cell, while the fruit fly Drosophila melanogaster has been a very important tool (notably in early pre-molecular genetics). The worm Caenorhabditis elegans is an often used simple model for multicellular organisms. The zebrafish Brachydanio rerio is used for many developmental studies on the molecular level, and the flower Arabidopsis thaliana is a model organism for flowering plants. The Japanese pufferfish (Takifugu rubripes) and the spotted green pufferfish (Tetraodon nigroviridis) are interesting because of their small and compact genomes, which contain very little noncoding DNA compared to most species.[34][35] The mammals dog (Canis familiaris),[36] brown rat (Rattus norvegicus), mouse (Mus musculus), and chimpanzee (Pan troglodytes) are all important model animals in medical research.[23]
A rough draft of the human genome was completed by the Human Genome Project in early 2001, creating much fanfare.[37] This project, completed in 2003, sequenced the entire genome for one specific person, and by 2007 this sequence was declared "finished" (less than one error in 20,000 bases and all chromosomes assembled).[37] In the years since then, the genomes of many other individuals have been sequenced, partly under the auspices of the 1000 Genomes Project, which announced the sequencing of 1,092 genomes in October 2012.[38] Completion of this project was made possible by the development of dramatically more efficient sequencing technologies and required the commitment of significant bioinformatics resources from a large international collaboration.[39] The continued analysis of human genomic data has profound political and social repercussions for human societies.[40]
The English-language neologism omics informally refers to a field of study in biology ending in -omics, such as genomics, proteomics or metabolomics. The related suffix -ome is used to address the objects of study of such fields, such as the genome, proteome or metabolome respectively. The suffix -ome as used in molecular biology refers to a totality of some sort; similarly omics has come to refer generally to the study of large, comprehensive biological data sets. While the growth in the use of the term has led some scientists (Jonathan Eisen, among others[41]) to claim that it has been oversold,[42] it reflects the change in orientation towards the quantitative analysis of complete or near-complete assortment of all the constituents of a system.[43] In the study of symbioses, for example, researchers who were once limited to the study of a single gene product can now simultaneously compare the total complement of several types of biological molecules.[44][45]
After an organism has been selected, genome projects involve three components: the sequencing of DNA, the assembly of that sequence to create a representation of the original chromosome, and the annotation and analysis of that representation.[4]
Historically, sequencing was done in sequencing centers, centralized facilities (ranging from large independent institutions such as Joint Genome Institute which sequence dozens of terabases a year, to local molecular biology core facilities) which contain research laboratories with the costly instrumentation and technical support necessary. As sequencing technology continues to improve, however, a new generation of effective fast turnaround benchtop sequencers has come within reach of the average academic laboratory.[46][47] On the whole, genome sequencing approaches fall into two broad categories, shotgun and high-throughput (aka next-generation) sequencing.[4]
Shotgun sequencing (Sanger sequencing is used interchangeably) is a sequencing method designed for analysis of DNA sequences longer than 1000 base pairs, up to and including entire chromosomes.[48] It is named by analogy with the rapidly expanding, quasi-random firing pattern of a shotgun. Since the chain termination method of DNA sequencing can only be used for fairly short strands (100 to 1000 base pairs), longer DNA sequences must be broken into random small segments which are then sequenced to obtain reads. Multiple overlapping reads for the target DNA are obtained by performing several rounds of this fragmentation and sequencing. Computer programs then use the overlapping ends of different reads to assemble them into a continuous sequence.[48][49] Shotgun sequencing is a random sampling process, requiring over-sampling to ensure a given nucleotide is represented in the reconstructed sequence; the average number of reads by which a genome is over-sampled is referred to as coverage.[50]
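The notion of coverage can be made quantitative: with N reads of average length L over a genome of size G, the average coverage is C = N·L/G, and under the idealized Lander-Waterman (Poisson) model the expected fraction of bases covered by no read at all is e^(-C). A brief sketch (the genome and read sizes below are illustrative, chosen to be roughly H. influenzae-sized):

```python
import math

def coverage(num_reads: int, read_len: int, genome_len: int) -> float:
    """Average coverage C = N * L / G."""
    return num_reads * read_len / genome_len

def fraction_uncovered(c: float) -> float:
    """Lander-Waterman estimate: bases hit by zero reads ~ Poisson(0) = e^-C."""
    return math.exp(-c)

# Example: a 1.8 Mb genome sequenced with 500 bp reads.
G, L = 1_800_000, 500
for n in (3_600, 18_000, 36_000):   # 1x, 5x, 10x over-sampling
    c = coverage(n, L, G)
    print(f"{n:>6} reads -> {c:4.1f}x coverage, ~{fraction_uncovered(c):.2%} uncovered")
```

Even at 1x coverage, random sampling leaves roughly a third of the genome unread, which is why substantial over-sampling is required.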
For much of its history, the technology underlying shotgun sequencing was the classical chain-termination method, which is based on the selective incorporation of chain-terminating dideoxynucleotides by DNA polymerase during in vitro DNA replication.[19][51] Developed by Frederick Sanger and colleagues in 1977, it was the most widely used sequencing method for approximately 25 years. More recently, Sanger sequencing has been supplanted by "Next-Gen" sequencing methods, especially for large-scale, automated genome analyses. However, the Sanger method remains in wide use in 2013, primarily for smaller-scale projects and for obtaining especially long contiguous DNA sequence reads (>500 nucleotides).[52] Chain-termination methods require a single-stranded DNA template, a DNA primer, a DNA polymerase, normal deoxynucleoside triphosphates (dNTPs), and modified nucleotides (dideoxy-NTPs, or ddNTPs) that terminate DNA strand elongation. These chain-terminating nucleotides lack a 3'-OH group required for the formation of a phosphodiester bond between two nucleotides, causing DNA polymerase to cease extension of DNA when a ddNTP is incorporated. The ddNTPs may be radioactively or fluorescently labelled for detection in automated sequencing machines.[4] Typically, these automated DNA-sequencing instruments (DNA sequencers) can sequence up to 96 DNA samples in a single batch (run) in up to 48 runs a day.[53]
The high demand for low-cost sequencing has driven the development of high-throughput sequencing (or next-generation sequencing [NGS]) technologies that parallelize the sequencing process, producing thousands or millions of sequences at once.[54][55] High-throughput sequencing technologies are intended to lower the cost of DNA sequencing beyond what is possible with standard dye-terminator methods. In ultra-high-throughput sequencing as many as 500,000 sequencing-by-synthesis operations may be run in parallel.[56][57]
Solexa, now part of Illumina, developed a sequencing method based on reversible dye-terminators technology acquired from Manteia Predictive Medicine in 2004. This technology had been invented and developed in late 1996 at Glaxo Wellcome's Geneva Biomedical Research Institute (GBRI), by Dr. Pascal Mayer and Dr. Laurent Farinelli.[58] In this method, DNA molecules and primers are first attached on a slide and amplified with polymerase so that local clonal colonies, initially coined "DNA colonies", are formed. To determine the sequence, four types of reversible terminator bases (RT-bases) are added and non-incorporated nucleotides are washed away. Unlike pyrosequencing, the DNA chains are extended one nucleotide at a time and image acquisition can be performed at a delayed moment, allowing for very large arrays of DNA colonies to be captured by sequential images taken from a single camera.
Decoupling the enzymatic reaction and the image capture allows for optimal throughput and theoretically unlimited sequencing capacity. With an optimal configuration, the ultimately reachable instrument throughput is thus dictated solely by the analog-to-digital conversion rate of the camera, multiplied by the number of cameras and divided by the number of pixels per DNA colony required for visualizing them optimally (approximately 10 pixels/colony). In 2012, with cameras operating at more than 10 MHz A/D conversion rates and available optics, fluidics and enzymatics, throughput can be multiples of 1 million nucleotides/second, corresponding roughly to 1 human genome equivalent at 1x coverage per hour per instrument, and 1 human genome re-sequenced (at approx. 30x) per day per instrument (equipped with a single camera). The camera takes images of the fluorescently labeled nucleotides, then the dye along with the terminal 3' blocker is chemically removed from the DNA, allowing the next cycle.[59]
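The throughput bound described above is simple arithmetic: camera A/D conversion rate, times the number of cameras, divided by pixels per colony. A quick check of the figures quoted in the text:

```python
def colonies_per_second(ad_rate_hz: float, n_cameras: int,
                        pixels_per_colony: float = 10.0) -> float:
    """Upper bound on DNA colonies imaged (~ nucleotides read) per second."""
    return ad_rate_hz * n_cameras / pixels_per_colony

# Figures from the text: a 10 MHz camera, one camera, ~10 pixels per colony.
nt_per_s = colonies_per_second(10e6, n_cameras=1)   # ~1e6 nucleotides/second
nt_per_hour = nt_per_s * 3600                       # ~3.6e9 nt: roughly one
                                                    # human genome at 1x per hour
print(f"{nt_per_s:.0e} nt/s, {nt_per_hour:.1e} nt/hour")
```

This matches the article's claim of about one human genome equivalent at 1x coverage per hour per single-camera instrument.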
Ion Torrent Systems Inc. developed a sequencing approach based on standard DNA replication chemistry. This technology measures the release of a hydrogen ion each time a base is incorporated. A microwell containing template DNA is flooded with a single nucleotide, if the nucleotide is complementary to the template strand it will be incorporated and a hydrogen ion will be released. This release triggers an ISFET ion sensor. If a homopolymer is present in the template sequence multiple nucleotides will be incorporated in a single flood cycle, and the detected electrical signal will be proportionally higher.[60]
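Because the detected signal scales with the number of identical bases incorporated in one flood cycle, Ion Torrent-style base calling can be caricatured as rounding each flow's signal to an integer run length. This deliberately simplified sketch is not the vendor's algorithm; the flow order and signal values are invented for illustration:

```python
def call_bases(flow_order: str, signals: list[float]) -> str:
    """Toy flow-space base caller: each flow's signal ~ homopolymer run length."""
    seq = []
    for base, s in zip(flow_order, signals):
        seq.append(base * round(s))  # e.g. signal ~2 at 'A' emits "AA"
    return "".join(seq)

# Signals near 0 mean no incorporation; ~2.1 at 'A' indicates a homopolymer "AA".
print(call_bases("TACG", [1.02, 2.07, 0.0, 0.96]))  # -> "TAAG"
```

The rounding step is also where the method's characteristic weakness lives: for long homopolymers the analog signal becomes harder to resolve into an exact integer count.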
Overlapping reads form contigs; contigs and gaps of known length form scaffolds.
Paired end reads of next generation sequencing data mapped to a reference genome.
Multiple, fragmented sequence reads must be assembled together on the basis of their overlapping areas.
Sequence assembly refers to aligning and merging fragments of a much longer DNA sequence in order to reconstruct the original sequence.[4] This is needed as current DNA sequencing technology cannot read whole genomes as a continuous sequence, but rather reads small pieces of between 20 and 1000 bases, depending on the technology used. Typically the short fragments, called reads, result from shotgun sequencing genomic DNA, or gene transcripts (ESTs).[4]
Assembly can be broadly categorized into two approaches: de novo assembly, for genomes which are not similar to any sequenced in the past, and comparative assembly, which uses the existing sequence of a closely related organism as a reference during assembly.[50] Relative to comparative assembly, de novo assembly is computationally difficult (NP-hard), making it less favorable for short-read NGS technologies.
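A toy version of overlap-based de novo assembly makes the idea concrete: repeatedly merge the pair of reads with the longest suffix-prefix overlap. Real assemblers use far more sophisticated graph formulations; this greedy sketch, with invented reads, is only illustrative:

```python
def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of a that is a prefix of b (>= min_len, else 0)."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads: list[str]) -> str:
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = reads[:]
    while len(reads) > 1:
        best = (0, 0, 1)  # (overlap length, index of a, index of b)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    n = overlap(a, b)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:  # no remaining overlaps: just concatenate what's left
            return "".join(reads)
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads[0]

reads = ["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG"]
print(greedy_assemble(reads))  # reconstructs "ATTAGACCTGCCGGAA"
```

Greedy merging illustrates why de novo assembly is hard: with the short reads of NGS technologies, repeats longer than a read create ambiguous overlaps that a greedy strategy resolves incorrectly, motivating the comparative and graph-based approaches mentioned above.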
Finished genomes are defined as having a single contiguous sequence with no ambiguities representing each replicon.[61]
The DNA sequence assembly alone is of little value without additional analysis.[4] Genome annotation is the process of attaching biological information to sequences, and consists of three main steps:[62]
Automatic annotation tools try to perform these steps in silico, as opposed to manual annotation (a.k.a. curation) which involves human expertise and potential experimental verification.[63] Ideally, these approaches co-exist and complement each other in the same annotation pipeline (also see below).
Traditionally, the basic level of annotation uses BLAST to find similarities, and genomes are then annotated based on homologues.[4] More recently, additional information is added to the annotation platform, allowing manual annotators to deconvolute discrepancies between genes that are given the same annotation. Some databases use genome context information, similarity scores, experimental data, and integrations of other resources to provide genome annotations through their Subsystems approach. Other databases (e.g. Ensembl) rely on both curated data sources and a range of software tools in their automated genome annotation pipeline.[64] Structural annotation consists of the identification of genomic elements, primarily ORFs and their localisation, or gene structure. Functional annotation consists of attaching biological information to genomic elements.
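The first step of structural annotation, locating ORFs, can be sketched minimally (an illustrative toy, not a production gene finder; real tools also scan the reverse strand and use statistical gene models):

```python
# Minimal ORF scanner: find stretches from a start codon (ATG) to an
# in-frame stop codon, across all three forward reading frames.

STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=2):
    """Return (start, end) intervals of forward-strand ORFs."""
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i+3] == "ATG":
                # scan forward in-frame for a stop codon
                j = i + 3
                while j + 3 <= len(seq) and seq[j:j+3] not in STOPS:
                    j += 3
                if j + 3 <= len(seq) and (j - i) // 3 >= min_codons:
                    orfs.append((i, j + 3))  # interval includes the stop
                i = j + 3  # skip past this ORF
            else:
                i += 3
    return orfs

# One ORF: ATG AAA TGA starting at position 3
print(find_orfs("CCCATGAAATGACCC"))  # -> [(3, 12)]
```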
The need for reproducibility and efficient management of the large amount of data associated with genome projects mean that computational pipelines have important applications in genomics.[65]
Functional genomics is a field of molecular biology that attempts to make use of the vast wealth of data produced by genomic projects (such as genome sequencing projects) to describe gene (and protein) functions and interactions. Functional genomics focuses on the dynamic aspects such as gene transcription, translation, and protein-protein interactions, as opposed to the static aspects of the genomic information such as DNA sequence or structures. Functional genomics attempts to answer questions about the function of DNA at the levels of genes, RNA transcripts, and protein products. A key characteristic of functional genomics studies is their genome-wide approach to these questions, generally involving high-throughput methods rather than a more traditional gene-by-gene approach.
A major branch of genomics is still concerned with sequencing the genomes of various organisms, but the knowledge of full genomes has created the possibility for the field of functional genomics, mainly concerned with patterns of gene expression during various conditions. The most important tools here are microarrays and bioinformatics.
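The kind of comparison such expression studies make can be sketched as follows (a hedged illustration with invented gene names and values; real analyses add replicates, normalization, and statistical testing):

```python
# Compare gene expression between two conditions by log2 fold change,
# the basic quantity behind microarray and RNA-seq comparisons.

import math

def log2_fold_changes(control, treated):
    """Map gene -> log2(treated/control) for genes measured in both."""
    return {g: math.log2(treated[g] / control[g])
            for g in control if g in treated}

# Hypothetical expression values under two conditions
control = {"geneA": 100.0, "geneB": 50.0, "geneC": 80.0}
treated = {"geneA": 400.0, "geneB": 50.0, "geneC": 20.0}

fc = log2_fold_changes(control, treated)
up = [g for g, v in fc.items() if v >= 1]     # at least 2-fold up
down = [g for g, v in fc.items() if v <= -1]  # at least 2-fold down
print(up, down)  # geneA is up 4-fold, geneC is down 4-fold
```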
Structural genomics seeks to describe the 3-dimensional structure of every protein encoded by a given genome.[66][67] This genome-based approach allows for a high-throughput method of structure determination by a combination of experimental and modeling approaches. The principal difference between structural genomics and traditional structural prediction is that structural genomics attempts to determine the structure of every protein encoded by the genome, rather than focusing on one particular protein. With full-genome sequences available, structure prediction can be done more quickly through a combination of experimental and modeling approaches, especially because the availability of large numbers of sequenced genomes and previously solved protein structures allow scientists to model protein structure on the structures of previously solved homologs. Structural genomics involves taking a large number of approaches to structure determination, including experimental methods using genomic sequences or modeling-based approaches based on sequence or structural homology to a protein of known structure or based on chemical and physical principles for a protein with no homology to any known structure. As opposed to traditional structural biology, the determination of a protein structure through a structural genomics effort often (but not always) comes before anything is known regarding the protein function. This raises new challenges in structural bioinformatics, i.e. determining protein function from its 3D structure.[68]
Epigenomics is the study of the complete set of epigenetic modifications on the genetic material of a cell, known as the epigenome.[69] Epigenetic modifications are reversible modifications on a cell's DNA or histones that affect gene expression without altering the DNA sequence (Russell 2010 p.475). Two of the most characterized epigenetic modifications are DNA methylation and histone modification. Epigenetic modifications play an important role in gene expression and regulation, and are involved in numerous cellular processes such as in differentiation/development and tumorigenesis.[69] The study of epigenetics on a global level has been made possible only recently through the adaptation of genomic high-throughput assays.[70]
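The basic per-site quantity such high-throughput assays report can be illustrated simply (a sketch with an invented data layout; bisulfite sequencing pipelines produce essentially this, a per-CpG fraction of methylated reads):

```python
# Per-site DNA methylation level: the fraction of sequencing reads in
# which a given CpG site is observed as methylated.

def methylation_level(calls):
    """`calls` is a list of per-read booleans (True = methylated C)."""
    return sum(calls) / len(calls)

# Three hypothetical CpG sites, each covered by several reads
sites = {
    "chr1:1000": [True, True, True, False],
    "chr1:1500": [False, False, False, False],
    "chr1:2000": [True, True, True, True],
}
levels = {site: methylation_level(c) for site, c in sites.items()}
print(levels)  # chr1:1000 -> 0.75 (partially methylated)
```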
Metagenomics is the study of metagenomes, genetic material recovered directly from environmental samples. The broad field may also be referred to as environmental genomics, ecogenomics or community genomics. While traditional microbiology and microbial genome sequencing rely upon cultivated clonal cultures, early environmental gene sequencing cloned specific genes (often the 16S rRNA gene) to produce a profile of diversity in a natural sample. Such work revealed that the vast majority of microbial biodiversity had been missed by cultivation-based methods.[71] Recent studies use "shotgun" Sanger sequencing or massively parallel pyrosequencing to get largely unbiased samples of all genes from all the members of the sampled communities.[72] Because of its power to reveal the previously hidden diversity of microscopic life, metagenomics offers a powerful lens for viewing the microbial world that has the potential to revolutionize understanding of the entire living world.[73][74]
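A minimal sketch of the diversity profiling described above: given counts of 16S rRNA sequences assigned to taxa in an environmental sample, a standard summary is the Shannon diversity index H = -sum(p_i * ln p_i). (The taxon names and counts here are invented for illustration.)

```python
# Shannon diversity of a community profile built from 16S rRNA counts.

import math

def shannon_diversity(counts):
    """Shannon index H from a dict of taxon -> read count."""
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total)
                for n in counts.values() if n > 0)

# Hypothetical marine sample: two abundant taxa, one rarer taxon
sample = {"Prochlorococcus": 40, "Synechococcus": 40, "SAR11": 20}
print(round(shannon_diversity(sample), 3))  # -> 1.055
```

A sample dominated by a single cultivable taxon scores near zero, which is one way cultivation-based surveys understated the diversity that sequencing later revealed.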
Bacteriophages have played and continue to play a key role in bacterial genetics and molecular biology. Historically, they were used to define gene structure and gene regulation. Also the first genome to be sequenced was a bacteriophage. However, bacteriophage research did not lead the genomics revolution, which is clearly dominated by bacterial genomics. Only very recently has the study of bacteriophage genomes become prominent, thereby enabling researchers to understand the mechanisms underlying phage evolution. Bacteriophage genome sequences can be obtained through direct sequencing of isolated bacteriophages, but can also be derived as part of microbial genomes. Analysis of bacterial genomes has shown that a substantial amount of microbial DNA consists of prophage sequences and prophage-like elements.[75] A detailed database mining of these sequences offers insights into the role of prophages in shaping the bacterial genome.[76][77]
At present there are 24 cyanobacteria for which a total genome sequence is available. Fifteen of these cyanobacteria come from the marine environment: six Prochlorococcus strains, seven marine Synechococcus strains, Trichodesmium erythraeum IMS101 and Crocosphaera watsonii WH8501. Several studies have demonstrated how these sequences could be used very successfully to infer important ecological and physiological characteristics of marine cyanobacteria. Many more genome projects are currently in progress, among them further Prochlorococcus and marine Synechococcus isolates, Acaryochloris and Prochloron, the N2-fixing filamentous cyanobacteria Nodularia spumigena, Lyngbya aestuarii and Lyngbya majuscula, as well as bacteriophages infecting marine cyanobacteria. Thus, the growing body of genome information can also be tapped in a more general way to address global problems by applying a comparative approach. Some new and exciting examples of progress in this field are the identification of genes for regulatory RNAs, insights into the evolutionary origin of photosynthesis, and estimation of the contribution of horizontal gene transfer to the genomes that have been analyzed.[78]
Genomics has provided applications in many fields, including medicine, biotechnology, anthropology and other social sciences.[40]
Next-generation genomic technologies allow clinicians and biomedical researchers to drastically increase the amount of genomic data collected on large study populations.[79] When combined with new informatics approaches that integrate many kinds of data with genomic data in disease research, this allows researchers to better understand the genetic bases of drug response and disease.[80][81]
The growth of genomic knowledge has enabled increasingly sophisticated applications of synthetic biology.[82] In 2010 researchers at the J. Craig Venter Institute announced the creation of a partially synthetic species of bacterium, Mycoplasma laboratorium, derived from the genome of Mycoplasma genitalium.[83]
Conservationists can use the information gathered by genomic sequencing in order to better evaluate genetic factors key to species conservation, such as the genetic diversity of a population or whether an individual is heterozygous for a recessive inherited genetic disorder.[84] By using genomic data to evaluate the effects of evolutionary processes and to detect patterns in variation throughout a given population, conservationists can formulate plans to aid a given species without as many variables left unknown as those unaddressed by standard genetic approaches.[85]
Source: Genomics - Wikipedia, the free encyclopedia
Human Genetics | University of Michigan, Ann Arbor
Posted: July 8, 2016 at 7:42 am
The Department of Human Genetics is dedicated to basic scientific research in human genetics and genetic disease, as well as the training of the next generation of scientists and health care providers.
Our faculty explore three broad areas of human genetics: molecular genetics, genetic disease, and statistical/population genetics. Within molecular genetics, research groups study DNA repair and recombination, genome instability, gene function and regulation, epigenetics, RNA modification and control, and genomic systems. Research in human genetic disease emphasizes the genetics of development, neurogenetics, stem cell biology, medical genetics, reproductive sciences, and the genetics of cancer. Evolutionary and population genetics research includes statistical tools for genetics, genetic epidemiology, and genetic mapping of complex traits and diseases.
We invite you to explore our faculty, students, graduate programs, courses, and events/seminars.
Wildschutte JH, Williams ZH, Montesion M, Subramanian RP, Kidd JM, Coffin JM. Discovery of unfixed endogenous retrovirus insertions in diverse human populations. Published online PNAS March 21, 2016 http://www.pnas.org/content/early/2016/03/16/1602336113
Iwase S, Brookes E, Agarwal S, Badeaux AI, Ito H, Vallianatos CN, Tomassy GS, Kasza T, Lin G, Thompson A, Gu L, Kwan KY, Chen C, Sartor MA, Egan B, Xu J, Shi Y. A mouse model of X-linked intellectual disability associated with impaired removal of histone methylation. Cell Rep. 2016 Feb 9;14(5):1000-9. [PubMed]
Lenk GM, Frei CM, Miller AC, Wallen RC, Mironova YA, Giger RJ, Meisler MH. Rescue of neurodegeneration in the Fig4 null mouse by a catalytically inactive FIG4 transgene. Hum Mol Genet. 2016 Jan 15;25(2):340-7. [PubMed]
National Human Genome Research Institute
Posted: July 5, 2016 at 11:30 pm
The Genomics Landscape: The Power of Model Organisms for Studying Rare Diseases. In this issue of The Genomics Landscape, we feature the use of model organisms to explore the function of genes implicated in human disease. This month's issue also highlights a recently completed webinar series to help professionals in the health insurance industry understand genetic testing, new funding for training in genomic medicine research, and NHGRI's Genome Statute and Legislation Database.

New training grants prime doctors to tackle genomic medicine. The practice of medicine is expensive and doesn't fit in a one-hour time frame. Tests can only eliminate one diagnosis at a time. Questioning and family history can help a doctor arrive at the correct diagnosis, but even with the information gathered upfront, there are a huge number of tests to consider, and many tests may still be needed. Training doctors to use genomic sequencing is a powerful solution to the challenges facing today's medical practice.

One little fish hooks genome researchers with its versatility. Modern molecular biology and the genome of a tiny silver and black striped fish, the zebrafish, are making waves in genomics research. This tiny fish is a powerhouse tool that helps researchers better understand the genes that are implicated in disease. Here at the National Human Genome Research Institute (NHGRI), researchers are working to advance human health by coupling the potential of this little fish with an institute-funded resource known as The Zebrafish Core.

New NIH studies seek adults and families affected by sickle cell disease/trait. People with sickle cell disease (SCD) can experience excruciating pain, kidney problems, a higher risk of stroke and, in rare cases, chronic leg ulcers. Little is known about why the severity of these symptoms varies throughout a lifetime or why these symptoms differ from person to person. NHGRI researchers are seeking help from people affected by SCD to find the factors - environmental, social and genetic - that impact the severity of the symptoms.

Video now available: The Genomic Landscape of Breast Cancer in Women of African Ancestry. On June 7, Olufunmilayo I. Olopade, M.D., F.A.C.P., presented The Genomic Landscape of Breast Cancer in Women of African Ancestry, the final lecture in the 2016 Genomics and Health Disparities Lecture Series. Dr. Olopade is director of the Center for Clinical Cancer Genetics at the University of Chicago School of Medicine. She is an expert in cancer risk assessment and treatment for aggressive forms of breast cancer.
Redesigning the World: Ethical Questions About Genetic …
Posted: July 3, 2016 at 6:39 pm
Redesigning the World Ethical Questions about Genetic Engineering
Ron Epstein 1
INTRODUCTION
Until the demise of the Soviet Union, we lived under the daily threat of nuclear holocaust extinguishing human life and the entire biosphere. Now it looks more likely that total destruction will be averted, and that widespread, but not universally fatal, damage will continue to occur from radiation accidents from power plants, aging nuclear submarines, and perhaps the limited use of tactical nuclear weapons by governments or terrorists.
What has gone largely unnoticed is the unprecedented lethal threat of genetic engineering to life on the planet. It now seems likely, unless a major shift in international policy occurs quickly, that the major ecosystems that support the biosphere are going to be irreversibly disrupted, and that genetically engineered viruses may very well lead to the eventual demise of almost all human life. In the course of the major transformations that are on the way, human beings will be transformed, both intentionally and unintentionally, in ways that will make us something different than what we now consider human.
Heedless of the dangers, we are rushing full speed ahead on almost all fronts. Some of the most powerful multinational chemical, pharmaceutical and agricultural corporations have staked their financial futures on genetic engineering. Enormous amounts of money are already involved, and the United States government is currently bullying the rest of the world into rapid acceptance of corporate demands concerning genetic engineering research and marketing.
WHAT IS GENETIC ENGINEERING?
What are genes?
Genes are often described as 'blueprints' or 'computer programs' for our bodies and all living organisms. Although it is true that genes are specific sequences of DNA (deoxyribonucleic acid) that are central to the production of proteins, contrary to popular belief and the now outmoded standard genetic model, genes do not directly determine the 'traits' of an organism.1a They are a single factor among many. They provide the 'list of ingredients' which is then organized by the 'dynamical system' of the organism. That 'dynamical system' determines how the organism is going to develop. In other words, a single gene does not, in most cases, exclusively determine either a single feature of our bodies or a single aspect of our behavior. A recipe of ingredients alone does not create a dish of food. A chef must take those ingredients and subject them to complex processes which will determine whether the outcome is mediocre or of gourmet quality. So too the genes are processed through the self-organizing ('dynamical') system of the organism: a complex combination of genes is subjected to a variety of environmental factors, which together lead to the final results, whether somatic or behavioral.2
a gene is not an easily identifiable and tangible object. It is not only the DNA sequence which determines its functions in the organisms, but also its location in a specific chromosomal, cellular, physiological and evolutionary context. It is therefore difficult to predict the impact of genetic material transfer on the functioning of the extremely tightly controlled, integrated and balanced functioning of all the tens of thousands of structures and processes that make up the body of any complex organism.3
Genetic engineering refers to the artificial modification of the genetic code of a living organism. Genetic engineering changes the fundamental physical nature of the organism, sometimes in ways that would never occur in nature. Genes from one organism are inserted in another organism, most often across natural species boundaries. Some of the effects become known, but most do not. The effects of genetic engineering which we know are usually short-term, specific and physical. The effects we do not know are often long-term, general, and also mental. Long-term effects may be either specific4 or general.
Differences between Bioengineering and Breeding
The breeding of animals and plants speeds up the natural processes of gene selection and mutation that occur in nature to select new species that have specific use to humans. Although the selecting of those species interferes with the natural selection process that would otherwise occur, the processes utilized are found in nature. For example, horses are bred to run fast without regard for how those thoroughbreds would be able to survive in the wild. There are problems with stocking streams with farmed fish because they tend to crowd out natural species, be less resistant to disease, and spread disease to wild fish.5
The breeding work of people like Luther Burbank led to the introduction of a whole range of tasty new fruits. At the University of California at Davis square tomatoes with tough skins were developed for better packing and shipping. Sometimes breeding goes awry. Killer bees are an example. Another example is the 1973 corn blight that killed a third of the crop that year. It was caused by a newly bred corn cultivar that was highly susceptible to a rare variant of a common leaf fungus.6
Bioengineers often claim that they are just speeding up the processes of natural selection and making the age-old practices of breeding more efficient. In some cases that may be true, but in most instances the gene changes that are engineered would never occur in nature, because they cross natural species barriers.
HOW GENETIC ENGINEERING IS CURRENTLY USED
Here is a brief summary of some of the more important, recent developments in genetic engineering.7
1) Most of the genetic engineering now being used commercially is in the agricultural sector. Plants are genetically engineered to be resistant to herbicides, to have built in pesticide resistance, and to convert nitrogen directly from the soil. Insects are being genetically engineered to attack crop predators. Research is ongoing in growing agricultural products directly in the laboratory using genetically engineered bacteria. Also envisioned is a major commercial role for genetically engineered plants as chemical factories. For example, organic plastics are already being produced in this manner.8
2) Genetically engineered animals are being developed as living factories for the production of pharmaceuticals and as sources of organs for transplantation into humans. (New animals created through the process of cross-species gene transfer are called xenografts. The transplanting of organs across species is called xenotransplantation.) A combination of genetic engineering and cloning is leading to the development of animals for meat with less fat, etc. Fish are being genetically engineered to grow larger and more rapidly.
3) Many pharmaceutical drugs, including insulin, are already genetically engineered in the laboratory. Many enzymes used in the food industry, including rennet used in cheese production, are also available in genetically engineered form and are in widespread use.
4) Medical researchers are genetically engineering disease carrying insects so that their disease potency is destroyed. They are genetically engineering human skin9 and soon hope to do the same with entire organs and other body parts.
5) Genetic screening is already used to screen for some hereditary conditions. Research is ongoing in the use of gene therapy in the attempt to correct some of these conditions. Other research is focusing on techniques to make genetic changes directly in human embryos. Most recently, research has also focused on combining cloning with genetic engineering. In so-called germline therapy, the genetic changes are passed on from generation to generation and are permanent.
6) In mining, genetically engineered organisms are being developed to extract gold, copper, etc. from the substances in which it is embedded. Other organisms may someday live on the methane gas that is a lethal danger to miners. Still others have been genetically engineered to clean up oil spills, to neutralize dangerous pollutants, and to absorb radioactivity. Genetically engineered bacteria are being developed to transform waste products into ethanol for fuel.
SOME DISTINGUISHED SCIENTISTS' OPINIONS
In the 1950's, the media was full of information about the great new scientific miracle that was going to make it possible to kill all of the noxious insects in the world, to wipe out insect-borne diseases and feed the world's starving masses. That was DDT. In the 1990's, the media is full of information about the coming wonders of genetic engineering. Everywhere are claims that genetic engineering will feed the starving, help eliminate disease, and so forth. The question is the price tag. The ideas and evidence presented below are intended to help evaluate that central question.
Many prominent scientists have warned against the dangers of genetic engineering. George Wald, Nobel Prize-winning biologist and Harvard professor, wrote:
Recombinant DNA technology [genetic engineering] faces our society with problems unprecedented not only in the history of science, but of life on the Earth. It places in human hands the capacity to redesign living organisms, the products of some three billion years of evolution.
Such intervention must not be confused with previous intrusions upon the natural order of living organisms; animal and plant breeding, for example; or the artificial induction of mutations, as with X-rays. All such earlier procedures worked within single or closely related species. The nub of the new technology is to move genes back and forth, not only across species lines, but across any boundaries that now divide living organisms. The results will be essentially new organisms. Self-perpetuating and hence permanent. Once created, they cannot be recalled.
Up to now living organisms have evolved very slowly, and new forms have had plenty of time to settle in. Now whole proteins will be transposed overnight into wholly new associations, with consequences no one can foretell, either for the host organism or their neighbors.
It is all too big and is happening too fast. So this, the central problem, remains almost unconsidered. It presents probably the largest ethical problem that science has ever had to face. Our morality up to now has been to go ahead without restriction to learn all that we can about nature. Restructuring nature was not part of the bargain. For going ahead in this direction may be not only unwise but dangerous. Potentially, it could breed new animal and plant diseases, new sources of cancer, novel epidemics.10
Erwin Chargaff, an eminent geneticist who is sometimes called the father of modern microbiology, commented:
The principle question to be answered is whether we have the right to put an additional fearful load on generations not yet born. I use the adjective 'additional' in view of the unresolved and equally fearful problem of the disposal of nuclear waste. Our time is cursed with the necessity for feeble men, masquerading as experts, to make enormously far-reaching decisions. Is there anything more far-reaching than the creation of forms of life? You can stop splitting the atom; you can stop visiting the moon; you can stop using aerosols; you may even decide not to kill entire populations by the use of a few bombs. But you cannot recall a new form of life. Once you have constructed a viable E. coli cell carrying a plasmid DNA into which a piece of eukaryotic DNA has been spliced, it will survive you and your children and your children's children. An irreversible attack on the biosphere is something so unheard-of, so unthinkable to previous generations, that I could only wish that mine had not been guilty of it.11
It appears that the recombination experiments in which a piece of animal DNA is incorporated into the DNA of a microbial plasmid are being performed without a full appreciation of what is going on. Is the position of one gene with respect to its neighbors on the DNA chain accidental or do they control and regulate each other? Are we wise in getting ready to mix up what nature has kept apart, namely the genomes of eukaryotic and prokaryotic cells?
The worst is that we shall never know. Bacteria and viruses have always formed a most effective biological underground. The guerrilla warfare through which they act on higher forms of life is only imperfectly understood. By adding to this arsenal freakish forms of life (prokaryotes propagating eukaryotic genes) we shall be throwing a veil of uncertainties over the life of coming generations. Have we the right to counteract, irreversibly, the evolutionary wisdom of millions of years, in order to satisfy the ambition and curiosity of a few scientists?
This world is given to us on loan. We come and we go; and after a time we leave earth and air and water to others who come after us. My generation, or perhaps the one preceding mine, has been the first to engage, under the leadership of the exact sciences, in a destructive colonial warfare against nature. The future will curse us for it.12
In contrast, here are two examples of prominent scientists who support genetic engineering. Co-discoverer of the DNA code and Nobel Laureate Dr. James D. Watson takes this approach:
On the possible diseases created by recombinant DNA, Watson wrote in March 1979: 'I would not spend a penny trying to see if they exist' (Watson 1979:113). Watson's position is that we must go ahead until we experience serious disadvantages. We must take the risk of even a catastrophe that might be hidden in recombinant DNA technology. According to him that is how learning works: until a tiger devours you, you don't know that the jungle is dangerous.13
What is wrong with Watson's analogy? If Watson wants to go off into the jungle and put himself at risk of being eaten by a tiger, that is his business. What gives him the right to drag us all with him and put us at risk of being eaten? When genetically engineered organisms are released into the environment, they put us all at risk, not just their creators.
The above statement by a great scientist clearly shows that we cannot depend on the high priests of science to make our ethical decisions for us. Too much is at stake. Not all geneticists are so cavalier or unclear about the risks. Unfortunately the ones who see or care about the potential problems are in the minority. That is not really surprising, because many who did see some of the basic problems would either switch fields or not enter it in the first place. Many of those who are in it have found a fascinating playground, not only in which to earn a livelihood, but also one with high-stake prizes of fame and fortune.
Watson himself saw some of the problems clearly when he stated:
This [genetic engineering] is a matter far too important to be left solely in the hands of the scientific and medical communities. The belief that science always moves forward represents a form of laissez-faire nonsense dismally reminiscent of the credo that American business if left to itself will solve everybody's problems. Just as the success of a corporate body in making money need not set the human condition ahead, neither does every scientific advance automatically make our lives more 'meaningful'.14
Although not a geneticist, Stephen Hawking, the world-renowned physicist and cosmologist and Lucasian Professor of Mathematics at Cambridge University in England (a post once held by Sir Isaac Newton), has commented often and publicly on the future role of genetic engineering. For example:
Hawking, known mostly for his theories about the Big Bang and black holes, is focusing a lot these days on how humanity fits into the future of the universe, if indeed it fits at all. One possibility he suggests is that once an intelligent life form reaches the stage we're at now, it proceeds to destroy itself. He's an optimist, however, preferring the notion that people will alter DNA, redesigning the race to minimize our aggressive nature and give us a better chance at long-term survival. "Humans will change their genetic makeup to give them more intelligence and better memory," he said.15
Hawking assumes that, even though humans are about to destroy themselves, they have the wisdom to know how to redesign themselves. If that were the case, why would we be about to destroy ourselves in the first place? Is Hawking assuming that genes control IQ and memory, and that they are equivalent to wisdom, or is Hawking claiming there is a wisdom gene? All these assumptions are extremely dubious. The whole notion that we can completely understand what it means to be human with a small part of our intellect, which is in turn a small part of who we are, is, in its very nature, extremely suspect. If we attempt to transform ourselves in the image of a small part of ourselves, what we transform ourselves into will certainly be something smaller, or at least a serious distortion of our human nature.
Those questions aside, Hawking does make explicit that, for the first time in history, natural evolution has come to an end and has been replaced by humans meddling with their own genetic makeup. With genetic engineering science has moved from exploring the natural world and its mechanisms to redesigning them. This is a radical departure in the notion of what we mean by science. As Nobel Prize winning biologist Professor George Wald was quoted above as saying: "Our morality up to now has been to go ahead without restriction to learn all that we can about nature. Restructuring nature was not part of the bargain."16
Hawking's views illustrate that even brilliant scientists, whose understanding of science should be impeccable, can get caught in the web of scientism. "Scientism"17 refers to the extending of science beyond the use of the scientific method and wrongly attempting to use it as the foundation for belief systems. Scientism promotes the myth that science is the sole source of truth about ourselves and the world we live in.
Most scientific research is dependent on artificial closed system models, yet the cosmos is an open system. Therefore, there are a priori limitations to the relevance of scientific data to the open system of the natural world. What seems to be the case in the laboratory may or may not be valid in the natural world.17a Therefore, we cannot know through scientific methodology the full extent of the possible effects of genetic alterations in living creatures.18
If science is understood in terms of hypotheses from data collected according to scientific method, then the claims of Hawking in the name of science extend far beyond what science actually is. He is caught in an unconscious web of presuppositions and values that deeply affect both his hypotheses and his interpretation of data. It is not only Hawking who is caught in this web but all of us, regardless of our philosophical positions, because scientism is part of our cultural background that is very hard to shake. We all have to keep in mind that there is more to the world than what our current crop of scientific instruments can detect.
Hawking's notions are at least altruistic. Perhaps more dangerous in the short run are projected commercial applications of so-called 'designer genes': gene alterations to change the physical appearance of our offspring to more closely match cultural values and styles. When we change the eye-color, height, weight, and other bodily characteristics of our offspring, how do we know what else is also being changed? Genes are not isolated units that have simple one-to-one correspondences.19
SOME SPECIFIC DIFFICULTIES WITH GENETIC ENGINEERING
Here are a few examples of current efforts in genetic engineering that may cause us to think twice about its rosy benefits.
The Potential of Genetic Engineering for Disrupting the Natural Ecosystems of the Biosphere
At a time when an estimated 50,000 species are already expected to become extinct every year, any further interference with the natural balance of ecosystems could cause havoc. Genetically engineered organisms, with their completely new and unnatural combinations of genes, have a unique power to disrupt our environment. Since they are living, they are capable of reproducing, mutating and moving within the environment. As these new life forms move into existing habitats they could destroy nature as we know it, causing long term and irreversible changes to our natural world.20
Any child who has had an aquarium knows that the fish, plants, snails, and food have to be kept in balance to keep the water clear and the fish healthy. Natural ecosystems are more complex but operate in a similar manner. Nature, whether we consider it to be conscious or without consciousness, is a self-organizing system with its own mechanisms.21 In order to guarantee the long-term viability of the system, those mechanisms ensure that important equilibria are maintained. Lately the extremes of human environmental pollution and other human activities have been putting deep strains on those mechanisms. Nonetheless, just as we can clearly see when the aquarium is out of kilter, we can learn to sensitize ourselves to Nature's warnings and know when we are endangering Nature's mechanisms for maintaining equilibria. We can see an aquarium clearly. Unfortunately, because of the limitations of our senses in detecting unnatural and often invisible change, we may not become aware of serious dangers to the environment until widespread damage has already been done.
Deep ecology22 and Gaia theory have brought to general awareness the interactive and interdependent quality of environmental systems.22a No longer do we believe that isolated events occur in nature. Each event is part of a vast web of inter-causality, and as such has widespread consequences within that ecosystem.
If we accept the notion that the biosphere has its own corrective mechanisms, then we have to look at how they work and the limitations of their design. The more extreme the disruption to the self-organizing systems of the biosphere, the stronger the corrective measures must be. The notion that the systems can ultimately deal with any threat, however extreme, is without scientific basis. No evidence exists that the life and welfare of human beings have priority in those self-organizing systems. Nor does any evidence exist that anything in those systems is equipped to deal with all the threats that genetically engineered organisms may pose. Why? The organisms are not in the experience of the systems, because they could never occur naturally as a threat. The basic problem is a denial on the part of many geneticists that genetically engineered organisms are radical, new, and unnatural forms of life, which, as such, have no place in the evolutionarily balanced biosphere.
Viruses
Plant, animal and human viruses play a major role in the ecosystems that comprise the biosphere. They are thought by some to be one of the primary factors in evolutionary change. Viruses have the ability to enter the genetic material of their hosts, to break apart, and then to recombine with the genetic material of the host to create new viruses. Those new viruses then infect new hosts, and, in the process, transfer new genetic material to the new host. When the host reproduces, genetic change has occurred.
If cells are genetically engineered, when viruses enter the cells, whether human, animal, or plant, then some of the genetically engineered material can be transferred to the newly created viruses and spread to the viruses' new hosts. We can assume that ordinary viruses, no matter how deadly, if naturally produced, have a role to play in an ecosystem and are regulated by that ecosystem. Difficulties can occur when humans carry them out of their natural ecosystems; nonetheless, all ecosystems in the biosphere may presumably share certain defense characteristics. Since viruses that contain genetically engineered material could never naturally arise in an ecosystem, there is no guarantee of natural defenses against them. They then can lead to widespread death of humans, animals or plants, thereby temporarily or even permanently damaging the ecosystem. Widespread die-off of a plant species is not an isolated event but can affect its whole ecosystem. For many, this may be a rather theoretical concern. The distinct possibility of the widespread die-off of human beings from genetically engineered viruses may command more attention.23
Biowarfare
Secret work is going forward in many countries to develop genetically engineered bacteria and viruses for biological warfare. International terrorists have already begun seriously considering their use. They are almost impossible to regulate, because the same equipment and technology that are used commercially can easily and quickly be transferred to military application.
The former Soviet Union had 32,000 scientists working on biowarfare, including military applications of genetic engineering. No one knows where most of them have gone, or what they have taken with them. Among the more interesting probable developments of their research were smallpox viruses engineered either with equine encephalitis or with Ebola virus. In one laboratory, despite the most stringent containment standards, a virulent strain of pneumonia, which had been stolen from the United States military, infected wild rats living in the building; the rats then escaped into the wild.24
There is also suggestive evidence that much of the so-called Gulf War Syndrome may have been caused by a genetically engineered biowarfare agent which is contagious after a relatively long incubation period. Fortunately that particular organism seems to respond to antibiotic treatment.25 What is going to happen when the organisms are purposely engineered to resist all known treatment?
Nobel laureate in genetics and president emeritus of Rockefeller University Joshua Lederberg has been at the forefront of those concerned about international control of biological weapons. Yet when I wrote Dr. Lederberg for information about ethical problems in the use of genetic engineering in biowarfare, he replied, "I don't see how we'd be talking about the ethics of genetic engineering, any more than that of iron smelting - which can be used to build bridges or guns."26 Like most scientists, Lederberg fails to acknowledge that scientific researchers have a responsibility for the use to which their discoveries are put. Thus he also fails to recognize that once the genie is out of the bottle, you cannot coax it back in. In other words, research in genetic engineering naturally leads to its employment for biowarfare, so that before any research in genetic engineering is undertaken, its potential use in biowarfare should be clearly evaluated. After they became aware of the horrors of nuclear war, many of the scientists who worked in the Manhattan Project, which developed the first atomic bomb, underwent terrible anguish and soul-searching. It is surprising that more geneticists do not see the parallels.
After reading about the dangers of genetic engineering in biowarfare, the president of the United States, Bill Clinton, became extremely concerned, and, in the spring of 1998, made civil defense countermeasures a priority. Yet, his administration has systematically opposed all but the most rudimentary safety regulations and restrictions for the biotech industry. By doing so, Clinton has unwittingly created a climate in which the production of the weapons he is trying to defend against has become very easy for both governments and terrorists.27
Plants
New crops may breed with wild relatives or cross breed with related species. The "foreign" genes could spread throughout the environment causing unpredicted changes which will be unstoppable once they have begun. Entirely new diseases may develop in crops or wild plants. Foreign genes are designed to be carried into other organisms by viruses which can break through species barriers, and overcome an organism's natural defenses. This makes them more infectious than naturally existing parasites, so any new viruses could be even more potent than those already known.
Ordinary weeds could become "Super-weeds": Plants engineered to be herbicide resistant could become so invasive they are a weed problem themselves, or they could spread their resistance to wild weeds making them more invasive. Fragile plants may be driven to extinction, reducing nature's precious biodiversity. Insects could be impossible to control. Making plants resistant to chemical poisons could lead to a crisis of "super pests" if they also take on the resistance to pesticides.
The countryside may suffer even greater use of herbicides and pesticides: Because farmers will be able to use these toxic chemicals with impunity, their use may increase, threatening more pollution of water supplies and degradation of soils.
Plants developed to produce their own pesticide could harm non-target species such as birds, moths and butterflies. No one - including the genetic scientists - knows for sure the effect releasing new life forms will have on the environment. They do know that all of the above are possible and irreversible, but they still want to carry out their experiment. THEY get giant profits. All WE get is a new and uncertain environment - an end to the world as we know it.29
When genetically engineered crops are grown for a specific purpose, they cannot be easily isolated both from spreading into the wild and from cross-pollinating with wild relatives. It has already been shown30 that cross-pollination can take place almost a mile away from the genetically engineered plantings. As has already occurred with noxious weeds and exotics, human beings, animals and birds may accidentally carry the genetically engineered seeds far vaster distances. Spillage in transport and at processing factories is also inevitable. The genetically engineered plants can then force out plant competitors and thus radically change the balance of ecosystems or even destroy them.
Under current United States government regulations, companies that are doing field-testing of genetically engineered organisms need not inform the public of what genes have been added to the organisms they are testing. They can be declared trade secrets, so that the public safety is left to the judgment of corporate scientists and government regulators, many of whom switch back and forth between working for the government and working for the corporations they supposedly regulate.31 Those who come from academic positions often have large financial stakes in biotech companies,32 and major universities are making agreements with biotech corporations that compromise academic freedom and give patent rights to the corporations. As universities become increasingly dependent on major corporations for funding, the majority of university scientists will no longer be able to function as independent, objective experts in matters concerning genetic engineering and public safety.32a
Scientists have already demonstrated the transfer of transgenes and marker genes to both bacterial pathogens and to soil fungi. That means genetically engineered organisms are going to enter the soil and spread to whatever grows in it. Genetically engineered material can migrate from the roots of plants into soil bacteria, in at least one case radically inhibiting the ability of the soil to grow plants.33 Once the bacteria are free in the soil, no natural barriers inhibit their spread. With ordinary soil pollution, the pollution can be confined and removed (unless it reaches the ground-water). If genetically engineered soil bacteria spread into the wild, the ability of the soil to support plant life may seriously diminish.33a It does not take much imagination to see what the disastrous consequences might be.
Water and air are also subject to poisoning by genetically engineered viruses and bacteria.
The development of new genetically engineered crops with herbicide resistance will affect the environment through the increased use of chemical herbicides. Monsanto and other major international chemical, pharmaceutical, and agricultural corporations have staked their financial futures on genetically engineered herbicide-resistant plants.33b
Recently scientists have found a way to genetically engineer plants so that their seeds lose their viability unless sprayed with patented formulae, most of which turn out to have antibiotics as their primary ingredient. The idea is to keep farmers from collecting genetically engineered seed, thus forcing them to buy it every year. The corporations involved are unconcerned about the gene escaping into the wild, with obvious disastrous results, even though that is a clear scientific possibility.34
So that we would not have to be dependent on petroleum-based plastics, some scientists have genetically engineered plants that produce plastic within their stem structures. They claim that it biodegrades in about six months.35 If the genes escape into the wild, through cross-pollination with wild relatives or by other means, then we face the prospect of natural areas littered with the plastic spines of decayed leaves. However aesthetically repugnant that may seem, the plastic also poses a real danger. It has the potential for disrupting entire food-chains. It can be eaten by invertebrates, which are in turn eaten, and so forth. If primary foods are inedible or poisonous, then whole food-chains can die off.36
Another bright idea was to genetically engineer plants with scorpion toxin, so that insects feeding on the plants would be killed. Even though a prominent geneticist warned that the genes could be horizontally transferred to the insects themselves, so that they might be able to inject the toxin into humans, the research and field testing is continuing.37
Animals
The genetic engineering of new types of insects, fish, birds and animals has the potential of upsetting natural ecosystems. They can displace natural species and upset the balance of other species through behavior patterns that are a result of their genetic transformation.
One of the more problematic ethical uses of animals is the creation of xenografts, already mentioned above, which often involve the insertion of human genes. (See the section immediately below.) Whether or not the genes inserted to create new animals are human ones, the xenografts are created for human use and patented for corporate profit with little or no regard for the suffering of the animals, their feelings and thoughts, or their natural life-patterns.
Use of Human Genes
As more and more human genes are being inserted into non-human organisms to create new forms of life that are genetically partly human, new ethical questions arise. What percent of human genes does an organism have to contain before it is considered human? For instance, how many human genes would a green pepper38 have to contain before one would have qualms about eating it? For meat-eaters, the same question could be posed about eating pork. If human beings have special ethical status, does the presence of human genes in an organism change its ethical status? What about a mouse genetically engineered to produce human sperm39 that is then used in the conception of a human child?
Several companies are working on developing pigs that have organs containing human genes in order to facilitate the use of the organs in humans. The basic idea is something like this. You can have your own personal organ donor pig with your genes implanted. When one of your organs gives out, you can use the pig's.
The U.S. Food and Drug Administration (FDA) issued a set of xenotransplant guidelines in September of 1996 that allows animal to human transplants, and puts the responsibility for health and safety at the level of local hospitals and medical review boards. A group of 44 top virologists, primate researchers, and AIDS specialists have attacked the FDA guidelines, saying, "based on knowledge of past cross-species transmissions, including AIDS, Herpes B virus, Ebola, and other viruses, the use of animals has not been adequately justified for use in a handful of patients when the potential costs could be in the hundreds, thousands or millions of human lives should a new infectious agent be transmitted."40
England has outlawed such transplants as too dangerous.41
Humans
Genetically engineered material can enter the body through food or bacteria or viruses. The dangers of lethal viruses containing genetically engineered material and created by natural processes have been mentioned above.
The dangers of generating pathogens by vector mobilization and recombination are real. Over a period of ten years, six scientists working with the genetic engineering of cancer-related oncogenes at the Pasteur Institute in France have contracted cancer.42
Non-human engineered genes can also be introduced into the body through the use of genetically engineered vaccines and other medicines, and through the use of animal parts genetically engineered with human genes to combat rejection problems.
Gene therapy, for the correction of defective human genes that cause certain genetic diseases, involves the intentional introduction of new genes into the body in an attempt to modify the genetic structure of the body. It is based on a simplistic and flawed model of gene function which assumes a one-to-one correspondence between individual gene and individual function. Since horizontal interaction43 among genes has been demonstrated, introduction of a new gene can have unforeseen effects. Another problem, already mentioned, is the slippery slope that leads to the notion of designer genes. We are already on that slope with the experimental administration of genetically engineered growth hormone to healthy children, simply because they are shorter than average and their parents would like them to be taller.44
A few years ago a biotech corporation applied to the European Patent Office for a patent on a so-called 'pharm-woman,' the idea being to genetically engineer human females so that their breast-milk would contain specialized pharmaceuticals.44a Work is also ongoing to use genetic engineering to grow human breasts in the laboratory. It doesn't take much imagination to realize that not only would they be used for breast replacement needed due to cancer surgery, but also to foster a vigorous commercial demand by women in search of the "perfect" breasts.45 A geneticist has recently proposed genetically engineering headless humans to be used for body parts. Some prominent geneticists have supported his idea.46
Genetically Engineered Food
Many scientists have claimed that the ingestion of genetically engineered food is harmless because the genetically engineered materials are destroyed by stomach acids. Recent research47 suggests that genetically engineered materials are not completely destroyed by stomach acids and that significant portions reach the bloodstream and also the brain-cells. Furthermore, it has been shown that the natural defense mechanisms of body cells are not entirely effective in keeping the genetically engineered substances out of the cells.48
Some dangers of eating genetically engineered foods are already documented. Risks to human health include the probable increase in the level of toxins in foods and in the number of disease-causing organisms that are resistant to antibiotics.49 The purposeful increase in toxins in foods to make them insect-resistant is the reversal of thousands of years of selective breeding of food-plants. For example, when plants are genetically engineered to resist predators, often the plant defense systems involve the synthesis of natural carcinogens.50
Industrial mistakes or carelessness in production of genetically engineered food ingredients can also cause serious problems. The L-tryptophan food supplement, an amino acid that was marketed as a natural tranquilizer and sleeping pill, was genetically engineered. It killed thirty-seven people and permanently disabled 1,500 others with an incurable nervous system condition known as eosinophilia myalgia syndrome (EMS).51
Dr. John Fagan has summarized some major risks of eating genetically engineered food as follows:
The new proteins produced in genetically engineered foods could: a) themselves act as allergens or toxins, b) alter the metabolism of the food-producing organism, causing it to produce new allergens or toxins, or c) cause it to be reduced in nutritional value.

In addition: a) Mutations can damage genes naturally present in the DNA of an organism, leading to altered metabolism, to the production of toxins, and to reduced nutritional value of the food. b) Mutations can alter the expression of normal genes, leading to the production of allergens and toxins, and to reduced nutritional value of the food. c) Mutations can interfere with other essential, but as yet unknown, functions of an organism's DNA.52
Basically what we have at present is a situation in which genetically engineered foods are beginning to flood the market, and no one knows what all their effects on humans will be. We are all becoming guinea pigs. Because genetically engineered food remains unlabeled, should serious problems arise, it will be extremely difficult to trace them to their source. Lack of labeling will also help to shield the corporations that are responsible from liability.
MORE BASIC ETHICAL PROBLEMS
The Cost of Sequencing a Human Genome
Advances in the field of genomics over the past quarter-century have led to substantial reductions in the cost of genome sequencing. The underlying costs associated with different methods and strategies for sequencing genomes are of great interest because they influence the scope and scale of almost all genomics research projects. As a result, significant scrutiny and attention have been given to genome-sequencing costs and how they are calculated since the beginning of the field of genomics in the late 1980s. For example, NHGRI has carefully tracked costs at its funded 'genome sequencing centers' for many years (see Figure 1). With the growing scale of human genetics studies and the increasing number of clinical applications for genome sequencing, even greater attention is being paid to understanding the underlying costs of generating a human genome sequence.
Accurately determining the cost for sequencing a given genome (e.g., a human genome) is not simple. There are many parameters to define and nuances to consider. In fact, it is difficult to cite precise genome-sequencing cost figures that mean the same thing to all people because, in reality, different researchers, research institutions, and companies typically track and account for such costs in different fashions.
A genome consists of all of the DNA contained in a cell's nucleus. DNA is composed of four chemical building blocks or "bases" (for simplicity, abbreviated G, A, T, and C), with the biological information encoded within DNA determined by the order of those bases. Diploid organisms, like humans and all other mammals, contain duplicate copies of almost all of their DNA (i.e., pairs of chromosomes; with one chromosome of each pair inherited from each parent). The size of an organism's genome is generally considered to be the total number of bases in one representative copy of its nuclear DNA. In the case of diploid organisms (like humans), that corresponds to the sum of the sizes of one copy of each chromosome pair.
Organisms generally differ in their genome sizes. For example, the genome of E. coli (a bacterium that lives in your gut) is ~5 million bases (5 megabases), that of a fruit fly is ~123 million bases, and that of a human is ~3,000 million bases (or ~3 billion bases). There are also some surprising extremes, such as with the loblolly pine tree - its genome is ~23 billion bases in size, over seven times larger than ours. Obviously, the cost to sequence a genome depends on its size. The discussion below is focused on the human genome; keep in mind that a single 'representative' copy of the human genome is ~3 billion bases in size, whereas a given person's actual (diploid) genome is ~6 billion bases in size.
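The size comparisons above can be sketched numerically. This is a minimal illustration using the rounded figures quoted in the text; the sizes are approximations, not exact values:

```python
# Approximate haploid genome sizes, in bases, as cited in the text.
genome_sizes = {
    "E. coli": 5_000_000,             # ~5 megabases
    "fruit fly": 123_000_000,         # ~123 million bases
    "human": 3_000_000_000,           # ~3 billion bases (one representative copy)
    "loblolly pine": 23_000_000_000,  # ~23 billion bases
}

human = genome_sizes["human"]
for organism, size in genome_sizes.items():
    # Report each genome in megabases and as a multiple of the human genome.
    print(f"{organism}: {size / 1e6:,.0f} Mb ({size / human:.2f}x human)")

# A person's actual (diploid) genome is roughly twice the haploid size.
print(f"diploid human genome: ~{2 * human / 1e9:.0f} billion bases")
```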
Genomes are large and, at least with today's methods, their bases cannot be 'read out' in order (i.e., sequenced) end-to-end in a single step. Rather, to sequence a genome, its DNA must first be broken down into smaller pieces, with each resulting piece then subjected to chemical reactions that allow the identity and order of its bases to be deduced. The established base order derived from each piece of DNA is often called a 'sequence read,' and the collection of the resulting set of sequence reads (often numbering in the billions) is then computationally assembled back together to deduce the sequence of the starting genome. Sequencing of human genomes is nowadays aided by the availability of 'reference' sequences of the human genome, which play an important role in the computational assembly process. Historically, the process of breaking down genomes, sequencing the individual pieces of DNA, and then reassembling the individual sequence reads to generate a sequence of the starting genome was called 'shotgun sequencing' (although this terminology is used less frequently today). When an entire genome is being sequenced, the process is called 'whole-genome sequencing.' See Figure 2 for a comparison of human genome sequencing methods during the time of the Human Genome Project and circa 2016.
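The shotgun-sequencing workflow described above can be sketched as a toy program. This is a deliberately simplified illustration: the 'reads' here are exact, error-free substrings, and 'assembly' is exact matching against a known reference, whereas real assemblers align billions of error-prone reads approximately:

```python
import random

def shotgun_reads(genome, read_len=8, coverage=5):
    """Break the genome into random, overlapping 'sequence reads'."""
    n_reads = coverage * len(genome) // read_len
    starts = random.choices(range(len(genome) - read_len + 1), k=n_reads)
    return [genome[i:i + read_len] for i in starts]

def reference_assemble(reads, reference):
    """Place each read at its position in a reference sequence, then read
    off a consensus; positions no read covered are reported as 'N'."""
    consensus = [None] * len(reference)
    for read in reads:
        pos = reference.find(read)  # exact matching, for clarity only
        if pos >= 0:
            for i, base in enumerate(read):
                consensus[pos + i] = base
    return "".join(b if b else "N" for b in consensus)

random.seed(0)
genome = "GATTACACCGGTTAACGGATCCGATTACAGGCC"  # a made-up miniature 'genome'
reads = shotgun_reads(genome)
print(reference_assemble(reads, genome))
```

Every base the reads happen to cover is recovered correctly; gaps in coverage show up as 'N', which is why real sequencing reads each base many times over.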
An alternative to whole-genome sequencing is the targeted sequencing of part of a genome. Most often, this involves just sequencing the protein-coding regions of a genome, which reside within DNA segments called 'exons' and reflect the currently 'best understood' part of most genomes. For example, all of the exons in the human genome (the human 'exome') correspond to ~1.5% of the total human genome. Methods are now readily available to experimentally 'capture' (or isolate) just the exons, which can then be sequenced to generate a 'whole-exome sequence' of a genome. Whole-exome sequencing does require extra laboratory manipulations, so a whole-exome sequence does not cost ~1.5% of a whole-genome sequence. But since much less DNA is sequenced, whole-exome sequencing is (at least currently) cheaper than whole-genome sequencing.
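A quick back-of-the-envelope calculation using the ~1.5% figure quoted above shows how much smaller the sequencing target is for a whole exome; as the text notes, the cost does not shrink proportionally because of the extra 'capture' laboratory steps:

```python
GENOME_BASES = 3_000_000_000  # ~3 billion bases, haploid human genome
EXOME_FRACTION = 0.015        # exons are ~1.5% of the genome (per the text)

exome_bases = GENOME_BASES * EXOME_FRACTION
print(f"human exome: ~{exome_bases / 1e6:.0f} million bases")
# -> human exome: ~45 million bases
```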
Another important driver of the costs associated with generating genome sequences relates to data quality. That quality is heavily dependent upon the average number of times each base in the genome is actually 'read' during the sequencing process. During the Human Genome Project (HGP), the typical levels of quality considered were: (1) 'draft sequence' (covering ~90% of the genome at ~99.9% accuracy); and (2) 'finished sequence' (covering >95% of the genome at ~99.99% accuracy). Producing truly high-quality 'finished' sequence by this definition is very expensive; of note, the process of 'sequence finishing' is very labor-intensive and is thus associated with high costs. In fact, most human genome sequences produced today are 'draft sequences' (sometimes above and sometimes below the accuracy defined above).
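To make the two quality tiers concrete, here is a hedged back-of-the-envelope estimate of how many base-call errors each level implies over a human-genome-sized sequence. The coverage fractions and accuracies are the approximate figures quoted above, not exact specifications:

```python
GENOME_BASES = 3_000_000_000  # ~3 billion bases (haploid human genome)

def expected_errors(genome_bases, fraction_covered, accuracy):
    """Covered bases times the per-base error rate (1 - accuracy)."""
    return genome_bases * fraction_covered * (1 - accuracy)

draft = expected_errors(GENOME_BASES, 0.90, 0.999)      # ~90% coverage, ~99.9% accuracy
finished = expected_errors(GENOME_BASES, 0.95, 0.9999)  # >95% coverage, ~99.99% accuracy

print(f"draft sequence: ~{draft / 1e6:.1f} million expected errors")
print(f"finished sequence: ~{finished / 1e3:.0f} thousand expected errors")
```

Even the tenfold jump in accuracy from 'draft' to 'finished' still leaves on the order of hundreds of thousands of expected errors, which is why labor-intensive 'sequence finishing' is so expensive.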
There are thus a number of factors to consider when calculating the costs associated with genome sequencing. There are multiple different types and quality levels of genome sequences, and there can be many steps and activities involved in the process itself. Understanding the true cost of a genome sequence therefore requires knowledge about what was and was not included in calculating that cost (e.g., sequence data generation, sequence finishing, upfront activities such as mapping, equipment amortization, overhead, utilities, salaries, data analyses, etc.). In reality, there are often differences in what gets included when estimating genome-sequencing costs in different situations.
Below is summary information about: (1) the estimated cost of sequencing the first human genome as part of the HGP; (2) the estimated cost of sequencing a human genome in 2006 (i.e., roughly a decade ago); and (3) the estimated cost of sequencing a human genome in 2016 (i.e., the present time).
The HGP generated a 'reference' sequence of the human genome - specifically, it sequenced one representative version of all parts of each human chromosome (totaling ~3 billion bases). In the end, the quality of the 'finished' sequence was very high, with an estimated error rate of <1 in 100,000 bases; note this is much higher quality than a typical human genome sequence produced today. The generated sequence did not come from one person's genome, and, being a 'reference' sequence of ~3 billion bases, really reflects half of what is generated when an individual person's ~6-billion-base genome is sequenced (see below).
The HGP involved first mapping and then sequencing the human genome. The former was required at the time because there was otherwise no 'framework' for organizing the actual sequencing or the resulting sequence data. The maps of the human genome served as 'scaffolds' on which to connect individual segments of assembled DNA sequence. These genome-mapping efforts were quite expensive, but were essential at the time for generating an accurate genome sequence. It is difficult to estimate the costs associated with the 'human genome mapping phase' of the HGP, but it was certainly in the many tens of millions of dollars (and probably hundreds of millions of dollars).
Once significant human genome sequencing began for the HGP, a 'draft' human genome sequence (as described above) was produced over a 15-month period (from April 1999 to June 2000). The estimated cost for generating that initial 'draft' human genome sequence is ~$300 million worldwide, of which NIH provided roughly 50-60%.
The HGP then proceeded to refine the 'draft' and produce a 'finished' human genome sequence (as described above), which was achieved by 2003. The estimated cost for advancing the 'draft' human genome sequence to the 'finished' sequence is ~$150 million worldwide. Of note, generating the final human genome sequence by the HGP also relied on the sequences of small targeted regions of the human genome that were generated before the HGP's main production-sequencing phase; it is impossible to estimate the costs associated with these various other genome-sequencing efforts, but they likely total in the tens of millions of dollars.
The above explanation illustrates the difficulty in coming up with a single, accurate number for the cost of generating that first human genome sequence as part of the HGP. Such a calculation requires a clear delineation about what does and does not get 'counted' in the estimate; further, most of the cost estimates for individual components can only be given as ranges. At the lower bound, it would seem that this cost figure is at least $500 million; at the upper bound, this cost figure could be as high as $1 billion. The truth is likely somewhere in between.
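As a rough sketch of that accounting, the component estimates above can be totaled into low and high bounds. Note that the low/high splits assumed below for the mapping and pre-HGP sequencing phases are illustrative guesses, since the text gives only loose ranges ("tens to hundreds of millions") for them.

```python
# Rough sketch of how the $500 million - $1 billion range above can be
# assembled from the component estimates; the (low, high) splits for the
# mapping and pre-HGP sequencing phases are assumptions, since only loose
# ranges are given for them.

components = {                    # (low, high) estimates in $ millions
    "genome mapping":       (50, 300),   # "tens to hundreds of millions"
    "pre-HGP targeted seq": (10, 50),    # "tens of millions"
    "draft sequence":       (300, 300),
    "draft -> finished":    (150, 150),
}

low = sum(lo for lo, hi in components.values())
high = sum(hi for lo, hi in components.values())
print(f"${low}M - ${high}M")
```

Even this toy tally lands in the stated range; pushing the mapping and ancillary estimates toward their upper ends is what carries the figure toward $1 billion.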
The above estimated cost for generating the first human genome sequence by the HGP should not be confused with the total cost of the HGP. The originally projected cost for the U.S.'s contribution to the HGP was $3 billion; in actuality, the Project ended up taking less time (~13 years rather than ~15 years) and requiring less funding - ~$2.7 billion. But the latter number represents the total U.S. funding for a wide range of scientific activities under the HGP's umbrella beyond human genome sequencing, including technology development, physical and genetic mapping, model organism genome mapping and sequencing, bioethics research, and program management. Further, this amount does not reflect the additional funds for an overlapping set of activities pursued by other countries that participated in the HGP.
As the HGP was nearing completion, genome-sequencing pipelines had stabilized to the point that NHGRI was able to collect fairly reliable cost information from the major sequencing centers funded by the Institute. Based on these data, NHGRI estimated that the hypothetical 2003 cost to generate a 'second' reference human genome sequence using the then-available approaches and technologies was in the neighborhood of $50 million.
Since the completion of the HGP and the generation of the first 'reference' human genome sequence, efforts have increasingly shifted to the generation of human genome sequences from individual people. Sequencing an individual's 'personal' genome actually involves establishing the identity and order of ~6 billion bases of DNA (rather than a ~3-billion-base 'reference' sequence; see above). Thus, the generation of a person's genome sequence is a notably different endeavor than what the HGP did.
Within a few years following the end of the HGP (e.g., in 2006), the landscape of genome sequencing was beginning to change. While revolutionary new DNA sequencing technologies, such as those in use today, had not quite been implemented at that time, genomics groups continued to refine the basic methodologies used during the HGP and continued lowering the costs of genome sequencing. Considerable effort was being devoted to the sequencing of nonhuman genomes (much more so than human genomes), but the cost-accounting data collected at that time can be used to estimate the approximate cost that would have been associated with human genome sequencing then.
Based on data collected by NHGRI from the Institute's funded genome-sequencing groups, the cost to generate a high-quality 'draft' human genome sequence had dropped to ~$14 million by 2006. Hypothetically, it would have likely cost upwards of $20-25 million to generate a 'finished' human genome sequence - expensive, but still considerably less so than for generating the first reference human genome sequence.
The decade following the HGP brought revolutionary advances in DNA sequencing technologies that are fundamentally changing the nature of genomics. So-called 'next-generation' DNA sequencing methods arrived on the scene, and their effects quickly became evident in lowering genome-sequencing costs (note that the NHGRI-collected cost data are 'retrospective' in nature, and do not always accurately reflect the 'projected' costs for genome sequencing going forward).
In 2015, the most common routine for sequencing an individual's genome involves generating a 'draft' sequence and comparing it to a reference human genome sequence, so as to catalog all sequence variants in that genome; such a routine does not involve any sequence finishing. In short, nearly all human genome sequencing in 2015 yields high-quality 'draft' (but unfinished) sequence. That sequencing is typically targeted to all exons (whole-exome sequencing) or aimed at the entire ~6-billion-base genome (whole-genome sequencing), as discussed above. The quality of the resulting 'draft' sequences depends heavily on the average base redundancy ('coverage') provided by the generated data (with higher redundancy costing more).
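The notion of average base redundancy can be made concrete with a small sketch. The coverage arithmetic below (coverage = read length × number of reads / genome size) is the standard Lander-Waterman-style estimate, but the read length and read count used are hypothetical examples, not NHGRI figures.

```python
# Sketch of sequencing coverage (average base redundancy) arithmetic.
# The formula is standard; the example read length and read count below
# are hypothetical, not figures from the article.

GENOME_SIZE = 6_000_000_000  # ~6 billion bases in a diploid human genome

def coverage(read_length, num_reads, genome_size=GENOME_SIZE):
    """Average base redundancy: total bases sequenced / genome size."""
    return read_length * num_reads / genome_size

# e.g. 150-base reads, 1.2 billion of them -> 30x average coverage
print(coverage(150, 1_200_000_000))
```

Doubling the redundancy doubles the bases that must be generated, which is why higher-coverage 'draft' sequences cost more.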
Adding to the complex landscape of genome sequencing in 2015 has been the emergence of commercial enterprises offering genome-sequencing services at competitive pricing. Direct comparisons between commercial versus academic genome-sequencing operations can be particularly challenging because of the many nuances about what each includes in any cost estimates (with such details often not revealed by private companies). The cost data that NHGRI collects from its funded genome-sequencing groups includes information about a wide range of activities and components, such as: reagents, consumables, DNA-sequencing instruments, certain computer equipment, other equipment, laboratory pipeline development, laboratory information management systems, initial data processing, submission of data to public databases, project management, utilities, other indirect costs, labor, and administration. Note that such cost-accounting does not typically include activities such as quality assurance/quality control (QA/QC), alignment of generated sequence to a reference human genome, sequence assembly, genomic variant calling, or annotation. Almost certainly, companies vary in terms of which of the items in the above lists get included in any cost estimates, making direct cost comparisons with academic genome-sequencing groups difficult. It is thus important to consider these variables - along with the distinction between retrospective versus projected costs - when comparing genome-sequencing costs claimed by different groups. Anyone comparing costs for genome sequencing should also be aware of the distinction between 'price' and 'cost' - a given price may be either higher or lower than the actual cost.
Based on the data collected from NHGRI-funded genome-sequencing groups, the cost to generate a high-quality 'draft' whole human genome sequence in mid-2015 was just above $4,000; by late in 2015, that figure had fallen below $1,500. The cost to generate a whole-exome sequence was generally below $1,000. Commercial prices for whole-genome and whole-exome sequences have often (but not always) been slightly below these numbers.
Innovation in genome-sequencing technologies and strategies does not appear to be slowing. As a result, one can readily expect continued reductions in the cost for human genome sequencing. The key factors to consider when assessing the 'value' associated with an estimated cost for generating a human genome sequence - in particular, the amount of the genome (whole versus exome), quality, and associated data analysis (if any) - will likely remain largely the same. With new DNA-sequencing platforms anticipated in the coming years, the nature of the generated sequence data and the associated costs will likely continue to be dynamic. As such, continued attention will need to be paid to the way in which the costs associated with genome sequencing are calculated.
Last Updated: June 6, 2016
Maximum life span – Wikipedia, the free encyclopedia
Maximum life span is a measure of the maximum amount of time one or more members of a population have been observed to survive between birth and death. The term can also denote an estimate of the maximum amount of time that a member of a given species could survive between birth and death, provided circumstances that are optimal to that member's longevity.
Most living species have an upper limit on the number of times their cells can divide. This is called the Hayflick limit, although the number of cell divisions does not strictly control lifespan (both non-dividing cells and dividing cells lived over 122 years in the oldest known human).
In animal studies, maximum span is often taken to be the mean life span of the most long-lived 10% of a given cohort. By another definition, however, maximum life span corresponds to the age at which the oldest known member of a species or experimental group has died. Calculation of the maximum life span in the latter sense depends upon initial sample size.[1]
Maximum life span contrasts with mean life span (average life span, life expectancy), and longevity. Mean life span varies with susceptibility to disease, accident, suicide and homicide, whereas maximum life span is determined by "rate of aging".[2] Longevity refers only to the characteristics of the especially long lived members of a population, such as infirmities as they age or compression of morbidity, and not the specific life span of an individual.
The longest-living person whose dates of birth and death were verified to the modern norms of Guinness World Records and the Gerontology Research Group was Jeanne Calment, a French woman who lived to 122. Reduction of infant mortality has accounted for most of the increase in average life span, but since the 1960s mortality rates among those over 80 years have decreased by about 1.5% per year. "The progress being made in lengthening lifespans and postponing senescence is entirely due to medical and public-health efforts, rising standards of living, better education, healthier nutrition and more salubrious lifestyles."[3] Animal studies suggest that further lengthening of human lifespan could be achieved through "calorie restriction mimetic" drugs or by directly reducing food consumption. Although calorie restriction has not been proven to extend the maximum human life span, as of 2014, results in ongoing primate studies have demonstrated that the assumptions derived from rodents are valid in primates as well [Reference: Nature 01.04.2014].[4]
No fixed theoretical limit to human longevity is apparent today.[5] "A fundamental question in aging research is whether humans and other species possess an immutable life-span limit."[6] "The assumption that the maximum human life span is fixed has been justified, [but] is invalid in a number of animal models and ... may become invalid for humans as well."[7] Studies in the biodemography of human longevity indicate a late-life mortality deceleration law: that death rates level off at advanced ages to a late-life mortality plateau. That is, there is no fixed upper limit to human longevity, or fixed maximal human lifespan.[8] This law was first quantified in 1939, when researchers found that the one-year probability of death at advanced age asymptotically approaches a limit of 44% for women and 54% for men.[9]
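The mortality-deceleration idea can be sketched with a logistic (Kannisto-style) hazard that grows like a Gompertz curve at first and then levels off. The parameter values below are hypothetical, chosen only so the one-year death probability plateaus near the ~44% figure cited for women; they are not fitted to real data.

```python
import math

# Illustrative sketch of late-life mortality deceleration: a logistic
# (Kannisto-style) hazard that rises like a Gompertz curve and then
# plateaus. All parameter values here are hypothetical, chosen so the
# one-year death probability levels off near the ~0.44 figure cited
# above for women; they are not fitted to real data.

def hazard(age, a=5e-7, b=0.13, plateau=0.58):
    """Hazard rate that decelerates toward a finite plateau."""
    g = a * math.exp(b * age)
    return plateau * g / (1 + g)

def one_year_death_prob(age):
    """Probability of dying within one year at a given age."""
    return 1 - math.exp(-hazard(age))

# Death probability rises with age but levels off instead of reaching 1:
for age in (80, 100, 120, 140):
    print(age, round(one_year_death_prob(age), 3))
```

The key qualitative point is that, unlike a pure Gompertz hazard, the one-year death probability here never approaches certainty, which is what "no fixed upper limit" means in this context.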
It has also been observed that the VO2max value (a measure of the volume of oxygen flow to the cardiac muscle) decreases as a function of age. Therefore, the maximum lifespan of an individual can be determined by calculating when his or her VO2max value drops below the basal metabolic rate necessary to sustain life - approximately 3 ml per kg per minute.[10] Noakes (p.84) notes that, on the basis of this hypothesis, athletes with a VO2max value between 50 and 60 at age 20 can be expected "to live for 100 to 125 years, provided they maintained their physical activity so that their rate of decline in VO2max remained constant."
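The VO2max-based estimate amounts to simple linear extrapolation. In the sketch below, the ~3 ml/kg/min basal floor comes from the text, while the decline rate is a hypothetical value chosen to roughly reproduce the ~100-year figure for an athlete starting at a VO2max of 50.

```python
# Sketch of the VO2max-based lifespan estimate described above: assume
# VO2max declines linearly with age, and find the age at which it falls
# below the ~3 ml/kg/min needed to sustain basal metabolism. The decline
# rate used here is a hypothetical illustration, not a measured value.

BASAL_VO2 = 3.0  # ml O2 per kg per minute, approximate survival floor

def estimated_lifespan(vo2max_at_20, decline_per_year):
    """Age at which a linear VO2max decline crosses the basal floor."""
    return 20 + (vo2max_at_20 - BASAL_VO2) / decline_per_year

# An athlete with VO2max 50 at age 20, declining ~0.59 ml/kg/min/year:
print(round(estimated_lifespan(50, 0.59)))
```

As the quoted passage notes, the whole estimate hinges on the decline rate staying constant; a faster decline proportionally shortens the projected lifespan.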
A theoretical study suggested the maximum human lifespan to be around 125 years using a modified stretched exponential function for human survival curves.[11]
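A stretched-exponential survival curve, S(t) = exp(-(t/tau)^beta), can be used to see how such a maximum arises: the maximum lifespan is the age at which the expected surviving fraction of a population falls below some tiny cutoff. The tau, beta, and cutoff values below are hypothetical choices for illustration, not the fitted values of the cited study.

```python
import math

# Illustrative stretched-exponential survival curve,
# S(t) = exp(-(t / tau) ** beta), the functional form mentioned above.
# The parameter values and the "one survivor in 10 billion" cutoff are
# hypothetical choices, not those of the cited study.

def survival(age, tau=90.0, beta=10.0):
    """Expected fraction of a cohort still alive at a given age."""
    return math.exp(-((age / tau) ** beta))

def max_lifespan(cutoff=1e-10, tau=90.0, beta=10.0):
    """Smallest whole-year age at which the surviving fraction drops
    below the cutoff."""
    age = 0
    while survival(age, tau, beta) >= cutoff:
        age += 1
    return age

print(max_lifespan())  # around 125 under these assumed parameters
```

Because the exponent beta is large, the survival curve falls off very steeply near tau, so the implied maximum is fairly insensitive to the exact choice of cutoff.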
Small animals such as birds and squirrels rarely live to their maximum life span, usually dying of accidents, disease or predation. Grazing animals accumulate wear and tear to their teeth to the point where they can no longer eat, and they die of starvation.[citation needed]
The maximum life span of most species has not been accurately determined, because the data collection has been minimal and the number of species studied in captivity (or by monitoring in the wild) has been small.[citation needed]
Maximum life span is usually longer for species that are larger or have effective defenses against predation, such as bird flight, tortoise shells, porcupine quills, or large primate brains.
The differences in life span between species demonstrate the role of genetics in determining maximum life span ("rate of aging"). The records (in years) are these:
The longest-lived vertebrates have been variously described as
With the possible exception of the bowhead whale, claims of lifespans greater than 100 years rely on conjecture (e.g., counting otoliths) rather than empirical, continuous documentation.[citation needed]
Invertebrate species which continue to grow as long as they live (e.g., certain clams, some coral species) can on occasion live hundreds of years:
Plants are referred to as annuals, which live only one year; biennials, which live two years; and perennials, which live longer than that. The longest-lived perennials, woody-stemmed plants such as trees and bushes, often live for hundreds and even thousands of years (whether they can die of old age at all is an open question). The giant sequoia General Sherman is alive and well in its third millennium. A Great Basin bristlecone pine called Methuselah is 4,845 years old (as of 2014), and the bristlecone pine called Prometheus was a little older still, at least 4,844 years old (and possibly as old as 5,000 years), when it was cut down in 1964. The oldest known plant (and possibly the oldest living thing) is a clonal quaking aspen (Populus tremuloides) colony in the Fishlake National Forest in Utah, called Pando, at about 80,000 years.
"Maximum life span" here means the mean life span of the most long-lived 10% of a given cohort. Caloric restriction has not yet been shown to break mammalian world records for longevity. Rats, mice, and hamsters experience maximum life-span extension from a diet that contains all of the nutrients but only 4060% of the calories that the animals consume when they can eat as much as they want. Mean life span is increased 65% and maximum life span is increased 50%, when caloric restriction is begun just before puberty.[37] For fruit flies the life extending benefits of calorie restriction are gained immediately at any age upon beginning calorie restriction and ended immediately at any age upon resuming full feeding.[38]
A few transgenic strains of mice have been created that have maximum life spans greater than those of wild-type or laboratory mice. The Ames and Snell mice, which have mutations in pituitary transcription factors and hence are deficient in GH, LH, TSH, and secondarily IGF-1, have extensions in maximal lifespan of up to 65%. To date, in both absolute and relative terms, these Ames and Snell mice have the maximum lifespan of any mouse not on caloric restriction (see below on GHR). Mutations/knockouts of other genes affecting the GH/IGF-1 axis, such as Lit, Ghr, and Irs1, have also shown extension in lifespan, but much more modest in both relative and absolute terms. The longest-lived laboratory mouse ever was a Ghr knockout mouse on caloric restriction, which lived to ~1800 days in the lab of Andrzej Bartke at Southern Illinois University. The maximum for normal B6 mice under ideal conditions is 1200 days.
Most biomedical gerontologists believe that biomedical molecular engineering will eventually extend maximum lifespan and even bring about rejuvenation.[citation needed] Anti-aging drugs are a potential tool for extending life.[39]
Aubrey de Grey, a theoretical gerontologist, has proposed that aging can be reversed by Strategies for Engineered Negligible Senescence. De Grey has established The Methuselah Mouse Prize to award money to researchers who can extend the maximum life span of mice. So far, three Mouse Prizes have been awarded: one for breaking longevity records to Dr. Andrzej Bartke of Southern Illinois University (using GhR knockout mice); one for late-onset rejuvenation strategies to Dr. Stephen Spindler of the University of California (using caloric restriction initiated late in life); and one to Dr. Z. Dave Sharp for his work with the pharmaceutical rapamycin.[40]
Accumulated DNA damage appears to be a limiting factor in the determination of maximum life span. The theory that DNA damage is the primary cause of aging, and thus a principal determinant of maximum life span, has attracted increased interest in recent years. This is based, in part, on evidence in human and mouse that inherited deficiencies in DNA repair genes often cause accelerated aging.[41][42][43] There is also substantial evidence that DNA damage accumulates with age in mammalian tissues, such as those of the brain, muscle, liver and kidney (reviewed by Bernstein et al.[44] and see DNA damage theory of aging and DNA damage (naturally occurring)). One expectation of the theory (that DNA damage is the primary cause of aging) is that among species with differing maximum life spans, the capacity to repair DNA damage should correlate with lifespan. The first experimental test of this idea was by Hart and Setlow[45] who measured the capacity of cells from seven different mammalian species to carry out DNA repair. They found that nucleotide excision repair capability increased systematically with species longevity. This correlation was striking and stimulated a series of 11 additional experiments in different laboratories over succeeding years on the relationship of nucleotide excision repair and life span in mammalian species (reviewed by Bernstein and Bernstein[46]). In general, the findings of these studies indicated a good correlation between nucleotide excision repair capacity and life span. The association between nucleotide excision repair capability and longevity is strengthened by the evidence that defects in nucleotide excision repair proteins in humans and rodents cause features of premature aging, as reviewed by Diderich.[42]
Further support for the theory that DNA damage is the primary cause of aging comes from study of Poly ADP ribose polymerases (PARPs). PARPs are enzymes that are activated by DNA strand breaks and play a role in DNA base excision repair. Burkle et al. reviewed evidence that PARPs, and especially PARP-1, are involved in maintaining mammalian longevity.[47] The life span of 13 mammalian species correlated with poly(ADP ribosyl)ation capability measured in mononuclear cells. Furthermore, lymphoblastoid cell lines from peripheral blood lymphocytes of humans over age 100 had a significantly higher poly(ADP-ribosyl)ation capability than control cell lines from younger individuals.