Daily Archives: November 23, 2016

Gene H. Kim, MD – The University of Chicago Medicine

Posted: November 23, 2016 at 9:56 pm

Gene Kim, MD, provides skilled medical care to adults with cardiovascular disease. He focuses on heart transplantation and patients with advanced heart failure.

Dr. Kim is investigating microRNA regulation of cardiovascular development and function. He is also currently researching the use of high-frequency ultrasonic imaging in laboratory models to detect a wide range of cardiac disorders, including aortic and vascular disorders, hypertension, hypertrophy, cardiomyopathy, and right ventricular dysfunction.

The University of Chicago Medicine 5841 S. Maryland Avenue Chicago, IL 60637

2008

Internal Medicine Cardiovascular Diseases

The University of Chicago Pritzker School of Medicine

The University of Chicago Medicine

American Heart Association

English

gkim1@medicine.bsd.uchicago.edu

(773) 702-3936

(773) 834-1764

Gene H. Kim, MD The University of Chicago Medicine 5841 S. Maryland Avenue, MC 6080 Chicago, IL 60637

Request an appointment online or call UCM Connect at 1-888-824-0200

Physicians, contact the Referring Physician Access Line at 1-877-DOM-2730

Additionally, Dr. Kim provides cardiology care through the Urban Health Initiative, a partnership designed to improve access to health care for residents on the South Side of Chicago.

View a partial list of Dr. Kim's publications through the National Library of Medicine's PubMed online database.

Read more here:
Gene H. Kim, MD - The University of Chicago Medicine

Posted in Gene Medicine | Comments Off on Gene H. Kim, MD – The University of Chicago Medicine

Gene – Wikipedia

Posted: at 9:56 pm

This article is about the heritable unit for transmission of biological traits. For other uses, see Gene (disambiguation).

A gene is a locus (or region) of DNA which is made up of nucleotides and is the molecular unit of heredity.[1][2]:Glossary The transmission of genes to an organism's offspring is the basis of the inheritance of phenotypic traits. Most biological traits are under the influence of polygenes (many different genes) as well as gene-environment interactions. Some genetic traits are instantly visible, such as eye colour or number of limbs, and some are not, such as blood type, risk for specific diseases, or the thousands of basic biochemical processes that comprise life.

Genes can acquire mutations in their sequence, leading to different variants, known as alleles, in the population. These alleles encode slightly different versions of a protein, which cause different phenotypic traits. Colloquial usage of the term "having a gene" (e.g., "good genes," "hair colour gene") typically refers to having a different allele of the gene. Genes evolve due to natural selection or survival of the fittest of the alleles.

The concept of a gene continues to be refined as new phenomena are discovered.[3] For example, regulatory regions of a gene can be far removed from its coding regions, and coding regions can be split into several exons. Some viruses store their genome in RNA instead of DNA and some gene products are functional non-coding RNAs. Therefore, a broad, modern working definition of a gene is any discrete locus of heritable, genomic sequence which affects an organism's traits by being expressed as a functional product or by regulation of gene expression.[4][5]

The existence of discrete inheritable units was first suggested by Gregor Mendel (1822-1884).[6] From 1857 to 1864, he studied inheritance patterns in 8000 common edible pea plants, tracking distinct traits from parent to offspring. He described these mathematically as 2^n combinations, where n is the number of differing characteristics in the original peas. Although he did not use the term gene, he explained his results in terms of discrete inherited units that give rise to observable physical characteristics. This description prefigured the distinction between genotype (the genetic material of an organism) and phenotype (the visible traits of that organism). Mendel was also the first to demonstrate independent assortment, the distinction between dominant and recessive traits, the distinction between a heterozygote and homozygote, and the phenomenon of discontinuous inheritance.

Prior to Mendel's work, the dominant theory of heredity was one of blending inheritance, which suggested that each parent contributed fluids to the fertilisation process and that the traits of the parents blended and mixed to produce the offspring. Charles Darwin developed a theory of inheritance he termed pangenesis, from Greek pan ("all, whole") and genesis ("birth") / genos ("origin").[7][8] Darwin used the term gemmule to describe hypothetical particles that would mix during reproduction.

Mendel's work went largely unnoticed after its first publication in 1866, but was rediscovered in the late 19th century by Hugo de Vries, Carl Correns, and Erich von Tschermak, who (claimed to have) reached similar conclusions in their own research.[9] Specifically, in 1889, Hugo de Vries published his book Intracellular Pangenesis,[10] in which he postulated that different characters have individual hereditary carriers and that inheritance of specific traits in organisms comes in particles. De Vries called these units "pangenes" (Pangens in German), after Darwin's 1868 pangenesis theory.

Sixteen years later, in 1905, the word genetics was first used by William Bateson,[11] while Eduard Strasburger, amongst others, still used the term pangene for the fundamental physical and functional unit of heredity.[12] In 1909 the Danish botanist Wilhelm Johannsen shortened the name to "gene".[13]

Advances in understanding genes and inheritance continued throughout the 20th century. Deoxyribonucleic acid (DNA) was shown to be the molecular repository of genetic information by experiments in the 1940s to 1950s.[14][15] The structure of DNA was studied by Rosalind Franklin and Maurice Wilkins using X-ray crystallography, which led James D. Watson and Francis Crick to publish a model of the double-stranded DNA molecule whose paired nucleotide bases indicated a compelling hypothesis for the mechanism of genetic replication.[16][17]

In the early 1950s the prevailing view was that the genes in a chromosome acted like discrete entities, indivisible by recombination and arranged like beads on a string. The experiments of Benzer using mutants defective in the rII region of bacteriophage T4 (1955-1959) showed that individual genes have a simple linear structure and are likely to be equivalent to a linear section of DNA.[18][19]

Collectively, this body of research established the central dogma of molecular biology, which states that proteins are translated from RNA, which is transcribed from DNA. This dogma has since been shown to have exceptions, such as reverse transcription in retroviruses. The modern study of genetics at the level of DNA is known as molecular genetics.

In 1972, Walter Fiers and his team at the University of Ghent were the first to determine the sequence of a gene: the gene for Bacteriophage MS2 coat protein.[20] The subsequent development of chain-termination DNA sequencing in 1977 by Frederick Sanger improved the efficiency of sequencing and turned it into a routine laboratory tool.[21] An automated version of the Sanger method was used in early phases of the Human Genome Project.[22]

The theories developed in the 1930s and 1940s to integrate molecular genetics with Darwinian evolution are called the modern evolutionary synthesis, a term introduced by Julian Huxley.[23] Evolutionary biologists subsequently refined this concept, such as George C. Williams' gene-centric view of evolution. He proposed an evolutionary concept of the gene as a unit of natural selection with the definition: "that which segregates and recombines with appreciable frequency."[24]:24 In this view, the molecular gene transcribes as a unit, and the evolutionary gene inherits as a unit. Related ideas emphasizing the centrality of genes in evolution were popularized by Richard Dawkins.[25][26]

The vast majority of living organisms encode their genes in long strands of DNA (deoxyribonucleic acid). DNA consists of a chain made from four types of nucleotide subunits, each composed of: a five-carbon sugar (2'-deoxyribose), a phosphate group, and one of the four bases adenine, cytosine, guanine, and thymine.[2]:2.1

Two chains of DNA twist around each other to form a DNA double helix with the phosphate-sugar backbone spiralling around the outside, and the bases pointing inwards with adenine base pairing to thymine and guanine to cytosine. The specificity of base pairing occurs because adenine and thymine align to form two hydrogen bonds, whereas cytosine and guanine form three hydrogen bonds. The two strands in a double helix must therefore be complementary, with their sequence of bases matching such that the adenines of one strand are paired with the thymines of the other strand, and so on.[2]:4.1
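
To make the base-pairing rule concrete, here is a minimal Python sketch (an illustration added for this page, not part of the source): it pairs A with T and G with C to derive the reverse complement of one strand, which is the sequence of the opposing, antiparallel strand read in its own 5' to 3' direction.

# Watson-Crick pairing: A with T, G with C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the opposing strand, read 5' to 3' (hence the reversal)."""
    return "".join(PAIR[base] for base in reversed(strand))

print(reverse_complement("ATGCTTCAG"))  # -> CTGAAGCAT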

Due to the chemical composition of the pentose residues of the bases, DNA strands have directionality. One end of a DNA polymer contains an exposed hydroxyl group on the deoxyribose; this is known as the 3' end of the molecule. The other end contains an exposed phosphate group; this is the 5' end. The two strands of a double helix run in opposite directions. Nucleic acid synthesis, including DNA replication and transcription, occurs in the 5' to 3' direction, because new nucleotides are added via a dehydration reaction that uses the exposed 3' hydroxyl as a nucleophile.[27]:27.2

The expression of genes encoded in DNA begins by transcribing the gene into RNA, a second type of nucleic acid that is very similar to DNA, but whose monomers contain the sugar ribose rather than deoxyribose. RNA also contains the base uracil in place of thymine. RNA molecules are less stable than DNA and are typically single-stranded. Genes that encode proteins are composed of a series of three-nucleotide sequences called codons, which serve as the "words" in the genetic "language". The genetic code specifies the correspondence during protein translation between codons and amino acids. The genetic code is nearly the same for all known organisms.[2]:4.1

The total complement of genes in an organism or cell is known as its genome, which may be stored on one or more chromosomes. A chromosome consists of a single, very long DNA helix on which thousands of genes are encoded.[2]:4.2 The region of the chromosome at which a particular gene is located is called its locus. Each locus contains one allele of a gene; however, members of a population may have different alleles at the locus, each with a slightly different gene sequence.

The majority of eukaryotic genes are stored on a set of large, linear chromosomes. The chromosomes are packed within the nucleus in complex with storage proteins called histones to form a unit called a nucleosome. DNA packaged and condensed in this way is called chromatin.[2]:4.2 The manner in which DNA is stored on the histones, as well as chemical modifications of the histone itself, regulate whether a particular region of DNA is accessible for gene expression. In addition to genes, eukaryotic chromosomes contain sequences involved in ensuring that the DNA is copied without degradation of end regions and sorted into daughter cells during cell division: replication origins, telomeres and the centromere.[2]:4.2 Replication origins are the sequence regions where DNA replication is initiated to make two copies of the chromosome. Telomeres are long stretches of repetitive sequence that cap the ends of the linear chromosomes and prevent degradation of coding and regulatory regions during DNA replication. The length of the telomeres decreases each time the genome is replicated and has been implicated in the aging process.[29] The centromere is required for binding spindle fibres to separate sister chromatids into daughter cells during cell division.[2]:18.2

Prokaryotes (bacteria and archaea) typically store their genomes on a single large, circular chromosome. Similarly, some eukaryotic organelles contain a remnant circular chromosome with a small number of genes.[2]:14.4 Prokaryotes sometimes supplement their chromosome with additional small circles of DNA called plasmids, which usually encode only a few genes and are transferable between individuals. For example, the genes for antibiotic resistance are usually encoded on bacterial plasmids and can be passed between individual cells, even those of different species, via horizontal gene transfer.[30]

Whereas the chromosomes of prokaryotes are relatively gene-dense, those of eukaryotes often contain regions of DNA that serve no obvious function. Simple single-celled eukaryotes have relatively small amounts of such DNA, whereas the genomes of complex multicellular organisms, including humans, contain an absolute majority of DNA without an identified function.[31] This DNA has often been referred to as "junk DNA". However, more recent analyses suggest that, although protein-coding DNA makes up barely 2% of the human genome, about 80% of the bases in the genome may be expressed, so the term "junk DNA" may be a misnomer.[5]

The structure of a gene consists of many elements of which the actual protein coding sequence is often only a small part. These include DNA regions that are not transcribed as well as untranslated regions of the RNA.

Firstly, flanking the open reading frame, all genes contain a regulatory sequence that is required for their expression. In order to be expressed, genes require a promoter sequence. The promoter is recognized and bound by transcription factors and RNA polymerase to initiate transcription.[2]:7.1 A gene can have more than one promoter, resulting in messenger RNAs (mRNA) that differ in how far they extend at the 5' end.[32] Promoter regions share a consensus sequence, but highly transcribed genes have "strong" promoter sequences that bind the transcription machinery well, whereas others have "weak" promoters that bind poorly and initiate transcription less frequently.[2]:7.2 Eukaryotic promoter regions are much more complex and difficult to identify than prokaryotic promoters.[2]:7.3

Additionally, genes can have regulatory regions many kilobases upstream or downstream of the open reading frame. These act by binding to transcription factors which then cause the DNA to loop so that the regulatory sequence (and bound transcription factor) become close to the RNA polymerase binding site.[33] For example, enhancers increase transcription by binding an activator protein which then helps to recruit the RNA polymerase to the promoter; conversely silencers bind repressor proteins and make the DNA less available for RNA polymerase.[34]

The transcribed pre-mRNA contains untranslated regions at both ends, which contain a ribosome binding site, terminator, and start and stop codons.[35] In addition, most eukaryotic open reading frames contain untranslated introns, which are removed before the exons are translated. The sequences at the ends of the introns dictate the splice sites used to generate the final mature mRNA, which encodes the protein or RNA product.[36]

Many prokaryotic genes are organized into operons, with multiple protein-coding sequences that are transcribed as a unit.[37][38] The genes in an operon are transcribed as a continuous messenger RNA, referred to as a polycistronic mRNA. The term cistron in this context is equivalent to gene. The transcription of an operon's mRNA is often controlled by a repressor that can occur in an active or inactive state depending on the presence of certain specific metabolites.[39] When active, the repressor binds to a DNA sequence at the beginning of the operon, called the operator region, and represses transcription of the operon; when the repressor is inactive, transcription of the operon can occur (see e.g. the Lac operon). The products of operon genes typically have related functions and are involved in the same regulatory network.[2]:7.3

Defining exactly what section of a DNA sequence comprises a gene is difficult.[3] Regulatory regions of a gene such as enhancers do not necessarily have to be close to the coding sequence on the linear molecule because the intervening DNA can be looped out to bring the gene and its regulatory region into proximity. Similarly, a gene's introns can be much larger than its exons. Regulatory regions can even be on entirely different chromosomes and operate in trans to allow regulatory regions on one chromosome to come in contact with target genes on another chromosome.[40][41]

Early work in molecular genetics suggested the concept that one gene makes one protein. This concept (originally called the one gene-one enzyme hypothesis) emerged from an influential 1941 paper by George Beadle and Edward Tatum on experiments with mutants of the fungus Neurospora crassa.[42] Norman Horowitz, an early colleague on the Neurospora research, reminisced in 2004 that "these experiments founded the science of what Beadle and Tatum called biochemical genetics. In actuality they proved to be the opening gun in what became molecular genetics and all the developments that have followed from that."[43] The one gene-one protein concept has been refined since the discovery of genes that can encode multiple proteins by alternative splicing and coding sequences split into short sections across the genome whose mRNAs are concatenated by trans-splicing.[5][44][45]

A broad operational definition is sometimes used to encompass the complexity of these diverse phenomena, where a gene is defined as "a union of genomic sequences encoding a coherent set of potentially overlapping functional products".[11] This definition categorizes genes by their functional products (proteins or RNA) rather than their specific DNA loci, with regulatory elements classified as gene-associated regions.[11]

In all organisms, two steps are required to read the information encoded in a gene's DNA and produce the protein it specifies. First, the gene's DNA is transcribed to messenger RNA (mRNA).[2]:6.1 Second, that mRNA is translated to protein.[2]:6.2 RNA-coding genes must still go through the first step, but are not translated into protein.[46] The process of producing a biologically functional molecule of either RNA or protein is called gene expression, and the resulting molecule is called a gene product.

The nucleotide sequence of a gene's DNA specifies the amino acid sequence of a protein through the genetic code. Sets of three nucleotides, known as codons, each correspond to a specific amino acid.[2]:6 The principle that three sequential bases of DNA code for each amino acid was demonstrated in 1961 using frameshift mutations in the rIIB gene of bacteriophage T4[47] (see Crick, Brenner et al. experiment).

Additionally, a "start codon", and three "stop codons" indicate the beginning and end of the protein coding region. There are 64possible codons (four possible nucleotides at each of three positions, hence 43possible codons) and only 20standard amino acids; hence the code is redundant and multiple codons can specify the same amino acid. The correspondence between codons and amino acids is nearly universal among all known living organisms.[48]

Transcription produces a single-stranded RNA molecule known as messenger RNA, whose nucleotide sequence is complementary to the DNA from which it was transcribed.[2]:6.1 The mRNA acts as an intermediate between the DNA gene and its final protein product. The gene's DNA is used as a template to generate a complementary mRNA. The mRNA matches the sequence of the gene's DNA coding strand because it is synthesised as the complement of the template strand. Transcription is performed by an enzyme called an RNA polymerase, which reads the template strand in the 3' to 5' direction and synthesizes the RNA from 5' to 3'. To initiate transcription, the polymerase first recognizes and binds a promoter region of the gene. Thus, a major mechanism of gene regulation is the blocking or sequestering of the promoter region, either by tight binding of repressor molecules that physically block the polymerase, or by organizing the DNA so that the promoter region is not accessible.[2]:7
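
As a rough sketch of the template/coding-strand relationship described above (a toy model with a made-up sequence, ignoring promoters and polymerase mechanics), the Python snippet below builds an mRNA from the template strand; the result matches the coding strand except that uracil replaces thymine.

# Template base -> RNA base pairing used during transcription.
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3to5: str) -> str:
    """Given the template strand written 3' to 5', return the mRNA 5' to 3'."""
    return "".join(DNA_TO_RNA[base] for base in template_3to5)

coding_strand = "ATGGCTTAA"  # written 5' to 3'
template_3to5 = coding_strand.translate(str.maketrans("ATGC", "TACG"))  # base-paired partner
print(transcribe(template_3to5))  # AUGGCUUAA -- the coding strand with U in place of T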

In prokaryotes, transcription occurs in the cytoplasm; for very long transcripts, translation may begin at the 5' end of the RNA while the 3' end is still being transcribed. In eukaryotes, transcription occurs in the nucleus, where the cell's DNA is stored. The RNA molecule produced by the polymerase is known as the primary transcript and undergoes post-transcriptional modifications before being exported to the cytoplasm for translation. One of the modifications performed is the splicing of introns, which are sequences in the transcribed region that do not encode protein. Alternative splicing mechanisms can result in mature transcripts from the same gene having different sequences and thus coding for different proteins. This is a major form of regulation in eukaryotic cells and also occurs in some prokaryotes.[2]:7.5[49]

Translation is the process by which a mature mRNA molecule is used as a template for synthesizing a new protein.[2]:6.2 Translation is carried out by ribosomes, large complexes of RNA and protein responsible for carrying out the chemical reactions to add new amino acids to a growing polypeptide chain by the formation of peptide bonds. The genetic code is read three nucleotides at a time, in units called codons, via interactions with specialized RNA molecules called transfer RNA (tRNA). Each tRNA has three unpaired bases known as the anticodon that are complementary to the codon it reads on the mRNA. The tRNA is also covalently attached to the amino acid specified by the complementary codon. When the tRNA binds to its complementary codon in an mRNA strand, the ribosome attaches its amino acid cargo to the new polypeptide chain, which is synthesized from amino terminus to carboxyl terminus. During and after synthesis, most new proteins must fold to their active three-dimensional structure before they can carry out their cellular functions.[2]:3
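
A heavily simplified sketch of the codon-by-codon reading described above (the table is only a small, correct subset of the 64-entry standard code, and tRNA chemistry and initiation are ignored):

# Tiny subset of the standard genetic code (codon -> amino acid); "*" marks a stop codon.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GCU": "Ala", "AAA": "Lys",
    "GAA": "Glu", "UGG": "Trp", "UAA": "*", "UAG": "*", "UGA": "*",
}

def translate(mrna: str) -> list:
    """Read the mRNA three bases at a time, stopping at a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "*":
            break
        peptide.append(amino_acid)
    return peptide

print(translate("AUGGCUAAAUAA"))  # ['Met', 'Ala', 'Lys']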

Genes are regulated so that they are expressed only when the product is needed, since expression draws on limited resources.[2]:7 A cell regulates its gene expression depending on its external environment (e.g. available nutrients, temperature and other stresses), its internal environment (e.g. cell division cycle, metabolism, infection status), and its specific role if in a multicellular organism. Gene expression can be regulated at any step: from transcriptional initiation, to RNA processing, to post-translational modification of the protein. The regulation of lactose metabolism genes in E. coli (lac operon) was the first such mechanism to be described in 1961.[50]

A typical protein-coding gene is first copied into RNA as an intermediate in the manufacture of the final protein product.[2]:6.1 In other cases, the RNA molecules are the actual functional products, as in the synthesis of ribosomal RNA and transfer RNA. Some RNAs known as ribozymes are capable of enzymatic function, and microRNA has a regulatory role. The DNA sequences from which such RNAs are transcribed are known as non-coding RNA genes.[46]

Some viruses store their entire genomes in the form of RNA, and contain no DNA at all.[51][52] Because they use RNA to store genes, their cellular hosts may synthesize their proteins as soon as they are infected and without the delay in waiting for transcription.[53] On the other hand, RNA retroviruses, such as HIV, require the reverse transcription of their genome from RNA into DNA before their proteins can be synthesized. RNA-mediated epigenetic inheritance has also been observed in plants and very rarely in animals.[54]

Organisms inherit their genes from their parents. Asexual organisms simply inherit a complete copy of their parent's genome. Sexual organisms have two copies of each chromosome because they inherit one complete set from each parent.[2]:1

According to Mendelian inheritance, variations in an organism's phenotype (observable physical and behavioral characteristics) are due in part to variations in its genotype (particular set of genes). Each gene specifies a particular trait, with different sequences of a gene (alleles) giving rise to different phenotypes. Most eukaryotic organisms (such as the pea plants Mendel worked on) have two alleles for each trait, one inherited from each parent.[2]:20

Alleles at a locus may be dominant or recessive; dominant alleles give rise to their corresponding phenotypes when paired with any other allele for the same trait, whereas recessive alleles give rise to their corresponding phenotype only when paired with another copy of the same allele. For example, if the allele specifying tall stems in pea plants is dominant over the allele specifying short stems, then pea plants that inherit one tall allele from one parent and one short allele from the other parent will also have tall stems. Mendel's work demonstrated that alleles assort independently in the production of gametes, or germ cells, ensuring variation in the next generation. Although Mendelian inheritance remains a good model for many traits determined by single genes (including a number of well-known genetic disorders) it does not include the physical processes of DNA replication and cell division.[55][56]

The growth, development, and reproduction of organisms relies on cell division, or the process by which a single cell divides into two usually identical daughter cells. This requires first making a duplicate copy of every gene in the genome in a process called DNA replication.[2]:5.2 The copies are made by specialized enzymes known as DNA polymerases, which "read" one strand of the double-helical DNA, known as the template strand, and synthesize a new complementary strand. Because the DNA double helix is held together by base pairing, the sequence of one strand completely specifies the sequence of its complement; hence only one strand needs to be read by the enzyme to produce a faithful copy. The process of DNA replication is semiconservative; that is, the copy of the genome inherited by each daughter cell contains one original and one newly synthesized strand of DNA.[2]:5.2
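
The semiconservative point can be restated in code: in the toy model below (which writes the two strands base-paired position by position, ignoring antiparallel orientation, primers and proofreading), each daughter duplex keeps one parental strand and gains one newly synthesized complement.

COMPLEMENT = str.maketrans("ATGC", "TACG")

def replicate(parent_top: str, parent_bottom: str):
    """Return two daughter duplexes, each with one old strand and one new strand."""
    new_bottom = parent_top.translate(COMPLEMENT)   # synthesized against the old top strand
    new_top = parent_bottom.translate(COMPLEMENT)   # synthesized against the old bottom strand
    return (parent_top, new_bottom), (new_top, parent_bottom)

daughter1, daughter2 = replicate("ATGCTT", "TACGAA")
print(daughter1)  # ('ATGCTT', 'TACGAA') -- old top strand plus new complement
print(daughter2)  # ('ATGCTT', 'TACGAA') -- new top strand plus old bottom strand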

The rate of DNA replication in living cells was first measured as the rate of phage T4 DNA elongation in phage-infected E. coli and found to be impressively rapid.[57] During the period of exponential DNA increase at 37 °C, the rate of elongation was 749 nucleotides per second.
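
For a rough sense of scale (the ~169 kbp T4 genome length used below is an added assumption, not a figure from the text, and real replication proceeds from more than one fork), a single fork running at that rate would copy such a genome in a few minutes:

genome_length_bp = 169_000   # approximate T4 genome size (assumed for illustration)
rate_nt_per_s = 749          # elongation rate quoted above
seconds = genome_length_bp / rate_nt_per_s
print(f"{seconds:.0f} s, about {seconds / 60:.1f} minutes")  # ~226 s, about 3.8 minutes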

After DNA replication is complete, the cell must physically separate the two copies of the genome and divide into two distinct membrane-bound cells.[2]:18.2 In prokaryotes (bacteria and archaea) this usually occurs via a relatively simple process called binary fission, in which each circular genome attaches to the cell membrane and is separated into the daughter cells as the membrane invaginates to split the cytoplasm into two membrane-bound portions. Binary fission is extremely fast compared to the rates of cell division in eukaryotes. Eukaryotic cell division is a more complex process known as the cell cycle; DNA replication occurs during a phase of this cycle known as S phase, whereas the process of segregating chromosomes and splitting the cytoplasm occurs during M phase.[2]:18.1

The duplication and transmission of genetic material from one generation of cells to the next is the basis for molecular inheritance, and the link between the classical and molecular pictures of genes. Organisms inherit the characteristics of their parents because the cells of the offspring contain copies of the genes in their parents' cells. In asexually reproducing organisms, the offspring will be a genetic copy or clone of the parent organism. In sexually reproducing organisms, a specialized form of cell division called meiosis produces cells called gametes or germ cells that are haploid, or contain only one copy of each gene.[2]:20.2 The gametes produced by females are called eggs or ova, and those produced by males are called sperm. Two gametes fuse to form a diploid fertilized egg, a single cell that has two sets of genes, with one copy of each gene from the mother and one from the father.[2]:20

During the process of meiotic cell division, an event called genetic recombination or crossing-over can sometimes occur, in which a length of DNA on one chromatid is swapped with a length of DNA on the corresponding homologous non-sister chromatid. This can result in reassortment of otherwise linked alleles.[2]:5.5 The Mendelian principle of independent assortment asserts that each of a parent's two genes for each trait will sort independently into gametes; which allele an organism inherits for one trait is unrelated to which allele it inherits for another trait. This is in fact only true for genes that do not reside on the same chromosome, or are located very far from one another on the same chromosome. The closer two genes lie on the same chromosome, the more closely they will be associated in gametes and the more often they will appear together; genes that are very close are essentially never separated because it is extremely unlikely that a crossover point will occur between them. This is known as genetic linkage.[58]

DNA replication is for the most part extremely accurate; however, errors (mutations) do occur.[2]:7.6 The error rate in eukaryotic cells can be as low as 10^-8 per nucleotide per replication,[59][60] whereas for some RNA viruses it can be as high as 10^-3.[61] This means that each generation, each human genome accumulates 1-2 new mutations.[61] Small mutations can be caused by DNA replication and the aftermath of DNA damage and include point mutations, in which a single base is altered, and frameshift mutations, in which a single base is inserted or deleted. Either of these mutations can change the gene by missense (change a codon to encode a different amino acid) or nonsense (a premature stop codon).[62] Larger mutations can be caused by errors in recombination that cause chromosomal abnormalities, including the duplication, deletion, rearrangement or inversion of large sections of a chromosome. Additionally, DNA repair mechanisms can introduce mutational errors when repairing physical damage to the molecule. The repair, even with mutation, is more important to survival than restoring an exact copy, for example when repairing double-strand breaks.[2]:5.4
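
The difference between a point mutation and a frameshift can be seen in a toy example (the sequences are invented for illustration): substituting one base changes at most one codon, whereas inserting one base shifts every downstream codon.

def codons(seq: str) -> list:
    """Split a coding sequence into consecutive three-base codons."""
    return [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]

original   = "ATGGCTAAGTTT"
point      = "ATGGCAAAGTTT"    # single substitution (T -> A at the sixth base)
frameshift = "ATGGCCTAAGTTT"   # single C inserted after the fifth base

print(codons(original))    # ['ATG', 'GCT', 'AAG', 'TTT']
print(codons(point))       # ['ATG', 'GCA', 'AAG', 'TTT']  -- only one codon altered
print(codons(frameshift))  # ['ATG', 'GCC', 'TAA', 'GTT']  -- every downstream codon shifted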

When multiple different alleles for a gene are present in a species's population it is called polymorphic. Most different alleles are functionally equivalent, however some alleles can give rise to different phenotypic traits. A gene's most common allele is called the wild type, and rare alleles are called mutants. The genetic variation in relative frequencies of different alleles in a population is due to both natural selection and genetic drift.[63] The wild-type allele is not necessarily the ancestor of less common alleles, nor is it necessarily fitter.

Most mutations within genes are neutral, having no effect on the organism's phenotype (silent mutations). Some mutations do not change the amino acid sequence because multiple codons encode the same amino acid (synonymous mutations). Other mutations can be neutral if they lead to amino acid sequence changes, but the protein still functions similarly with the new amino acid (e.g. conservative mutations). Many mutations, however, are deleterious or even lethal, and are removed from populations by natural selection. Genetic disorders are the result of deleterious mutations and can be due to spontaneous mutation in the affected individual, or can be inherited. Finally, a small fraction of mutations are beneficial, improving the organism's fitness and are extremely important for evolution, since their directional selection leads to adaptive evolution.[2]:7.6

Genes with a most recent common ancestor, and thus a shared evolutionary ancestry, are known as homologs.[64] These genes appear either from gene duplication within an organism's genome, where they are known as paralogous genes, or are the result of divergence of the genes after a speciation event, where they are known as orthologous genes,[2]:7.6 and often perform the same or similar functions in related organisms. It is often assumed that the functions of orthologous genes are more similar than those of paralogous genes, although the difference is minimal.[65][66]

The relationship between genes can be measured by comparing the sequence alignment of their DNA.[2]:7.6 The degree of sequence similarity between homologous genes is called conserved sequence. Most changes to a gene's sequence do not affect its function and so genes accumulate mutations over time by neutral molecular evolution. Additionally, any selection on a gene will cause its sequence to diverge at a different rate. Genes under stabilizing selection are constrained and so change more slowly whereas genes under directional selection change sequence more rapidly.[67] The sequence differences between genes can be used for phylogenetic analyses to study how those genes have evolved and how the organisms they come from are related.[68][69]
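
A minimal way to quantify the degree of sequence similarity mentioned above is percent identity over an alignment. The sketch below assumes two already-aligned, equal-length sequences and simply counts matching positions; it is not a real alignment algorithm such as Needleman-Wunsch.

def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Percentage of aligned positions at which the two sequences agree."""
    assert len(aligned_a) == len(aligned_b), "sequences must already be aligned"
    matches = sum(x == y for x, y in zip(aligned_a, aligned_b))
    return 100.0 * matches / len(aligned_a)

print(percent_identity("ATGGCTAAGTTT", "ATGGCAAAGTTC"))  # ~83.3 (10 of 12 positions identical)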

The most common source of new genes in eukaryotic lineages is gene duplication, which creates copy number variation of an existing gene in the genome.[70][71] The resulting genes (paralogs) may then diverge in sequence and in function. Sets of genes formed in this way comprise a gene family. Gene duplications and losses within a family are common and represent a major source of evolutionary biodiversity.[72] Sometimes, gene duplication may result in a nonfunctional copy of a gene, or a functional copy may be subject to mutations that result in loss of function; such nonfunctional genes are called pseudogenes.[2]:7.6

"Orphan" genes, whose sequence shows no similarity to existing genes, are less common than gene duplicates. Estimates of the number of genes with no homologs outside humans range from 18[73] to 60.[74] Two primary sources of orphan protein-coding genes are gene duplication followed by extremely rapid sequence change, such that the original relationship is undetectable by sequence comparisons, and de novo conversion of a previously non-coding sequence into a protein-coding gene.[75] De novo genes are typically shorter and simpler in structure than most eukaryotic genes, with few if any introns.[70] Over long evolutionary time periods, de novo gene birth may be responsible for a significant fraction of taxonomically-restricted gene families.[76]

Horizontal gene transfer refers to the transfer of genetic material through a mechanism other than reproduction. This mechanism is a common source of new genes in prokaryotes, sometimes thought to contribute more to genetic variation than gene duplication.[77] It is a common means of spreading antibiotic resistance, virulence, and adaptive metabolic functions.[30][78] Although horizontal gene transfer is rare in eukaryotes, likely examples have been identified of protist and alga genomes containing genes of bacterial origin.[79][80]

The genome is the total genetic material of an organism and includes both the genes and non-coding sequences.[81]

The genome size and the number of genes it encodes vary widely between organisms. The smallest genomes occur in viruses (which can have as few as 2 protein-coding genes),[90] and viroids (which act as a single non-coding RNA gene).[91] Conversely, plants can have extremely large genomes,[92] with rice containing >46,000 protein-coding genes.[93] The total number of protein-coding genes (the Earth's proteome) is estimated to be 5 million sequences.[94]

Although the number of base pairs of DNA in the human genome has been known since the 1960s, the estimated number of genes has changed over time as definitions of genes, and methods of detecting them, have been refined. Initial theoretical predictions of the number of human genes were as high as 2,000,000.[95] Early experimental measures indicated there to be 50,000-100,000 transcribed genes (expressed sequence tags).[96] Subsequently, the sequencing in the Human Genome Project indicated that many of these transcripts were alternative variants of the same genes, and the total number of protein-coding genes was revised down to ~20,000,[89] with 13 genes encoded on the mitochondrial genome.[87] Of the human genome, only 1-2% consists of protein-coding genes,[97] with the remainder being 'noncoding' DNA such as introns, retrotransposons, and noncoding RNAs.[97][98] Every multicellular organism has all its genes in each cell of its body, but not every gene functions in every cell.

Essential genes are the set of genes thought to be critical for an organism's survival.[100] This definition assumes the abundant availability of all relevant nutrients and the absence of environmental stress. Only a small portion of an organism's genes are essential. In bacteria, an estimated 250-400 genes are essential for Escherichia coli and Bacillus subtilis, which is less than 10% of their genes.[101][102][103] Half of these genes are orthologs in both organisms and are largely involved in protein synthesis.[103] In the budding yeast Saccharomyces cerevisiae the number of essential genes is slightly higher, at 1000 genes (~20% of their genes).[104] Although the number is more difficult to measure in higher eukaryotes, mice and humans are estimated to have around 2000 essential genes (~10% of their genes).[105] The synthetic organism Syn 3 has a minimal genome of 473 essential and quasi-essential genes (the quasi-essential genes being necessary for fast growth), although 149 of them have unknown function.[99]

Essential genes include housekeeping genes (critical for basic cell functions)[106] as well as genes that are expressed at different times in the organism's development or life cycle.[107] Housekeeping genes are used as experimental controls when analysing gene expression, since they are constitutively expressed at a relatively constant level.

Gene nomenclature has been established by the HUGO Gene Nomenclature Committee (HGNC) for each known human gene in the form of an approved gene name and symbol (short-form abbreviation), which can be accessed through a database maintained by HGNC. Symbols are chosen to be unique, and each gene has only one symbol (although approved symbols sometimes change). Symbols are preferably kept consistent with other members of a gene family and with homologs in other species, particularly the mouse due to its role as a common model organism.[108]

Genetic engineering is the modification of an organism's genome through biotechnology. Since the 1970s, a variety of techniques have been developed to specifically add, remove and edit genes in an organism.[109] Recently developed genome engineering techniques use engineered nuclease enzymes to create targeted DNA repair in a chromosome to either disrupt or edit a gene when the break is repaired.[110][111][112][113] The related term synthetic biology is sometimes used to refer to extensive genetic engineering of an organism.[114]

Genetic engineering is now a routine research tool with model organisms. For example, genes are easily added to bacteria[115] and lineages of knockout mice with a specific gene's function disrupted are used to investigate that gene's function.[116][117] Many organisms have been genetically modified for applications in agriculture, industrial biotechnology, and medicine.

For multicellular organisms, typically the embryo is engineered which grows into the adult genetically modified organism.[118] However, the genomes of cells in an adult organism can be edited using gene therapy techniques to treat genetic diseases.

Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walter P (2002). Molecular Biology of the Cell (Fourth ed.). New York: Garland Science. ISBN 978-0-8153-3218-3. A molecular biology textbook available free online through NCBI Bookshelf.

More here:
Gene - Wikipedia

Posted in Gene Medicine | Comments Off on Gene – Wikipedia

The Ron Paul Institute for Peace and Prosperity : Education …

Posted: at 9:56 pm

Maryland Governor Larry Hogan recently signed an executive order forbidding Maryland public schools from beginning classes before Labor Day. Governor Hogan's executive order benefits businesses in Maryland's coastal areas that lose school-aged summer employees and business from Maryland families when schools start in August. However, as Governor Hogan's critics have pointed out, some Maryland school districts, as well as Maryland schoolchildren, benefit from an earlier start to the school year.

Governor Hogan's executive order is the latest example of how centralized government control of education leaves many students behind. A centrally planned education system can no more meet the unique needs of every child than a centrally planned economic system can meet the unique needs of every worker and consumer.

Centralizing education at the state or, worse, federal level inevitably leads to political conflicts over issues ranging from whether students should be allowed to pray on school grounds, to what should be the curriculum, to what food should be served in the cafeteria, to who should be allowed to use which bathroom.

The centralization and politicization of education is rooted in the idea that education is a right that must be provided by the government, instead of a good that individuals should obtain in the market. Separating school from state would empower parents to find an education system that meets the needs of their children instead of using the political process to force their idea of a good education on all children.

While many politicians praise local and parental control of education, the fact is both major parties embrace federal control of education. The two sides only differ on the details. Liberals who oppose the testing mandates of No Child Left Behind enthusiastically backed President Clinton's national testing proposals. They also back the Obama administration's expansion of federal interference in the classroom via Common Core.

Similarly, conservatives who (correctly) not only opposed Clinton's initiatives but also called for the abolition of the Department of Education enthusiastically supported No Child Left Behind. Even most conservatives who oppose Common Core, federal bathroom and cafeteria mandates, and other federal education policies support reforming, instead of eliminating, the Department of Education.

Politicians will not voluntarily relinquish control over education to parents. Therefore, parents and other concerned citizens should take a page from the UK and work to Ed-Exit government-controlled education. Parents and other concerned citizens should pressure Congress to finally shut down the Department of Education and return the money to American families. They also must pressure state governments and local school boards to reject federal mandates, even if it means forgoing federal funding.

Parents should also explore education alternatives, such as private, charter, and religious schools, as well as homeschooling. Homeschooling is the ultimate form of Ed-Exit. Homeschooling parents have the freedom to shape every aspect of education from the curriculum to the length of the school day to what their children have for lunch to who can and cannot use the bathroom to fit their child's unique needs.

Parents interested in providing their children with a quality education emphasizing the ideas of liberty should try out my homeschooling curriculum. The curriculum provides students with a well-rounded education that includes courses in personal finance and public speaking. The government and history sections of the curriculum emphasize Austrian economics, libertarian political theory, and the history of liberty. However, unlike government schools, my curriculum never puts ideological indoctrination ahead of education.

Parents interested in Ed-Exiting from government-run schools can learn more about my curriculum at ronpaulcurriculum.com.

See more here:
The Ron Paul Institute for Peace and Prosperity : Education ...

Posted in Ron Paul | Comments Off on The Ron Paul Institute for Peace and Prosperity : Education …

Libertarianism (metaphysics) – Wikipedia

Posted: at 9:55 pm

Libertarianism is one of the main philosophical positions related to the problems of free will and determinism, which are part of the larger domain of metaphysics.[1] In particular, libertarianism, which is an incompatibilist position,[2][3] argues that free will is logically incompatible with a deterministic universe and that agents have free will, and that, therefore, determinism is false.[4] Although compatibilism, the view that determinism and free will are in fact compatible, is the most popular position on free will amongst professional philosophers,[5] metaphysical libertarianism is discussed, though not necessarily endorsed, by several philosophers, such as Peter van Inwagen, Robert Kane, Robert Nozick,[6] Carl Ginet, Harry Frankfurt, E.J. Lowe, Alfred Mele, Roderick Chisholm, Daniel Dennett,[7] and Galen Strawson.[8]

The first recorded use of the term "libertarianism" was in 1789 by William Belsham in a discussion of free will and in opposition to "necessitarian" (or determinist) views.[9][10]

Metaphysical libertarianism is one philosophical viewpoint falling under incompatibilism. Libertarianism holds to a concept of free will that requires the agent to be able to take more than one possible course of action under a given set of circumstances.

Accounts of libertarianism subdivide into non-physical theories and physical or naturalistic theories. Non-physical theories hold that the events in the brain that lead to the performance of actions do not have an entirely physical explanation, and consequently the world is not closed under physics. Such interactionist dualists believe that some non-physical mind, will, or soul overrides physical causality.

Explanations of libertarianism that do not involve dispensing with physicalism require physical indeterminism, such as probabilistic subatomic particle behavior, a theory unknown to many of the early writers on free will. Physical determinism, under the assumption of physicalism, implies there is only one possible future and is therefore not compatible with libertarian free will. Some libertarian explanations involve invoking panpsychism, the theory that a quality of mind is associated with all particles and pervades the entire universe, in both animate and inanimate entities. Other approaches do not require free will to be a fundamental constituent of the universe; ordinary randomness is appealed to as supplying the "elbow room" believed to be necessary by libertarians.

Free volition is regarded as a particular kind of complex, high-level process with an element of indeterminism. An example of this kind of approach has been developed by Robert Kane,[11] where he hypothesises that,

In each case, the indeterminism is functioning as a hindrance or obstacle to her realizing one of her purposes, a hindrance or obstacle in the form of resistance within her will which has to be overcome by effort.

At the time C. S. Lewis wrote Miracles,[12] quantum mechanics (and physical indeterminism) was only in the initial stages of acceptance, but still Lewis stated the logical possibility that, if the physical world was proved to be indeterministic, this would provide an entry (interaction) point into the traditionally viewed closed system, where a scientifically described physically probable/improbable event could be philosophically described as an action of a non-physical entity on physical reality. He states, however, that none of the arguments in his book will rely on this.[citation needed]

Nozick puts forward an indeterministic theory of free will in Philosophical Explanations.[6]

When human beings become agents through reflexive self-awareness, they express their agency by having reasons for acting, to which they assign weights. Choosing the dimensions of one's identity is a special case, in which the assigning of weight to a dimension is partly self-constitutive. But all acting for reasons is constitutive of the self in a broader sense, namely, by its shaping one's character and personality in a manner analogous to the shaping that law undergoes through the precedent set by earlier court decisions. Just as a judge does not merely apply the law but to some degree makes it through judicial discretion, so too a person does not merely discover weights but assigns them; one not only weighs reasons but also weights them. Set in train is a process of building a framework for future decisions that we are tentatively committed to.

The lifelong process of self-definition in this broader sense is construed indeterministically by Nozick. The weighting is "up to us" in the sense that it is undetermined by antecedent causal factors, even though subsequent action is fully caused by the reasons one has accepted. He compares assigning weights in this indeterministic sense to "the currently orthodox interpretation of quantum mechanics", following von Neumann in understanding a quantum mechanical system as in a superposition or probability mixture of states, which changes continuously in accordance with quantum mechanical equations of motion and discontinuously via measurement or observation that "collapses the wave packet" from a superposition to a particular state. Analogously, a person before decision has reasons without fixed weights: he is in a superposition of weights. The process of decision reduces the superposition to a particular state that causes action.

Kane is one of the leading contemporary philosophers on free will.[13][14][verification needed] Advocating what is termed within philosophical circles "libertarian freedom", Kane argues that "(1) the existence of alternative possibilities (or the agent's power to do otherwise) is a necessary condition for acting freely, and that (2) determinism is not compatible with alternative possibilities (it precludes the power to do otherwise)".[15] It is important to note that the crux of Kane's position is grounded not in a defense of alternative possibilities (AP) but in the notion of what Kane refers to as ultimate responsibility (UR). Thus, AP is a necessary but insufficient criterion for free will.[citation needed] It is necessary that there be (metaphysically) real alternatives for our actions, but that is not enough; our actions could be random without being in our control. The control is found in "ultimate responsibility".

Ultimate responsibility entails that agents must be the ultimate creators (or originators) and sustainers of their own ends and purposes. There must be more than one way for a person's life to turn out (AP). More importantly, whichever way it turns out must be based in the person's willing actions. As Kane defines it,

UR: An agent is ultimately responsible for some (event or state) E's occurring only if (R) the agent is personally responsible for E's occurring in a sense which entails that something the agent voluntarily (or willingly) did or omitted either was, or causally contributed to, E's occurrence and made a difference to whether or not E occurred; and (U) for every X and Y (where X and Y represent occurrences of events and/or states) if the agent is personally responsible for X and if Y is an arche (sufficient condition, cause or motive) for X, then the agent must also be personally responsible for Y.

In short, "an agent must be responsible for anything that is a sufficient reason (condition, cause or motive) for the action's occurring."[16]

What allows for ultimacy of creation in Kane's picture are what he refers to as "self-forming actions" or SFAs, those moments of indecision during which people experience conflicting wills. These SFAs are the undetermined, regress-stopping voluntary actions or refrainings in the life histories of agents that are required for UR. UR does not require that every act done of our own free will be undetermined and thus that, for every act or choice, we could have done otherwise; it requires only that certain of our choices and actions be undetermined (and thus that we could have done otherwise), namely SFAs. These form our character or nature; they inform our future choices, reasons and motivations in action. If a person has had the opportunity to make a character-forming decision (SFA), they are responsible for the actions that are a result of their character.

Randolph Clarke objects that Kane's depiction of free will is not truly libertarian but rather a form of compatibilism.[citation needed] The objection asserts that although the outcome of an SFA is not determined, one's history up to the event is; so the fact that an SFA will occur is also determined. The outcome of the SFA is based on chance,[citation needed] and from that point on one's life is determined. This kind of freedom, says Clarke, is no different than the kind of freedom argued for by compatibilists, who assert that even though our actions are determined, they are free because they are in accordance with our own wills, much like the outcome of an SFA.[citation needed]

Kane responds that the difference between causal indeterminism and compatibilism is "ultimate control, the originative control exercised by agents when it is 'up to them' which of a set of possible choices or actions will now occur, and up to no one and nothing else over which the agents themselves do not also have control".[17] UR assures that the sufficient conditions for one's actions do not lie before one's own birth.

Galen Strawson holds that there is a fundamental sense in which free will is impossible, whether determinism is true or not. He argues for this position with what he calls his "basic argument", which aims to show that no-one is ever ultimately morally responsible for their actions, and hence that no one has free will in the sense that usually concerns us.

In his book defending compatibilism, Freedom Evolves, Daniel Dennett spends a chapter criticising Kane's theory.[7] Kane believes freedom is based on certain rare and exceptional events, which he calls self-forming actions or SFAs. Dennett notes that there is no guarantee such an event will occur in an individual's life. If it does not, the individual does not in fact have free will at all, according to Kane. Yet they will seem the same as anyone else. Dennett finds an essentially undetectable notion of free will to be incredible.

Frankfurt counterexamples[18] (also known as Frankfurt cases or Frankfurt-style cases) were presented by philosopher Harry Frankfurt in 1969 as counterexamples to the "principle of alternative possibilities" or PAP, which holds that an agent is morally responsible for an action only if they have the option of free will (i.e. they could have done otherwise).

The principle of alternate possibilities forms part of an influential argument for the incompatibility of responsibility and causal determinism: (1) a person is morally responsible for what they have done only if they could have done otherwise; (2) if determinism is true, then no one could have done otherwise; (3) therefore, if determinism is true, no one is morally responsible for what they have done.

Traditionally, compatibilists (defenders of the compatibility of moral responsibility and determinism, like Alfred Ayer and Walter Terence Stace) try to reject premise two, arguing that, properly understood, free will is not incompatible with determinism. According to the traditional analysis of free will, an agent is free to do otherwise when they would have done otherwise had they wanted to do otherwise.[19] Agents may possess free will, according to the conditional analysis, even if determinism is true.

From the PAP definition "a person is morally responsible for what they have done only if they could have done otherwise",[20] Frankfurt infers that a person is not morally responsible for what they have done if they could not have done otherwise, a point with which he takes issue: our theoretical ability to do otherwise, he says, does not necessarily make it possible for us to do otherwise.

Frankfurt's examples are significant because they suggest an alternative way to defend compatibilism, in particular by rejecting the first premise of the argument. According to this view, responsibility is compatible with determinism because responsibility does not require the freedom to do otherwise.

Frankfurt's examples involve agents who are intuitively responsible for their behavior even though they lack the freedom to act otherwise. Here is a typical case:

Donald is a Democrat and is likely to vote for the Democrats; in fact, only in one particular circumstance will he not: that is, if he thinks about the prospects of immediate American defeat in Iraq just prior to voting. Ms. White, a representative of the Democratic Party, wants to ensure that Donald votes Democratic, so she secretly plants a device in Donald's head that, if activated, will force him to vote Democratic. Not wishing to reveal her presence unnecessarily, Ms White plans to activate the device only if Donald thinks about the Iraq War prior to voting. As things happen, Donald does not think about the Democrats' promise to ensure defeat in Iraq prior to voting, so Ms White thus sees no reason to activate the device, and Donald votes Democratic of his own accord. Apparently, Donald is responsible for voting Democratic in spite of the fact that, owing to Ms. White's device, he lacks freedom to do otherwise.

If Frankfurt is correct in suggesting both that Donald is morally responsible for voting Democratic and that he is not free to do otherwise, moral responsibility, in general, does not require that an agent have the freedom to do otherwise (that is, the principle of alternate possibilities is false). Thus, even if causal determinism is true, and even if determinism removes the freedom to do otherwise, there is no reason to doubt that people can still be morally responsible for their behavior.

Having rebutted the principle of alternate possibilities, Frankfurt suggests that it be revised to reflect the fact that coercion excuses an agent only when the agent acts as they do solely because of the coercion. The best definition, by his reckoning, is this: "[A] person is not morally responsible for what they have done if they did it only because they could not have done otherwise."[21]

Excerpt from:
Libertarianism (metaphysics) - Wikipedia

Posted in Libertarianism | Comments Off on Libertarianism (metaphysics) – Wikipedia

Immortality: When (soon) and How That’s Really Possible

Posted: at 9:55 pm

Last Updated: 20 April 2016 Author: Glyn Taylor

Indefinite life extension will be possible within 30 years! Quite a "wow, really?" prediction! This page is updated regularly with the latest outlook towards our potentially immortal future. Please comment with your thoughts and any new information you would like added. Like us on Facebook to keep updated, coz that would be awesome!

Want to live forever? Vote in our poll.

"Twenty years ago the idea of postponing aging, let alone reversing it, was weird and off-the-wall. Today there are good reasons for thinking it is fundamentally possible." – Michael R. Rose

Within 30 Years? We instinctively fail to see technological growth as being exponential. If you do not understand the concept of exponential growth, then chances are you do not think immortality will ever be possible, let alone understand that it could be achieved within 30 years. To find out more, read our explanation of exponential growth.
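
To make the point concrete, here is a minimal illustrative sketch in Python comparing linear and exponential growth over 30 years. The two-year doubling period and the one-unit-per-year linear step are assumptions chosen purely for illustration, not measured figures.

```python
# Minimal illustration: linear vs. exponential growth over 30 years.
# The 2-year doubling period and 1-unit-per-year linear step are
# assumptions for illustration only, not measured figures.

YEARS = 30
DOUBLING_PERIOD = 2   # assumed: capability doubles every 2 years
LINEAR_STEP = 1       # assumed: linear progress adds 1 unit per year

linear = 1.0
exponential = 1.0
for year in range(1, YEARS + 1):
    linear += LINEAR_STEP
    exponential *= 2 ** (1 / DOUBLING_PERIOD)
    if year % 10 == 0:
        print(f"Year {year:2d}: linear = {linear:5.1f}, exponential = {exponential:8.1f}")

# After 30 years the linear curve has grown about 31x,
# while the exponential curve has grown 2**15 = 32,768x.
```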

The Experts Who Agree

Don't take our word for it, bring in the experts! Expert #1: Google. Larry Page and Sergey Brin, the Google co-founders, support the theories of expert #2: Ray Kurzweil, who is the most popularised living futurist, as well as one of the leaders in the artificial intelligence industry and a director of engineering at Google. He asserts that immortality could be achieved in as little as 20 years.

Moving from the technological realm to the world of bioengineering, we have expert #3: Aubrey de Grey, chief science officer at one of the most famous anti-ageing research foundations, the SENS Research Foundation (SRF). Aubrey de Grey, who was born in 1963, believes that there is a 50/50 chance he will be alive when humanity reaches immortality. He is one of the leading faces in the fight against ageing, and is often invited to present his anti-ageing theories at universities, TED Talks, think tanks, and news outlets.

Another face in the fight against ageing is expert #4: Jason Silva, a performance philosopher. To understand the brilliance of how he thinks, you must see his performances on his current YouTube channel, Shots of Awe. He supports the theories of both Ray Kurzweil and Aubrey de Grey, and describes immortality as the goal of humanity.

The Researchers

Since 2010, progress in the life extension industry has skyrocketed, more so in Russia than anywhere else. We have seen the formation of many high-profile research companies, departments, foundations, institutes, and initiatives with the specific aim of radically extending life.

Ageing is a complex, multi-causal, genetically determined biological process, so researching how to combat it requires the merger of many related disciplines. View here to see just how complicated it is even to track the biomarkers of ageing. The following examples are only of groups that have the specific aim of life extension. Those specialising in sub-disciplines (but contributing to anti-ageing) are not listed.

The SRF aims to help build the industry that will cure the diseases of ageing. With this aim, they supply funding for the universities that are contributing to anti-ageing research. In addition, they run their own research centre, which brings together the knowledge of all anti-ageing sub-disciplines to gain an overarching perspective. It is headed by Aubrey de Grey himself. Here is the SENS Foundation Annual Report 2015.

In 2013 Google helped launch Calico, an independent research and development biotech company, with the aim of combating ageing. Its CEO, Arthur Levinson, is the chairman of Apple and Genentech. In 2015 it announced it is working with AncestryDNA, who can provide access to a unique combination of resources that will enable Calico to develop potentially ground-breaking therapeutic solutions. It is also working with a biopharmaceutical company called AbbVie, who will provide scientific and clinical development support and commercial expertise to allow therapies to enter experimental phases.

The 2045 Initiative makes a lot of headlines. It is taking a different approach: its aim is to create technologies that will enable the transfer of an individual's consciousness to a more advanced, non-biological, immortal carrier.

Even More Researchers

The Buck Institute, the Methuselah Foundation, the Longevity Alliance, Gero, Wake Forest, and Human Longevity, Inc.

What is Immortality?

Some think of it as complete immunity from death: the ability to get shot 200 times and then spit the bullets out. Maybe that will be possible one day, but it won't be our first version of immortality. The immortality we mean here is the ability to remain at a healthy age, indefinitely. Ideally this age will be around 21, when our bodies are fully formed but have not yet begun to decline.

Mind Uploading is NOT Immortality

The 2045 Initiative is aiming to achieve immortality by uploading our brain data out of our mortal biological minds and into an artificial one. Even if they manage to create a storage unit capable of working exactly like our own mind, all they are doing is copying and pasting. The copied version might be you in the moment of creation, but not from the next moment onwards. After seeing this copy and talking to it, would you then allow yourself to be turned off and replaced by it; to be killed? Well nah, I wouldn't. That isn't immortality, it's reproduction.

Immortality is the indefinite maintenance of our biological minds.

Why Live Forever?

When you read an immortality-related article on a mainstream news website, half of the people in the comments section seem to hate the idea. Usually the negativity towards immortality comes from those who don't understand what possibilities are waiting for us in the future; they think of an immortal life as boring. I wrote an article called Why you will want to be immortal to argue against that point of view. Another big reason people do not want to live forever is that they believe they will miss their lost loved ones too much. In response to that, I wrote How everyone who has ever died could be revived in the future.

Mortality is primitive; it is just a problem for humanity to overcome. Immortality is a natural development in the evolutionary process of life.

How we will Live Forever

Ray Kurzweil has every intention of reaching immortality. To do so, he has devised a personal plan involving three bridges. His plan is, of course, dependent on science achieving immortality in around 20-30 years, so the current priority is surviving for at least another 20 years.

Bridge 1 – Be Healthy

The first bridge is all about doing everything possible to extend your life with our current knowledge of ageing. The scientifically uncontroversial methods include following a low-calorie (below 1,500 calories), low-carb (below 80 grams) diet, getting plenty of exercise, and getting lots of sleep. Other methods raise eyebrows, such as drinking 10 glasses of highly alkaline water a day to rid the body of toxins, and having weekly intravenous infusions of vitamins, chelating agents and various other pharmaceuticals. Many other methods of ridding the body of toxins can be found through a Google search. We have a guide on how to get enough antioxidants to extend life.

Bridge 2 – Biotechnology

The next bridge takes advantage of the accelerating biotechnology revolution. This will begin to take us beyond simply staying healthy and into the realm of enhancements. Eventually, biotechnology will cure ageing and even allow us to turn back our body clocks; on the journey there, though, discoveries will be made that enhance our health and extend our lifespans. We will see the increasing use of gene therapy, stem cells, therapeutic cloning, and replacement cells, tissues and organs.

Bridge 3 – Nanotechnology & Artificial Intelligence

These technologies will completely revolutionise everything we know: how we live, why we live, and, yes, how long we live. For more information about the future that these technologies will create, read our explanation of the technological singularity.

Nano-sized robotic devices, tiny even compared to a single blood cell, will become commonplace during the 2020s. It is predicted that these devices will progress to being used within the body to maintain perfect health and youth. The devices are already being used for diagnostic purposes. They will provide constant monitoring and notify you if you begin to develop any health problems. For example, they will detect cancer at its very first sign of growth, notify you, and latch on to the cancerous cells, tagging them for immediate removal. In the next few decades they will not only diagnose but also treat illnesses. For more information, read our guide to the nanotechnology revolution.

And we haven't even mentioned artificial intelligence yet. Eventually, through developments in nanotechnology, neural science, artificial brain building, and artificial intelligence, enough understanding will exist to enable our minds to be integrated into other storage media; we will have the ability to upload our minds (with the aid of nanotechnology), which is also referred to as digital immortality. Alternatively, we could still operate from our original brains but outsource their cognition. For example, we could control a robot instead of our own body, or we could plug in to a virtual environment. Our intelligence levels would be significantly increased, we would communicate telepathically, and we would access the internet with our thoughts. The changes that such technology will bring to humanity are incomprehensible. For more information about this future, check out our information page about transhumanism.

Video Break! Below you can watch Ray Kurzweil explain more about bridge 3.

What about Existential Risk and Overpopulation?

So yeah, immortality would be great. But who's to say we will even get there without destroying each other first? The upcoming security risks related to emerging technologies are immense. We have written an article about the 5 emerging technologies that could destroy the world.

And if we do survive to reach immortality, then what about overpopulation? We will have problems to face with regard to overpopulation and the need for resources. These problems, though, can be overcome with new technologies, and they will not interrupt humanity's transition to immortality. We have written a detailed article explaining why immortality won't cause overpopulation.

Security can Prevail

Let's end on a positive note. Along with advanced weaponry comes advanced defence. For example, with molecular manufacturing and early forms of non-conscious AI, a system of surveillance could be established to defend against the creation of illegal weaponry. This system would not encroach on privacy, because humans would only be notified of your actions if the system flagged them as suspicious. The only time your privacy would be invaded in an optimistic (non-dictatorship) future is when you are acting illegally.

Along with these advances may come a rising willingness to cooperate globally on mutually beneficial aims such as self-sufficiency, immortality and space exploration; the threat of mutual destruction could become so great that nations have no option but to come together and tackle security problems collectively. On the subject of religious fundamentalism, innovations such as immortality and the creation of god-like artificial intelligence may make religions more open-minded about the potential for science to explain the truth of our creation, diluting fundamentalism and increasing multiculturalism, secularisation and cooperation.

Have more to add?

Know something important that should be added to this article? Please comment and let us know. In the future we will be allowing users of the website to write their own articles. Please contact us for more information.

What do you think?

Would you like to live forever? Please comment below.

When commenting, please note that this page is continually being updated with the latest news.

More:
Immortality: When (soon) and How That's Really Possible

Posted in Immortality Medicine | Comments Off on Immortality: When (soon) and How That’s Really Possible

Post-Human Republic (PHR) – Hawk Wargames

Posted: at 9:55 pm

A tiny portion of humanity turned its back on mankind in the waning days of the last Golden Age. Over one and a half centuries later, the PHR has emerged from the shadows as an unrecognisable civilisation, its people irrevocably changed. They are no longer simple human beings; they are post-humans: cyborgs.

A society no more than three billion strong, the PHR is a nation of elites, each individual more than a match for several lesser mortals. With remarkable speed, they have made technological advancements surpassing those of the UCM. Since its fiery birth, the PHR has been guided by the enigmatic White Sphere, a mysterious object of immense power. It is treated by the people of the PHR with a reverence bordering on worship.

See more information >


Link:
Post-Human Republic (PHR) - Hawk Wargames

Posted in Post Human | Comments Off on Post-Human Republic (PHR) – Hawk Wargames