
Victimless Crimes – Liberal Democrats

Posted: October 19, 2016 at 4:16 am

The LDP does not generally support the criminalisation of victimless crimes and seeks to reduce the intrusion of government into these areas.

Victimless crime is a term used to refer to behaviour that is illegal but does not violate or threaten the rights of anyone else. It can include situations where an individual acts alone as well as consensual acts in which two or more persons agree to commit a criminal offence in which no other person is involved.

The underlying issue in all situations of victimless crime is the same. Society has created a formal framework of laws to prohibit types of conduct thought to be against the public interest. Laws proscribing homicide, assault and rape are common to most cultures. Thus, when the supposed victim freely consents to be the victim in one of these crimes, the question is whether the state should make an exception to the law for this situation.

Take assisted suicide as an example. If one person intentionally takes the life of another, this is usually murder. If the motive for this is to collect the inheritance, society has no difficulty in ignoring the motive and convicting the killer. But if the motive is to relieve the suffering of the victim by providing a clean death that would otherwise be denied, can society so quickly reject the motive?

It is a case of balancing the harms. On the one hand, society could impose pain and suffering on the victim by forcing him or her to endure a long decline into death. On the other, society could permit a system for terminating life under controlled circumstances, so that the victim's wishes could be respected without exposing others to the criminal system for assisting in realising those wishes.

But victimless crimes are not always so weighty. Some examples of low-level victimless activities that may be criminalised include:

Victimless crimes usually regarded more seriously include:

This includes the elderly and seriously ill as well as less obvious scenarios: for example, helping someone, such as a celebrity facing exposure for socially unacceptable behaviour, who seeks a gun or other means to end life; a driver trapped in a burning tanker full of gasoline who begs a passing armed police officer to shoot him rather than let him burn to death; or a person who suffers traumatic injury in a road accident and wishes to avoid the humiliation and pain of a lingering slow death.

These situations are distinguishable from soliciting the cessation of life-sustaining treatment so that an injured or ill person may die a natural death, or leaving instructions not to be resuscitated.

Consideration of victimless crime involving more than one participant needs to take account of whether all the participants are capable of giving genuine consent. This may not be the case if one or more of the participants are:

Libertarianism focuses on the autonomy of the individual, asserting each person's right to live their life with the least possible interference from the law. Libertarians do not necessarily approve, sanction or endorse the victimless action that is criminalised. Indeed, they may strongly disapprove.

Where they differ from non-libertarians is in their belief that the government should be exceedingly reluctant to intervene. People are entitled to live their lives and make their own choices, whether or not those choices are wise or the same as others would make, provided they do so voluntarily and without infringing the rights of others.

Without necessarily endorsing, advocating or approving of such conduct, the LDP does not generally support the criminalisation of victimless crimes. Wherever possible it will seek to reduce the intrusion of government into these areas.

It nonetheless recognises that not all victimless crimes are capable of being entirely de-regulated. It acknowledges there may be unintended coercive consequences from re-legalisation and that some regulation may be warranted in specific instances.

The LDP also favours strong sanctions against crimes that infringe the rights of others, whether deliberately or through negligence.

Further information

Mandatory bicycle helmets: not only are such laws offensive to liberty, but they also fail to achieve their aim.


Phillip D. Collins — Luciferianism: The Religion of Apotheosis

Posted: at 4:14 am


LUCIFERIANISM: THE RELIGION OF APOTHEOSIS

Phillip D. Collins, January 17, 2006, NewsWithViews.com

Luciferianism constitutes the nucleus of the ruling class religion. While there are definitely political and economic rationales for elite criminality, Luciferianism can account for the longevity of many of the oligarchs' projects. Many of the longest and most brutal human endeavors have been underpinned by some form of religious zealotry. The Crusades testify to this historical fact. Likewise, the power elite's ongoing campaign to establish a socialist totalitarian global government has Luciferianism to thank for both its longevity and frequently violent character. In the mind of the modern oligarch, Luciferianism provides religious legitimacy for otherwise morally questionable plans.

Luciferianism is the product of religious engineering, which sociologist William Sims Bainbridge defines as "the conscious, systematic, skilled creation of a new religion" ("New Religions, Science, and Secularization," no pagination). In actuality, this is a tradition that even precedes Bainbridge. It has been the practice of Freemasonry for years. It was also the practice of Masonry's religious and philosophical progenitors, the ancient pagan Mystery cults. The inner doctrines of the Mesopotamian secret societies provided the theological foundations for the Christian and Judaic heresies, Kabbalism and Gnosticism. All modern Luciferian philosophy finds scientific legitimacy in the Gnostic myth of Darwinism. As evolutionary thought was popularized, variants of Luciferianism were popularized along with it (particularly in the form of secular humanism, which shall be examined shortly). A historical corollary of this popularization has been the rise of several cults and mass movements, exemplified by the various mystical sects and gurus of the sixties counterculture. The metastasis of Luciferian thinking continues to this very day.

Luciferianism represents a radical revaluation of humanity's ageless adversary: Satan. It is the ultimate inversion of good and evil. The formula for this inversion is reflected by the narrative paradigm of the Gnostic Hypostasis myth. As opposed to the original Biblical version, the Gnostic account represents a revaluation of the Hebraic story of the first man's temptation, "the desire of mere men to be as gods by partaking of the tree of the knowledge of good and evil" (Raschke 26). Carl Raschke elaborates:

In The Hypostasis of the Archons, an Egyptian Gnostic document, we read how the traditional story of mans disobedience toward God is reinterpreted as a universal conflict between knowledge (gnosis) and the dark powers (exousia) of the world, which bind the human soul in ignorance. The Hypostasis describes man as a stepchild of Sophia (Wisdom) created according to the model of aion, the imperishable realm of eternity.

On the other hand, it is neither God the Imperishable nor Sophia who actually is responsible in the making of man. On the contrary, the task is undertaken by the archons, the demonic powers who, because of their weakness, entrap man in a material body and thus cut him off from his blessed origin. They place him in paradise and enjoin him against eating of the tree of knowledge. The prohibition, however, is viewed by the author of the text not as a holy command but as a malignant effort on the part of the inferior spirits to prevent Adam from having true communion with the High God, from gaining authentic gnosis. (26)

According to this bowdlerization, Adam is consistently contacted by the High God in hopes of reinitiating man's quest for gnosis (26). The archons intervene and create Eve to distract Adam from the pursuit of gnosis (26-27). However, this Gnostic Eve is actually a sort of undercover agent for the High God, who is charged with divulging to Adam the truth that has been withheld from him (27). The archons manage to sabotage this covert operation by facilitating sexual intercourse between Adam and Eve, an act that Gnostics contend was designed to defile the woman's spiritual nature (27). At this juncture, the Hypostasis reintroduces a familiar antagonist from the original Genesis account:

But now the principle of feminine wisdom reappears in the form of the serpent, called the Instructor, who tells the mortal pair to defy the prohibition of the archons and eat of the tree of knowledge. (27)

The serpent successfully entices Adam and Eve to eat the forbidden fruit, but the bodily defilement of the woman prevents man from understanding the true motive underpinning the act (27). Thus, humanity is fettered by the archons' curse, suggesting that the orthodox theological view of the violation of the command as sin "must be regarded anew as the mindless failure to commit the act rightly in the first place" (27). In this revisionist context, the serpent is no longer Satan, but is an incognito savior instead (27). Meanwhile, God's role as benevolent Heavenly Father is vilified:

The God of Genesis, who comes to reprimand Adam and Eve after their transgression, is rudely caricatured in this tale as the Arrogant archon who opposes the will of the authentic heavenly father. (27)

Of course, within this Gnostic narrative, God incarnate is equally belittled. Jesus Christ, the Word made flesh, is reduced to little more than a forerunner of the coming Gnostic adept. According to the Gnostic mythology, Jesus was but a mere type of this perfect man (27). He came as a teacher and an exemplar, to show others the path to illumination (27-28). The true messiah has yet to come. Equally, the serpent is only a precursor to this messiah. He only initiates man's journey towards gnosis. The developmental voyage must be further facilitated by the serpent's successor, the Gnostic Christ. The Hypostasis provides the paradigmatic template for all Luciferian mythologies.

Like the Hypostasis, the binary opposition of Luciferian mythology caricatures Jehovah as an oppressive tyrant. He becomes the archon of arrogance, the embodiment of ignorance and religious superstition. Satan, who retains his heavenly title of Lucifer, is the liberator of humanity. Masonry, which acts as the contemporary retainer for the ancient Mystery religion, reconceptualizes Satan in a similar fashion. In Morals and Dogma, 33rd degree Freemason Albert Pike candidly exalts the fallen angel:

LUCIFER, the Light-bearer! Strange and mysterious name to give to the Spirit of Darkness! Lucifer, the Son of the Morning! Is it he who bears the Light, and with its splendors intolerable blinds feeble, sensual, or selfish Souls? Doubt it not. (321)

He makes man aware of his own innate divinity and promises to unlock the god within us all. This theme of apotheosis underpinned both Gnosticism and the pagan Mystery religions. While Gnosticism's origins with the Ancient Mystery cults remain a source of contention amongst scholars, its promise of liberation from humanity's material side is strongly akin to the old pagan Mystery's variety of "psychic therapy" (28). In addition, the Ancient Mystery religion promised the:

opportunity to erase the curse of mortality by direct encounter with the patron deity, or in many instances by actually undergoing an apotheosis, a transfiguration of human into divine (28).

Like some varieties of Satanism, Luciferianism does not depict the devil as a literal metaphysical entity. Lucifer only symbolizes the cognitive powers of man. He is the embodiment of science and reason. It is the Luciferian's religious conviction that these two facilitative forces will dethrone God and apotheosize man. It comes as little surprise that the radicals of the early revolutionary faith celebrated the arrival of Darwinism. Evolutionary theory was the edifying science of Promethean zealotry and the new secular religion of the scientific dictatorship. According to Masonic scholar Wilmshurst, the completion of human evolution involves man becoming a god-like being and unifying his consciousness with the Omniscient (94).

During the Enlightenment, Luciferianism was disseminated on the popular level as secular humanism. All of the governing precepts of Luciferianism are encompassed by secular humanism. This is made evident by the philosophy's rejection of theistic morality and enthronement of man as his own absolute moral authority. While Luciferianism has no sacred texts, Humanist Manifesto I and II succinctly delineate its central tenets. Whittaker Chambers, former member of the communist underground in America, eloquently summarizes this truth:

Humanism is not new. It is, in fact, man's second oldest faith. Its promise was whispered in the first days of Creation under the Tree of the Knowledge of Good and Evil: "Ye shall be as gods." (Qtd. in Baker 206)

Transhumanism offers an updated, hi-tech variety of Luciferianism. The appellation "Transhumanism" was coined by evolutionary biologist Julian Huxley ("Transhumanism," Wikipedia: The Free Encyclopedia, no pagination). Huxley defined the transhuman condition as "man remaining man, but transcending himself, by realizing new possibilities of and for his human nature" (no pagination). However, by 1990, Dr. Max More would radically redefine Transhumanism as follows:

Transhumanism is a class of philosophies that seek to guide us towards a posthuman condition. Transhumanism shares many elements of humanism, including a respect for reason and science, a commitment to progress, and a valuing of human (or transhuman) existence in this life Transhumanism differs from humanism in recognizing and anticipating the radical alterations in the nature and possibilities of our lives resulting from various sciences and technologies (No pagination)

Transhumanism advocates the use of nanotechnology, biotechnology, cognitive science, and information technology to propel humanity into a posthuman condition. Once he has arrived at this condition, man will cease to be man. He will become a machine, immune to death and all the other weaknesses intrinsic to his former human condition. The ultimate objective is to become a god. Transhumanism is closely aligned with the cult of artificial intelligence. In the very influential book The Age of Spiritual Machines, AI high priest Ray Kurzweil asserts that technological immortality could be achieved through magnetic resonance imaging or some technique of reading and replicating the human brains neural structure within a computer (Technological Immortality, no pagination). Through the merger of computers and humans, Kurzweil believes that man will become god-like spirits inhabiting cyberspace as well as the material universe (no pagination).

Following the Biblical revisionist tradition of the Gnostic Hypostasis myth, Transhumanists invert the roles of God and Satan. In an essay entitled In Praise of the Devil, Transhumanist ideologue Max More depicts Lucifer as a heroic rebel against a tyrannical God:

The Devil-Lucifer--is a force for good (where I define 'good' simply as that which I value, not wanting to imply any universal validity or necessity to the orientation). 'Lucifer' means 'light-bringer' and this should begin to clue us in to his symbolic importance. The story is that God threw Lucifer out of Heaven because Lucifer had started to question God and was spreading dissension among the angels. We must remember that this story is told from the point of view of the Godists (if I may coin a term) and not from that of the Luciferians (I will use this term to distinguish us from the official Satanists with whom I have fundamental differences). The truth may just as easily be that Lucifer resigned from heaven. (No pagination)

According to More, Lucifer probably exiled himself out of moral outrage towards the oppressive Jehovah:

God, being the well-documented sadist that he is, no doubt wanted to keep Lucifer around so that he could punish him and try to get him back under his (God's) power. Probably what really happened was that Lucifer came to hate God's kingdom, his sadism, his demand for slavish conformity and obedience, his psychotic rage at any display of independent thinking and behavior. Lucifer realized that he could never fully think for himself and could certainly not act on his independent thinking so long as he was under God's control. Therefore he left Heaven, that terrible spiritual-State ruled by the cosmic sadist Jehovah, and was accompanied by some of the angels who had had enough courage to question God's authority and his value-perspective. (No pagination)

More proceeds to reiterate 33rd Degree Mason Albert Pike's depiction of Lucifer:

Lucifer is the embodiment of reason, of intelligence, of critical thought. He stands against the dogma of God and all other dogmas. He stands for the exploration of new ideas and new perspectives in the pursuit of truth. (No pagination)

Lucifer is even considered a patron saint by some Transhumanists (Transtopian Symbolism, no pagination). Transhumanism retains the paradigmatic character of Luciferianism, albeit in a futurist context. Worse still, Transhumanism is hardly some marginalized cult. Richard Hayes, executive director of the Center for Genetics and Society, elaborates:

Last June at Yale University, the World Transhumanist Association held its first national conference. The Transhumanists have chapters in more than 20 countries and advocate the breeding of "genetically enriched" forms of "post-human" beings. Other advocates of the new techno-eugenics, such as Princeton University professor Lee Silver, predict that by the end of this century, "All aspects of the economy, the media, the entertainment industry, and the knowledge industry [will be] controlled by members of the GenRich class. . .Naturals [will] work as low-paid service providers or as laborers. . ." (No pagination)


With a growing body of academic luminaries and a techno-eugenical vision for the future, Transhumanism is carrying the banner of Luciferianism into the 21st century. Through genetic engineering and biotechnological augmentation of the physical body, Transhumanists are attempting to achieve the very same objective of their patron saint: "I will ascend into heaven, I will exalt my throne above the stars of God:

I will sit also upon the mount of the congregation, in the sides of the north: I will ascend above the heights of the clouds; I will be like the most High." (Isaiah 14:13-14)

This declaration reflects the aspirations of the power elite as well. Whatever form the Luciferian religion assumes throughout the years, its goal remains the same: Apotheosis.

Sources Cited:

1. Bainbridge, William Sims. "New Religions, Science, and Secularization." Religion and the Social Order, Vol. 3A (1993): 277-292.
2. Hayes, Richard. "Selective Science." TomPaine.commonsense, 12 February 2004.
3. More, Max. "Transhumanism: Towards a Futurist Philosophy." Maxmore.com, 1996.
4. More, Max. "In Praise of the Devil." Lucifer.com, 1999.
5. Pike, Albert. Morals and Dogma. 1871. Richmond, Virginia: L.H. Jenkins, Inc., 1942.
6. Raschke, Carl A. The Interruption of Eternity: Modern Gnosticism and the Origins of the New Religious Consciousness. Chicago: Nelson-Hall, 1980.
7. "Transhumanism." Wikipedia: The Free Encyclopedia. 8 January 2006.
8. "Transtopian Symbolism." Transtopia: Transhumanism Evolved, 2003-2005.
9. Wilmshurst, W.L. The Meaning of Masonry. New York: Gramercy, 1980.

© 2006 Phillip D. Collins - All Rights Reserved


Author Phillip D. Collins acted as the editor for The Hidden Face of Terrorism. He has also written articles for Paranoia Magazine, MKzine, NewsWithViews.com, and B.I.P.E.D.: The Official Website of Darwinian Dissent and Conspiracy Archive. He has an Associate of Arts and Science.

Currently, he is studying for a bachelor's degree in Communications at Wright State University. During the course of his seven-year college career, Phillip has studied philosophy, religion, and classic literature. He also co-authored the book, The Ascendancy of the Scientific Dictatorship: An Examination of Epistemic Autocracy, From the 19th to the 21st Century, which is available at: [Link]

E-Mail: collins.58@wright.edu




Meme – Wikipedia

Posted: at 4:12 am

A meme (/miːm/ MEEM)[1] is "an idea, behavior, or style that spreads from person to person within a culture".[2] A meme acts as a unit for carrying cultural ideas, symbols, or practices that can be transmitted from one mind to another through writing, speech, gestures, rituals, or other imitable phenomena with a mimicked theme. Supporters of the concept regard memes as cultural analogues to genes in that they self-replicate, mutate, and respond to selective pressures.[3]

Proponents theorize that memes are a viral phenomenon that may evolve by natural selection in a manner analogous to that of biological evolution. Memes do this through the processes of variation, mutation, competition, and inheritance, each of which influences a meme's reproductive success. Memes spread through the behavior that they generate in their hosts. Memes that propagate less prolifically may become extinct, while others may survive, spread, and (for better or for worse) mutate. Memes that replicate most effectively enjoy more success, and some may replicate effectively even when they prove to be detrimental to the welfare of their hosts.[4]
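The variation, mutation, competition and inheritance loop described above can be made concrete with a small simulation. The following Python sketch is purely illustrative and not from the article or its sources: modelling a meme as a bit-string, scoring its "catchiness" by the number of 1-bits, and the chosen mutation rate, pool size and generation count are all arbitrary assumptions made only to show the mechanism.

```python
import random

# Toy model of meme evolution: a meme is a bit-string, and its
# "catchiness" (a stand-in for reproductive success) is the number of
# 1-bits. All names and parameters here are illustrative assumptions.
MEME_LEN, POP_SIZE, MUTATION_RATE, GENERATIONS = 16, 100, 0.02, 50

def catchiness(meme):
    """Fitness proxy: how readily hosts repeat this meme."""
    return sum(meme)

def mutate(meme):
    """Imperfect copying (variation): each bit flips with small probability."""
    return [bit ^ (random.random() < MUTATION_RATE) for bit in meme]

def generation(pool):
    """One generation: hosts preferentially copy catchier memes
    (competition + inheritance), with occasional copying errors."""
    weights = [catchiness(m) + 1 for m in pool]  # +1 keeps weights positive
    parents = random.choices(pool, weights=weights, k=len(pool))
    return [mutate(m) for m in parents]

pool = [[random.randint(0, 1) for _ in range(MEME_LEN)]
        for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pool = generation(pool)

# Mean catchiness climbs well above the initial average of ~MEME_LEN / 2.
print(sum(catchiness(m) for m in pool) / POP_SIZE)
```

Under these assumptions, the catchier memes come to dominate the pool even though no host evaluates any meme globally; selection emerges from biased copying alone.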

A field of study called memetics[5] arose in the 1990s to explore the concepts and transmission of memes in terms of an evolutionary model. Criticism from a variety of fronts has challenged the notion that academic study can examine memes empirically. However, developments in neuroimaging may make empirical study possible.[6] Some commentators in the social sciences question the idea that one can meaningfully categorize culture in terms of discrete units, and are especially critical of the biological nature of the theory's underpinnings.[7] Others have argued that this use of the term is the result of a misunderstanding of the original proposal.[8]

The word meme originated with Richard Dawkins' 1976 book The Selfish Gene. Dawkins's own position is somewhat ambiguous: he welcomed N. K. Humphrey's suggestion that "memes should be considered as living structures, not just metaphorically"[9] and proposed to regard memes as "physically residing in the brain".[10] Later, he argued that his original intentions, presumably before his approval of Humphrey's opinion, had been simpler.[11] At the New Directors' Showcase 2013 in Cannes, Dawkins' opinion on memetics was deliberately ambiguous.[12]

The word meme is a shortening (modeled on gene) of mimeme (from Ancient Greek μίμημα mīmēma, "imitated thing", from μιμεῖσθαι mimeisthai, "to imitate", from μῖμος mimos, "mime")[13] coined by British evolutionary biologist Richard Dawkins in The Selfish Gene (1976)[1][14] as a concept for discussion of evolutionary principles in explaining the spread of ideas and cultural phenomena. Examples of memes given in the book included melodies, catchphrases, fashion, and the technology of building arches.[15] Kenneth Pike coined the related terms emic and etic, generalizing the linguistic ideas of phoneme, morpheme and tagmeme (as set out by Leonard Bloomfield), characterizing them as the insider's view and the outsider's view of behaviour and extending the concept into a tagmemic theory of human behaviour (culminating in Language in Relation to a Unified Theory of the Structure of Human Behaviour, 1954).

In the book, Dawkins cites as inspiration the work of geneticist L. L. Cavalli-Sforza, anthropologist F. T. Cloak[16] and ethologist J. M. Cullen.[17] Dawkins wrote that evolution depended not on the particular chemical basis of genetics, but only on the existence of a self-replicating unit of transmission: in the case of biological evolution, the gene. For Dawkins, the meme exemplified another self-replicating unit with potential significance in explaining human behavior and cultural evolution. Although Dawkins invented the term 'meme' and developed meme theory, the possibility that ideas were subject to the same pressures of evolution as were biological attributes was discussed in Darwin's time. T. H. Huxley claimed that 'The struggle for existence holds as much in the intellectual as in the physical world. A theory is a species of thinking, and its right to exist is coextensive with its power of resisting extinction by its rivals.'[18]

Dawkins used the term to refer to any cultural entity that an observer might consider a replicator. He hypothesized that one could view many cultural entities as replicators, and pointed to melodies, fashions and learned skills as examples. Memes generally replicate through exposure to humans, who have evolved as efficient copiers of information and behavior. Because humans do not always copy memes perfectly, and because they may refine, combine or otherwise modify them with other memes to create new memes, memes can change over time. Dawkins likened the process by which memes survive and change through the evolution of culture to the natural selection of genes in biological evolution.[15]

Dawkins defined the meme as a unit of cultural transmission, or a unit of imitation and replication, but later definitions would vary. The lack of a consistent, rigorous, and precise understanding of what typically makes up one unit of cultural transmission remains a problem in debates about memetics.[20] In contrast, the concept of genetics gained concrete evidence with the discovery of the biological functions of DNA. Meme transmission requires a physical medium, such as photons, sound waves, touch, taste or smell, because memes can be transmitted only through the senses.

Dawkins noted that in a society with culture a person need not have descendants to remain influential in the actions of individuals thousands of years after their death:

But if you contribute to the world's culture, if you have a good idea...it may live on, intact, long after your genes have dissolved in the common pool. Socrates may or may not have a gene or two alive in the world today, as G.C. Williams has remarked, but who cares? The meme-complexes of Socrates, Leonardo, Copernicus and Marconi are still going strong.[21]

Memes, analogously to genes, vary in their aptitude to replicate; successful memes remain and spread, whereas unfit ones stall and are forgotten. Thus memes that prove more effective at replicating and surviving are selected in the meme pool.

Memes first need retention. The longer a meme stays in its hosts, the higher its chances of propagation are. When a host uses a meme, the meme's life is extended.[22] The reuse of the neural space hosting a certain meme's copy to host different memes is the greatest threat to that meme's copy.[23]

A meme which increases the longevity of its hosts will generally survive longer. Conversely, a meme which shortens the longevity of its hosts will tend to disappear faster. However, as hosts are mortal, retention is not sufficient to perpetuate a meme in the long term; memes also need transmission.

Life-forms can transmit information both vertically (from parent to child, via replication of genes) and horizontally (through viruses and other means). Memes can replicate vertically or horizontally within a single biological generation. They may also lie dormant for long periods of time.

Memes reproduce by copying from one nervous system to another, either by communication or imitation. Imitation often involves the copying of an observed behavior of another individual. Communication may be direct or indirect, where memes transmit from one individual to another through a copy recorded in an inanimate source, such as a book or a musical score. Adam McNamara has suggested that memes can thereby be classified as either internal or external memes (i-memes or e-memes).[6]

Some commentators have likened the transmission of memes to the spread of contagions.[24] Social contagions such as fads, hysteria, copycat crime, and copycat suicide exemplify memes seen as the contagious imitation of ideas. Observers distinguish the contagious imitation of memes from instinctively contagious phenomena such as yawning and laughing, which they consider innate (rather than socially learned) behaviors.[25]

Aaron Lynch described seven general patterns of meme transmission, or "thought contagion":[26]

1. Quantity of parenthood: an idea that influences the number of children one has.
2. Efficiency of parenthood: an idea that increases the proportion of children who will adopt the ideas of their parents.
3. Proselytic: ideas generally passed to others beyond one's own children.
4. Preservational: ideas that influence those that hold them to continue to hold them for a long time.
5. Adversative: ideas that influence those that hold them to attack or sabotage competing ideas and/or those that hold them.
6. Cognitive: ideas perceived as cogent by most in the population who encounter them.
7. Motivational: ideas that people adopt because they perceive some self-interest in adopting them.

Dawkins initially defined meme as a noun that "conveys the idea of a unit of cultural transmission, or a unit of imitation".[15] John S. Wilkins retained the notion of meme as a kernel of cultural imitation while emphasizing the meme's evolutionary aspect, defining the meme as "the least unit of sociocultural information relative to a selection process that has favorable or unfavorable selection bias that exceeds its endogenous tendency to change".[27] The meme as a unit provides a convenient means of discussing "a piece of thought copied from person to person", regardless of whether that thought contains others inside it, or forms part of a larger meme. A meme could consist of a single word, or a meme could consist of the entire speech in which that word first occurred. This forms an analogy to the idea of a gene as a single unit of self-replicating information found on the self-replicating chromosome.

While the identification of memes as "units" conveys their nature to replicate as discrete, indivisible entities, it does not imply that thoughts somehow become quantized or that "atomic" ideas exist that cannot be dissected into smaller pieces. A meme has no given size. Susan Blackmore writes that melodies from Beethoven's symphonies are commonly used to illustrate the difficulty involved in delimiting memes as discrete units. She notes that while the first four notes of Beethoven's Fifth Symphony form a meme widely replicated as an independent unit, one can regard the entire symphony as a single meme as well.[20]

The inability to pin an idea or cultural feature to quantifiable key units is widely acknowledged as a problem for memetics. It has been argued, however, that the traces of memetic processing can be quantified utilizing neuroimaging techniques which measure changes in the connectivity profiles between brain regions.[6] Blackmore meets such criticism by stating that memes compare with genes in this respect: that while a gene has no particular size, nor can we ascribe every phenotypic feature directly to a particular gene, it has value because it encapsulates that key unit of inherited expression subject to evolutionary pressures. To illustrate, she notes evolution selects for the gene for features such as eye color; it does not select for the individual nucleotide in a strand of DNA. Memes play a comparable role in understanding the evolution of imitated behaviors.[20]

The 1981 book Genes, Mind, and Culture: The Coevolutionary Process by Charles J. Lumsden and E. O. Wilson proposed the theory that genes and culture co-evolve, and that the fundamental biological units of culture must correspond to neuronal networks that function as nodes of semantic memory. They coined their own word, "culturgen", which did not catch on. Coauthor Wilson later acknowledged the term meme as the best label for the fundamental unit of cultural inheritance in his 1998 book Consilience: The Unity of Knowledge, which elaborates upon the fundamental role of memes in unifying the natural and social sciences.[28]

Dawkins noted the three conditions that must exist for evolution to occur:[29]

1. variation, or the introduction of new change to existing elements;
2. heredity or replication, or the capacity to create copies of elements;
3. differential "fitness", or the opportunity for one element to be more or less suited to the environment than another.

Dawkins emphasizes that the process of evolution naturally occurs whenever these conditions co-exist, and that evolution does not apply only to organic elements such as genes. He regards memes as also having the properties necessary for evolution, and thus sees meme evolution as not simply analogous to genetic evolution, but as a real phenomenon subject to the laws of natural selection. Dawkins noted that as various ideas pass from one generation to the next, they may either enhance or detract from the survival of the people who obtain those ideas, or influence the survival of the ideas themselves. For example, a certain culture may develop unique designs and methods of tool-making that give it a competitive advantage over another culture. Each tool-design thus acts somewhat similarly to a biological gene in that some populations have it and others do not, and the meme's function directly affects the presence of the design in future generations. In keeping with the thesis that in evolution one can regard organisms simply as suitable "hosts" for reproducing genes, Dawkins argues that one can view people as "hosts" for replicating memes. Consequently, a successful meme may or may not need to provide any benefit to its host.[29]

Unlike genetic evolution, memetic evolution can show both Darwinian and Lamarckian traits. Cultural memes will have the characteristic of Lamarckian inheritance when a host aspires to replicate the given meme through inference rather than by exactly copying it. Take for example the case of the transmission of a simple skill such as hammering a nail, a skill that a learner imitates from watching a demonstration without necessarily imitating every discrete movement modeled by the teacher in the demonstration, stroke for stroke.[30] Susan Blackmore distinguishes the difference between the two modes of inheritance in the evolution of memes, characterizing the Darwinian mode as "copying the instructions" and the Lamarckian as "copying the product."[20]

Clusters of memes, or memeplexes (also known as meme complexes or as memecomplexes), such as cultural or political doctrines and systems, may also play a part in the acceptance of new memes. Memeplexes comprise groups of memes that replicate together and coadapt.[20] Memes that fit within a successful memeplex may gain acceptance by "piggybacking" on the success of the memeplex. As an example, John D. Gottsch discusses the transmission, mutation and selection of religious memeplexes and the theistic memes they contain.[31] Theistic memes discussed include the "prohibition of aberrant sexual practices such as incest, adultery, homosexuality, bestiality, castration, and religious prostitution", which may have increased vertical transmission of the parent religious memeplex. Similar memes are thereby included in the majority of religious memeplexes, and harden over time; they become an "inviolable canon" or set of dogmas, eventually finding their way into secular law. This could also be referred to as the propagation of a taboo.

The discipline of memetics, which dates from the mid-1980s, provides an approach to evolutionary models of cultural information transfer based on the concept of the meme. Memeticists have proposed that just as memes function analogously to genes, memetics functions analogously to genetics. Memetics attempts to apply conventional scientific methods (such as those used in population genetics and epidemiology) to explain existing patterns and transmission of cultural ideas.

Principal criticisms of memetics include the claim that memetics ignores established advances in other fields of cultural study, such as sociology, cultural anthropology, cognitive psychology, and social psychology. Questions remain as to whether the meme concept counts as a validly disprovable scientific theory. This view regards memetics as a theory in its infancy: a protoscience to proponents, or a pseudoscience to some detractors.

An objection to the study of the evolution of memes in genetic terms (although not to the existence of memes) involves a perceived gap in the gene/meme analogy: the cumulative evolution of genes depends on biological selection-pressures neither too great nor too small in relation to mutation-rates. There seems no reason to think that the same balance will exist in the selection pressures on memes.[32]
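The balance this objection refers to can be illustrated numerically. The sketch below is again my own illustration, under the same toy assumptions as the earlier snippet, not a claim from the cited objection: it sweeps the copying-error rate and shows cumulative adaptation collapsing once mutation overwhelms selection.

```python
import random

MEME_LEN, POP_SIZE, GENERATIONS = 16, 100, 50

def mean_fitness_after_selection(mutation_rate):
    """Run the toy selection loop and report mean fitness at the end."""
    pool = [[random.randint(0, 1) for _ in range(MEME_LEN)]
            for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        weights = [sum(m) + 1 for m in pool]               # selection
        parents = random.choices(pool, weights=weights, k=POP_SIZE)
        pool = [[bit ^ (random.random() < mutation_rate)   # mutation
                 for bit in m] for m in parents]
    return sum(sum(m) for m in pool) / POP_SIZE

# Low error rates permit cumulative adaptation (fitness approaches
# MEME_LEN); high error rates re-randomize the pool each generation,
# so fitness stays near the random baseline of MEME_LEN / 2.
for rate in (0.001, 0.01, 0.1, 0.4):
    print(rate, round(mean_fitness_after_selection(rate), 1))
```

If meme copying is as error-prone as critics suggest, it would sit at the high-error end of this sweep, which is precisely the imbalance the objection describes.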

Luis Benitez-Bribiesca M.D., a critic of memetics, calls the theory a "pseudoscientific dogma" and "a dangerous idea that poses a threat to the serious study of consciousness and cultural evolution". As a factual criticism, Benitez-Bribiesca points to the lack of a "code script" for memes (analogous to the DNA of genes), and to the excessive instability of the meme mutation mechanism (that of an idea going from one brain to another), which would lead to a low replication accuracy and a high mutation rate, rendering the evolutionary process chaotic.[33]

British political philosopher John Gray has characterized Dawkins' memetic theory of religion as "nonsense" and "not even a theory... the latest in a succession of ill-judged Darwinian metaphors", comparable to Intelligent Design in its value as a science.[34]

Another critique comes from semiotic theorists such as Deacon[35] and Kull.[36] This view regards the concept of "meme" as a primitivized concept of "sign". The meme is thus described in memetics as a sign lacking a triadic nature. Semioticians can regard a meme as a "degenerate" sign, which includes only its ability of being copied. Accordingly, in the broadest sense, the objects of copying are memes, whereas the objects of translation and interpretation are signs.

Fracchia and Lewontin regard memetics as reductionist and inadequate.[37] Evolutionary biologist Ernst Mayr disapproved of Dawkins' gene-based view and usage of the term "meme", asserting it to be an "unnecessary synonym" for "concept", reasoning that concepts are not restricted to an individual or a generation, may persist for long periods of time, and may evolve.[38]

Opinions differ as to how best to apply the concept of memes within a "proper" disciplinary framework. One view sees memes as providing a useful philosophical perspective with which to examine cultural evolution. Proponents of this view (such as Susan Blackmore and Daniel Dennett) argue that considering cultural developments from a meme's-eye view (as if memes themselves respond to pressure to maximise their own replication and survival) can lead to useful insights and yield valuable predictions into how culture develops over time. Others, such as Bruce Edmonds and Robert Aunger, have focused on the need to provide an empirical grounding for memetics to become a useful and respected scientific discipline.[39][40]

A third approach, described by Joseph Poulshock as "radical memetics", seeks to place memes at the centre of a materialistic theory of mind and of personal identity.[41]

Prominent researchers in evolutionary psychology and anthropology, including Scott Atran, Dan Sperber, Pascal Boyer, John Tooby and others, argue the possibility of incompatibility between modularity of mind and memetics. In their view, minds structure certain communicable aspects of the ideas produced, and these communicable aspects generally trigger or elicit ideas in other minds through inference (to relatively rich structures generated from often low-fidelity input) and not high-fidelity replication or imitation. Atran discusses communication involving religious beliefs as a case in point. In one set of experiments he asked religious people to write down on a piece of paper the meanings of the Ten Commandments. Despite the subjects' own expectations of consensus, interpretations of the commandments showed wide ranges of variation, with little evidence of consensus. In another experiment, subjects with autism and subjects without autism interpreted ideological and religious sayings (for example, "Let a thousand flowers bloom" or "To everything there is a season"). People with autism showed a significant tendency to closely paraphrase and repeat content from the original statement (for example: "Don't cut flowers before they bloom"). Controls tended to infer a wider range of cultural meanings with little replicated content (for example: "Go with the flow" or "Everyone should have equal opportunity"). Only the subjects with autism, who lack the degree of inferential capacity normally associated with aspects of theory of mind, came close to functioning as "meme machines".[42]

In his book The Robot's Rebellion, Keith Stanovich uses the meme and memeplex concepts to describe a program of cognitive reform that he refers to as a "rebellion". Specifically, Stanovich argues that the use of memes as a descriptor for cultural units is beneficial because it serves to emphasize transmission and acquisition properties that parallel the study of epidemiology. These properties make salient the sometimes parasitic nature of acquired memes, and as a result individuals should be motivated to reflectively acquire memes using what he calls a "Neurathian bootstrap" process.[43]

Although social scientists such as Max Weber sought to understand and explain religion in terms of a cultural attribute, Richard Dawkins called for a re-analysis of religion in terms of the evolution of self-replicating ideas apart from any resulting biological advantages they might bestow.

As an enthusiastic Darwinian, I have been dissatisfied with explanations that my fellow-enthusiasts have offered for human behaviour. They have tried to look for 'biological advantages' in various attributes of human civilization. For instance, tribal religion has been seen as a mechanism for solidifying group identity, valuable for a pack-hunting species whose individuals rely on cooperation to catch large and fast prey. Frequently the evolutionary preconception in terms of which such theories are framed is implicitly group-selectionist, but it is possible to rephrase the theories in terms of orthodox gene selection.

He argued that the role of key replicator in cultural evolution belongs not to genes, but to memes replicating thought from person to person by means of imitation. These replicators respond to selective pressures that may or may not affect biological reproduction or survival.[15]

In her book The Meme Machine, Susan Blackmore regards religions as particularly tenacious memes. Many of the features common to the most widely practiced religions provide built-in advantages in an evolutionary context, she writes. For example, religions that preach of the value of faith over evidence from everyday experience or reason inoculate societies against many of the most basic tools people commonly use to evaluate their ideas. By linking altruism with religious affiliation, religious memes can proliferate more quickly because people perceive that they can reap societal as well as personal rewards. The longevity of religious memes improves with their documentation in revered religious texts.[20]

Aaron Lynch attributed the robustness of religious memes in human culture to the fact that such memes incorporate multiple modes of meme transmission. Religious memes pass down the generations from parent to child and across a single generation through the meme-exchange of proselytism. Most people will hold the religion taught them by their parents throughout their life. Many religions feature adversarial elements, punishing apostasy, for instance, or demonizing infidels. In Thought Contagion Lynch identifies the memes of transmission in Christianity as especially powerful in scope. Believers view the conversion of non-believers both as a religious duty and as an act of altruism. The promise of heaven to believers and threat of hell to non-believers provide a strong incentive for members to retain their belief. Lynch asserts that belief in the Crucifixion of Jesus in Christianity amplifies each of its other replication advantages through the indebtedness believers have to their Savior for sacrifice on the cross. The image of the crucifixion recurs in religious sacraments, and the proliferation of symbols of the cross in homes and churches potently reinforces the wide array of Christian memes.[26]

Although religious memes have proliferated in human cultures, the modern scientific community has been relatively resistant to religious belief. Robertson (2007)[44] reasoned that if evolution is accelerated in conditions of propagative difficulty,[45] then we would expect to encounter variations of religious memes, established in general populations, addressed to scientific communities. Using a memetic approach, Robertson deconstructed two attempts to privilege religiously held spirituality in scientific discourse, and explored the advantages of a memetic approach, as compared to more traditional "modernization" and "supply side" theses, for understanding the evolution and propagation of religion.

In Cultural Software: A Theory of Ideology, Jack Balkin argued that memetic processes can explain many of the most familiar features of ideological thought. His theory of "cultural software" maintained that memes form narratives, social networks, metaphoric and metonymic models, and a variety of different mental structures. Balkin maintains that the same structures used to generate ideas about free speech or free markets also serve to generate racist beliefs. To Balkin, whether memes become harmful or maladaptive depends on the environmental context in which they exist rather than on any special source or manner of their origination. Balkin describes racist beliefs as "fantasy" memes that become harmful or unjust "ideologies" when diverse peoples come together, as through trade or competition.[46]

In A Theory of Architecture, Nikos Salingaros speaks of memes as "freely propagating clusters of information" which can be beneficial or harmful. He contrasts memes to patterns and true knowledge, characterizing memes as "greatly simplified versions of patterns" and as "unreasoned matching to some visual or mnemonic prototype".[47] Taking reference to Dawkins, Salingaros emphasizes that they can be transmitted due to their own communicative properties, that "the simpler they are, the faster they can proliferate", and that the most successful memes "come with a great psychological appeal".[48]

Architectural memes, according to Salingaros, can have destructive power. "Images portrayed in architectural magazines representing buildings that could not possibly accommodate everyday uses become fixed in our memory, so we reproduce them unconsciously."[49] He lists various architectural memes that have circulated since the 1920s and which, in his view, have led to contemporary architecture becoming quite decoupled from human needs. They lack connection and meaning, thereby preventing "the creation of true connections necessary to our understanding of the world". He sees them as no different from antipatterns in software design: solutions that are false but are re-utilized nonetheless.[50]

An "Internet meme" is a concept that spreads rapidly from person to person via the Internet, largely through Internet-based E-mailing, blogs, forums, imageboards like 4chan, social networking sites like Facebook, Instagram or Twitter, instant messaging, and video hosting services like YouTube and Twitch.tv.[51]

In 2013 Richard Dawkins characterized an Internet meme as one deliberately altered by human creativity, distinguished from Dawkins's original idea involving mutation by random change and a form of Darwinian selection.[52]

One technique of meme mapping represents the evolution and transmission of a meme across time and space.[53] Such a meme map uses a figure-8 diagram (an analemma) to map the gestation (in the lower loop), birth (at the choke point), and development (in the upper loop) of the selected meme. Such meme maps are nonscalar, with time mapped onto the y-axis and space onto the x-axis transect. One can read the temporal progression of the mapped meme from south to north on such a meme map. Paull has published a worked example using the "organics meme" (as in organic agriculture).[53]
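To help visualize the layout Paull describes, here is a minimal, hypothetical rendering of such a figure-8 meme map using Python and matplotlib; the parametric curve and label positions are my own guesses at the described geometry, not a reproduction of Paull's published figure.

```python
import numpy as np
import matplotlib.pyplot as plt

# A vertical figure-8: the lower loop stands for "gestation", the
# crossing point for "birth", and the upper loop for "development".
# Time runs south to north on the y-axis and space across the transect
# on the x-axis, as the text describes.
t = np.linspace(0, 2 * np.pi, 400)
x = np.sin(2 * t)   # space transect
y = -np.cos(t)      # time, increasing upward

fig, ax = plt.subplots()
ax.plot(x, y)
ax.annotate("gestation", xy=(0, -0.6), ha="center")
ax.annotate("birth", xy=(0.05, 0.0))
ax.annotate("development", xy=(0, 0.6), ha="center")
ax.set_xlabel("space (transect)")
ax.set_ylabel("time")
ax.set_title("Schematic meme map (figure-8 layout)")
plt.show()
```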


Free Speech TV

Posted: at 4:10 am

So glad to have found [FSTV] because there's nothing else out there telling us what's really going on. - Rita

I'm watching RING OF FIRE ...That and all your other shows are the best things on TV!! - John

I am so excited that I found your station flipping through the channels ... Keep up the good work. - Susan

Thom Hartmann is one of my heroes. - John

The Most informative and honest news station on American TV. No B.S. and great documentaries. - Kevin

[FSTV] is the best channel on tv. - Patricia

Most of us seek out media that tell us what we already believe to be true. Free Speech TV actually helps us think. - Alice

I want to thank Mike Papantonio for his wit and razor-sharp intellect, Amy Goodman for the highest standards of journalism... - Gail/Michigan

(Stephanie Miller) is why I started watching. Now watch Democracy Now! and Hartmann as well. - Deborah/Texas

"Free Speech TV is the best source of information that nobody knows about. We need to spread the word and educate the people." - Lorelei S.

"A little known TV station that offers an alternative viewpoint to the usual propaganda of network and cable news." - Ron S.

FSTV is the source. I'm grateful for the access these last four months. - Philadelphia, PA.


New Atheism – Wikipedia

Posted: at 4:10 am

New Atheism is the journalistic term used to describe the positions promoted by atheists of the twenty-first century. This modern-day atheism and secularism is advanced by critics of religion and religious belief,[1] a group of modern atheist thinkers and writers who advocate the view that superstition, religion and irrationalism should not simply be tolerated but should be countered, criticized, and exposed by rational argument wherever their influence arises in government, education and politics.[2]

New Atheism lends itself to and often overlaps with secular humanism and antitheism, particularly in its criticism of what many New Atheists regard as the indoctrination of children and the perpetuation of ideologies founded on belief in the supernatural.

The 2004 publication of The End of Faith: Religion, Terror, and the Future of Reason by Sam Harris, a bestseller in the United States, was joined over the next couple of years by a series of popular best-sellers by atheist authors.[3] Harris was motivated by the events of September 11, 2001, which he laid directly at the feet of Islam, while also directly criticizing Christianity and Judaism.[4] Two years later Harris followed up with Letter to a Christian Nation, which was also a severe criticism of Christianity.[5] Also in 2006, following his television documentary The Root of All Evil?, Richard Dawkins published The God Delusion, which was on the New York Times best-seller list for 51 weeks.[6]

In a 2010 column entitled "Why I Don't Believe in the New Atheism", Tom Flynn contends that what has been called "New Atheism" is neither a movement nor new, and that what was new was the publication of atheist material by big-name publishers, read by millions, and appearing on bestseller lists.[7]

These are some of the significant books on the subject of atheism and religion:

On September 30, 2007 four prominent atheists (Richard Dawkins, Christopher Hitchens, Sam Harris, and Daniel Dennett) met at Hitchens' residence for a private two-hour unmoderated discussion. The event was videotaped and titled "The Four Horsemen".[9] During "The God Debate" in 2010, featuring Christopher Hitchens vs Dinesh D'Souza, the men were collectively referred to as the "Four Horsemen of the Non-Apocalypse",[10] an allusion to the biblical Four Horsemen from the Book of Revelation.[11]

Sam Harris is the author of the bestselling non-fiction books The End of Faith, Letter to a Christian Nation, The Moral Landscape, and Waking Up: A Guide to Spirituality Without Religion, as well as two shorter works, initially published as e-books, Free Will[12] and Lying.[13] Harris is a co-founder of the Reason Project.

Richard Dawkins is the author of The God Delusion,[14] which was preceded by a Channel 4 television documentary titled The Root of all Evil?. He is also the founder of the Richard Dawkins Foundation for Reason and Science.

Christopher Hitchens was the author of God Is Not Great[15] and was named among the "Top 100 Public Intellectuals" by Foreign Policy and Prospect magazine. In addition, Hitchens served on the advisory board of the Secular Coalition for America. In 2010 Hitchens published his memoir Hitch-22 (a nickname provided by close personal friend Salman Rushdie, whom Hitchens always supported during and following The Satanic Verses controversy).[16] Shortly after its publication, Hitchens was diagnosed with esophageal cancer, which led to his death in December 2011.[17] Before his death, Hitchens published a collection of essays and articles in his book Arguably;[18] a short edition Mortality[19] was published posthumously in 2012. These publications and numerous public appearances provided Hitchens with a platform to remain an astute atheist during his illness, even speaking specifically on the culture of deathbed conversions and condemning attempts to convert the terminally ill, which he opposed as "bad taste".[20][21]

Daniel Dennett, author of Darwin's Dangerous Idea,[22] Breaking the Spell,[23] and many others, has also been a vocal supporter of The Clergy Project,[24] an organization that provides support for clergy in the US who no longer believe in God and cannot fully participate in their communities any longer.[25]

The "Four Horsemen" video, convened by Dawkins' Foundation, can be viewed free online at his web site: Part 1, Part 2.

After the death of Hitchens, Ayaan Hirsi Ali (who attended the 2012 Global Atheist Convention, which Hitchens was scheduled to attend) was referred to as the "plus one horse-woman", since she was originally invited to the 2007 meeting of the "Horsemen" atheists but had to cancel at the last minute.[26] Hirsi Ali was born in Mogadishu, Somalia, fleeing in 1992 to the Netherlands in order to escape an arranged marriage.[27] She became involved in Dutch politics, rejected faith, and became vocal in opposing Islamic ideology, especially concerning women, as exemplified by her books Infidel and The Caged Virgin.[28] Hirsi Ali was later involved in the production of the film Submission, for which her friend Theo Van Gogh was murdered with a death threat to Hirsi Ali pinned to his chest.[29] This resulted in Hirsi Ali's hiding and later immigration to the United States, where she now resides and remains a prolific critic of Islam,[30] and the treatment of women in Islamic doctrine and society,[31] and a proponent of free speech and the freedom to offend.[32][33]

While "The Four Horsemen" are arguably the foremost proponents of atheism, there are a number of other current, notable atheists including: Lawrence M. Krauss, (author of A Universe from Nothing),[34]James Randi (paranormal debunker and former illusionist),[35]Jerry Coyne (Why Evolution is True[36] and its complementary blog,[37] which specifically includes polemics against topical religious issues), Greta Christina (Why are you Atheists so Angry?),[38]Victor J. Stenger (The New Atheism),[39]Michael Shermer (Why People Believe Weird Things),[40]David Silverman (President of the American Atheists and author of Fighting God: An Atheist Manifesto for a Religious World), Ibn Warraq (Why I Am Not a Muslim),[41]Matt Dillahunty (host of the Austin-based webcast and cable-access television show The Atheist Experience),[42]Bill Maher (writer and star of the 2008 documentary Religulous),[43]Steven Pinker (noted cognitive scientist, linguist, psychologist and author),[44]Julia Galef (co-host of the podcast Rationally Speaking), A.C. Grayling (philosopher and considered to be the "Fifth Horseman of New Atheism"), and Michel Onfray (Atheist Manifesto: The Case Against Christianity, Judaism, and Islam).

Many contemporary atheists write from a scientific perspective. Unlike previous writers, many of whom thought that science was indifferent to, or even incapable of dealing with, the "God" concept, Dawkins argues to the contrary, claiming that the "God Hypothesis" is a valid scientific hypothesis,[45] having effects in the physical universe, and that like any other hypothesis it can be tested and falsified. Other contemporary atheists such as Victor Stenger propose that the personal Abrahamic God is a scientific hypothesis that can be tested by the standard methods of science. Both Dawkins and Stenger conclude that the hypothesis fails any such tests,[46] and argue that naturalism is sufficient to explain everything we observe in the universe, from the most distant galaxies to the origin of life, species, and the inner workings of the brain and consciousness. Nowhere, they argue, is it necessary to introduce God or the supernatural to understand reality. Atheists have also been associated with the argument from divine hiddenness and the idea that "absence of evidence is evidence of absence" when evidence can be expected.

Non-believers assert that many religious or supernatural claims (such as the virgin birth of Jesus and the afterlife) are scientific in nature. They argue, as do deists and Progressive Christians, for instance, that the question of Jesus' supposed parentage is not one of "values" or "morals" but one of scientific inquiry.[47] Rational thinkers believe science is capable of investigating at least some, if not all, supernatural claims.[48] Institutions such as the Mayo Clinic and Duke University have attempted to find empirical support for the healing power of intercessory prayer.[49] According to Stenger, these experiments have found no evidence that intercessory prayer works.[50]

Stenger also argues in his book God: The Failed Hypothesis that a God having omniscient, omnibenevolent and omnipotent attributes, which he terms a 3O God, cannot logically exist.[51] A similar series of logical disproofs of the existence of a God with various attributes can be found in Michael Martin and Ricki Monnier's The Impossibility of God[52] and Theodore M. Drange's article "Incompatible-Properties Arguments".[53]

Richard Dawkins has been particularly critical of the conciliatory view that science and religion are not in conflict, noting, for example, that the Abrahamic religions constantly deal in scientific matters. In a 1998 article published in Free Inquiry magazine,[47] and later in his 2006 book The God Delusion, Dawkins expresses disagreement with the view, advocated by Stephen Jay Gould, that science and religion are two non-overlapping magisteria (NOMA), each existing in a "domain where one form of teaching holds the appropriate tools for meaningful discourse and resolution". In Gould's proposal, science and religion should be confined to distinct, non-overlapping domains: science would be limited to the empirical realm, including theories developed to describe observations, while religion would deal with questions of ultimate meaning and moral value. Dawkins contends that NOMA does not describe empirical facts about the intersection of science and religion: "it is completely unrealistic to claim, as Gould and many others do, that religion keeps itself away from science's turf, restricting itself to morals and values. A universe with a supernatural presence would be a fundamentally and qualitatively different kind of universe from one without. The difference is, inescapably, a scientific difference. Religions make existence claims, and this means scientific claims." Matt Ridley notes that religion does more than talk about ultimate meanings and morals, and that science is not proscribed from doing the same. After all, morals involve human behavior, an observable phenomenon, and science is the study of observable phenomena. Ridley notes that there is substantial scientific evidence on the evolutionary origins of ethics and morality.[54]

Sam Harris has popularized the view that science, and the currently unknown objective facts it may uncover, can instruct human morality in a globally comparable way. Harris's book The Moral Landscape[55] and the accompanying TED talk How Science Can Determine Moral Values[56] propose that human well-being, and conversely suffering, may be thought of as a landscape with peaks and valleys representing numerous ways to achieve extremes in human experience, and that there are objective states of well-being.

New atheism is politically engaged in a variety of ways. These include campaigns to reduce the influence of religion in the public sphere, attempts to promote cultural change (centering, in the United States, on the mainstream acceptance of atheism), and efforts to promote the idea of an "atheist identity". Internal strategic divisions over these issues have also been notable, as have questions about the diversity of the movement in terms of its gender and racial balance.[57]

Edward Feser's book The Last Superstition presents arguments based on the philosophy of Aristotle and Thomas Aquinas against New Atheism.[58] According to Feser, it follows necessarily from Aristotelian–Thomistic metaphysics that God exists, that the human soul is immortal, and that the highest end of human life (and therefore the basis of morality) is to know God. Feser argues that science never disproved Aristotle's metaphysics; rather, modern philosophers decided to reject it on the basis of wishful thinking. In the later chapters Feser proposes that scientism and materialism rest on premises that are inconsistent and self-contradictory, and that these conceptions lead to absurd consequences.

Cardinal William Levada believes that New Atheism has misrepresented the doctrines of the church.[59] Cardinal Walter Kasper described New Atheism as "aggressive", and he believed it to be the primary source of discrimination against Christians.[60] In a Salon interview, the journalist Chris Hedges argued that New Atheist propaganda is just as extreme as that of the Christian right.[61]

The theologians Jeffrey Robbins and Christopher Rodkey take issue with what they regard as "the evangelical nature of the new atheism, which assumes that it has a Good News to share, at all cost, for the ultimate future of humanity by the conversion of as many people as possible." They believe they have found similarities between new atheism and evangelical Christianity and conclude that the all-consuming nature of both "encourages endless conflict without progress" between the two extremes.[62] Sociologist William Stahl said, "What is striking about the current debate is the frequency with which the New Atheists are portrayed as mirror images of religious fundamentalists."[63]

The atheist philosopher of science Michael Ruse has claimed that Richard Dawkins would fail "introductory" courses on the study of "philosophy or religion" (such as courses on the philosophy of religion) of the kind offered at many colleges and universities around the world.[64][65] Ruse also claims that the New Atheism movement, which he regards as a "bloody disaster", makes him ashamed, as a professional philosopher of science, to be among those who hold to an atheist position, particularly as New Atheism does science a "grave disservice" and does a "disservice to scholarship" at a more general level.[64][65]

Glenn Greenwald,[66][67] Toronto-based journalist and Mideast commentator Murtaza Hussain,[66][67] Salon columnist Nathan Lean,[67] scholars Wade Jacoby and Hakan Yavuz,[68] and historian of religion William Emilsen[69] have accused the New Atheist movement of Islamophobia. Wade Jacoby and Hakan Yavuz assert that "a group of 'new atheists' such as Richard Dawkins, Sam Harris, and Christopher Hitchens" have "invoked Samuel Huntington's 'clash of civilizations' theory to explain the current political contestation" and that this forms part of a trend toward "Islamophobia [...] in the study of Muslim societies".[68] William Emilsen argues that "the 'new' in the new atheists' writings is not their aggressiveness, nor their extraordinary popularity, nor even their scientific approach to religion, rather it is their attack not only on militant Islamism but also on Islam itself under the cloak of its general critique of religion".[69] Murtaza Hussain has alleged that leading figures in the New Atheist movement "have stepped in to give a veneer of scientific respectability to today's politically useful bigotry".[66][70]

See the rest here:
New Atheism - Wikipedia

Posted in Atheism | Comments Off on New Atheism – Wikipedia

National Security Agency – Wikipedia

Posted: at 4:09 am

Not to be confused with NASA.

The National Security Agency (NSA) is an intelligence organization of the United States government, responsible for global monitoring, collection, and processing of information and data for foreign intelligence and counterintelligence purposes, a discipline known as signals intelligence (SIGINT). NSA is concurrently charged with protecting U.S. government communications and information systems against penetration and network warfare.[8][9] Although many of NSA's programs rely on "passive" electronic collection, the agency is authorized to accomplish its mission through active clandestine means,[10] among which are physically bugging electronic systems[11] and allegedly engaging in sabotage through subversive software.[12][13] Moreover, NSA maintains a physical presence in a large number of countries across the globe, where its Special Collection Service (SCS) inserts eavesdropping devices in difficult-to-reach places. SCS collection tactics allegedly encompass "close surveillance, burglary, wiretapping, breaking and entering".[14][15]

Unlike the Defense Intelligence Agency (DIA) and the Central Intelligence Agency (CIA), both of which specialize primarily in foreign human espionage, NSA does not unilaterally conduct human-source intelligence gathering, despite often being portrayed so in popular culture. Instead, NSA is entrusted with assistance to and coordination of SIGINT elements at other government organizations, which are prevented by law from engaging in such activities without the approval of the NSA via the Defense Secretary.[16] As part of these streamlining responsibilities, the agency has a co-located organization called the Central Security Service (CSS), which was created to facilitate cooperation between NSA and other U.S. military cryptanalysis components. Additionally, the NSA Director simultaneously serves as the Commander of the United States Cyber Command and as Chief of the Central Security Service.

Originating as a unit to decipher coded communications in World War II, it was officially formed as the NSA by President Harry S. Truman in 1952. Since then, it has become one of the largest U.S. intelligence organizations in terms of personnel and budget,[6][17] operating as part of the Department of Defense and simultaneously reporting to the Director of National Intelligence.

NSA surveillance has been a matter of political controversy on several occasions, such as its spying on anti-Vietnam war leaders or economic espionage. In 2013, the extent of some of the NSA's secret surveillance programs was revealed to the public by Edward Snowden. According to the leaked documents, the NSA intercepts the communications of over a billion people worldwide, many of whom are American citizens, and tracks the movement of hundreds of millions of people using cellphones. Internationally, research has pointed to the NSA's ability to surveil the domestic Internet traffic of foreign countries through "boomerang routing".[18]

The origins of the National Security Agency can be traced back to April 28, 1917, three weeks after the U.S. Congress declared war on Germany in World War I. A code and cipher decryption unit was established as the Cable and Telegraph Section, also known as the Cipher Bureau. It was headquartered in Washington, D.C., and was part of the war effort under the executive branch without direct Congressional authorization. During the course of the war it was relocated several times within the army's organizational chart. On July 5, 1917, Herbert O. Yardley was assigned to head the unit. At that point, the unit consisted of Yardley and two civilian clerks. It absorbed the navy's cryptanalysis functions in July 1918. World War I ended on November 11, 1918, and the unit, by then known as MI-8, moved to New York City on May 20, 1919, where it continued intelligence activities as the Code Compilation Company under the direction of Yardley.[19][20]

MI-8 also operated the so-called "Black Chamber".[22] The Black Chamber was located on East 37th Street in Manhattan. Its purpose was to crack the communications codes of foreign governments. Jointly supported by the State Department and the War Department, the chamber persuaded Western Union, the largest U.S. telegram company, to allow government officials to monitor private communications passing through the company's wires.[23]

Other "Black Chambers" were also found in Europe. They were established by the French and British governments to read the letters of targeted individuals, employing a variety of techniques to surreptitiously open, copy, and reseal correspondence before forwarding it to unsuspecting recipients.[24]

Despite the American Black Chamber's initial successes, it was shut down in 1929 by U.S. Secretary of State Henry L. Stimson, who defended his decision by stating: "Gentlemen do not read each other's mail".[21]

During World War II, the Signal Security Agency (SSA) was created to intercept and decipher the communications of the Axis powers.[25] When the war ended, the SSA was reorganized as the Army Security Agency (ASA), and it was placed under the leadership of the Director of Military Intelligence.[25]

On May 20, 1949, all cryptologic activities were centralized under a national organization called the Armed Forces Security Agency (AFSA).[25] This organization was originally established within the U.S. Department of Defense under the command of the Joint Chiefs of Staff.[26] The AFSA was tasked to direct Department of Defense communications and electronic intelligence activities, except those of U.S. military intelligence units.[26] However, the AFSA was unable to centralize communications intelligence and failed to coordinate with civilian agencies that shared its interests such as the Department of State, Central Intelligence Agency (CIA) and the Federal Bureau of Investigation (FBI).[26] In December 1951, President Harry S. Truman ordered a panel to investigate how AFSA had failed to achieve its goals. The results of the investigation led to improvements and its redesignation as the National Security Agency.[27]

The agency was formally established by Truman in a memorandum of October 24, 1952, that revised National Security Council Intelligence Directive (NSCID) 9.[28] Since President Truman's memo was a classified document,[28] the existence of the NSA was not known to the public at that time. Due to its ultra-secrecy the U.S. intelligence community referred to the NSA as "No Such Agency".[29]

In the 1960s, the NSA played a key role in expanding America's commitment to the Vietnam War by providing evidence of a North Vietnamese attack on the American destroyer USS Maddox during the Gulf of Tonkin incident.[30]

A secret operation, code-named "MINARET", was set up by the NSA to monitor the phone communications of Senators Frank Church and Howard Baker, as well as major civil rights leaders, including Martin Luther King, Jr., and prominent U.S. journalists and athletes who criticized the Vietnam War.[31] However, the project turned out to be controversial, and an internal review by the NSA concluded that its Minaret program was "disreputable if not outright illegal".[31]

In the aftermath of the Watergate scandal, a congressional hearing in 1975 led by Senator Frank Church[32] revealed that the NSA, in collaboration with Britain's signals intelligence agency, the Government Communications Headquarters (GCHQ), had routinely intercepted the international communications of prominent anti-Vietnam War leaders such as Jane Fonda and Dr. Benjamin Spock.[33] Following the resignation of President Richard Nixon, there were several investigations of suspected misuse of FBI, CIA and NSA facilities.[34] Senator Frank Church uncovered previously unknown activity,[34] such as a CIA plot (ordered by the administration of President John F. Kennedy) to assassinate Fidel Castro.[35] The investigation also uncovered the NSA's wiretaps on targeted American citizens.[36]

After the Church Committee hearings, the Foreign Intelligence Surveillance Act of 1978 was passed into law. This was designed to limit the practice of mass surveillance in the United States.[34]

In 1986, the NSA intercepted the communications of the Libyan government during the immediate aftermath of the Berlin discotheque bombing. The White House asserted that the NSA interception had provided "irrefutable" evidence that Libya was behind the bombing, which U.S. President Ronald Reagan cited as a justification for the 1986 United States bombing of Libya.[37][38]

In 1999, a multi-year investigation by the European Parliament highlighted the NSA's role in economic espionage in a report entitled 'Development of Surveillance Technology and Risk of Abuse of Economic Information'.[39] That year, the NSA founded the NSA Hall of Honor, a memorial at the National Cryptologic Museum in Fort Meade, Maryland.[40] The memorial is a "tribute to the pioneers and heroes who have made significant and long-lasting contributions to American cryptology".[40] NSA employees must have been retired for more than fifteen years to qualify for the memorial.[40]

NSA's infrastructure deteriorated in the 1990s as defense budget cuts resulted in maintenance deferrals. On January 24, 2000, NSA headquarters suffered a total network outage for three days caused by an overloaded network. Incoming traffic was successfully stored on agency servers, but it could not be directed and processed. The agency carried out emergency repairs at a cost of $3 million to get the system running again. (Some incoming traffic was also directed instead to Britain's GCHQ for the time being.) Director Michael Hayden called the outage a "wake-up call" for the need to invest in the agency's infrastructure.[41]

In the aftermath of the September 11 attacks, the NSA created new IT systems to deal with the flood of information from new technologies like the Internet and cellphones. ThinThread contained advanced data mining capabilities. It also had a "privacy mechanism"; surveillance was stored encrypted; decryption required a warrant. The research done under this program may have contributed to the technology used in later systems. ThinThread was cancelled when Michael Hayden chose Trailblazer, which did not include ThinThread's privacy system.[43]

The Trailblazer Project ramped up in 2002. SAIC, Boeing, CSC, IBM, and Litton worked on it. Some NSA whistleblowers complained internally about major problems surrounding Trailblazer. This led to investigations by Congress and the NSA and DoD Inspectors General. The project was cancelled in early 2004; it was late, over budget, and did not do what it was supposed to do. The government then raided the whistleblowers' houses. One of them, Thomas Drake, was charged with violating 18 U.S.C. § 793(e) in 2010, in an unusual use of espionage law. He and his defenders claim that he was actually being persecuted for challenging the Trailblazer Project. In 2011, all ten original charges against Drake were dropped.[44][45]

Turbulence started in 2005. It was developed in small, inexpensive "test" pieces rather than as one grand plan like Trailblazer. It also included offensive cyber-warfare capabilities, such as injecting malware into remote computers. Congress criticized Turbulence in 2007 for having bureaucratic problems similar to Trailblazer's.[45] It was intended to realize information processing at higher speeds in cyberspace.[46]

The massive extent of the NSA's spying, both foreign and domestic, was revealed to the public in a series of detailed disclosures of internal NSA documents beginning in June 2013. Most of the disclosures were leaked by former NSA contractor Edward Snowden.

It was revealed that the NSA intercepts telephone and Internet communications of over a billion people worldwide, seeking information on terrorism as well as foreign politics, economics[47] and "commercial secrets".[48] In a declassified document it was revealed that 17,835 phone lines were on an improperly permitted "alert list" from 2006 to 2009 in breach of compliance, which tagged these phone lines for daily monitoring.[49][50][51] Eleven percent of these monitored phone lines met the agency's legal standard for "reasonably articulable suspicion" (RAS).[49][52]

A dedicated unit of the NSA locates targets for the CIA for extrajudicial assassination in the Middle East.[53] The NSA has also spied extensively on the European Union, the United Nations and numerous governments including allies and trading partners in Europe, South America and Asia.[54][55]

The NSA tracks the locations of hundreds of millions of cellphones per day, allowing it to map people's movements and relationships in detail.[56] It reportedly has access to all communications made via Google, Microsoft, Facebook, Yahoo, YouTube, AOL, Skype, Apple and Paltalk,[57] and collects hundreds of millions of contact lists from personal email and instant messaging accounts each year.[58] It has also managed to weaken much of the encryption used on the Internet (by collaborating with, coercing or otherwise infiltrating numerous technology companies), so that the majority of Internet privacy is now vulnerable to the NSA and other attackers.[59][60]

Domestically, the NSA collects and stores metadata records of phone calls,[61] including over 120 million US Verizon subscribers,[62] as well as Internet communications,[57] relying on a secret interpretation of the Patriot Act whereby the entirety of US communications may be considered "relevant" to a terrorism investigation if it is expected that even a tiny minority may relate to terrorism.[63] The NSA supplies foreign intercepts to the DEA, IRS and other law enforcement agencies, who use these to initiate criminal investigations. Federal agents are then instructed to "recreate" the investigative trail via parallel construction.[64]

The NSA also spies on influential Muslims to obtain information that could be used to discredit them, such as their use of pornography. The targets, both domestic and abroad, are not suspected of any crime but hold religious or political views deemed "radical" by the NSA.[65]

Although NSA's surveillance activities are controversial, government agencies and private enterprises have common needs and sometimes cooperate at subtle and complex technical levels. Big data is becoming more advantageous, justifying the cost of the required computer hardware, and social media lead the trend. The interests of NSA and Silicon Valley began to converge as advances in computer storage technology drastically reduced the cost of storing enormous amounts of data, while at the same time the value of that data for use in consumer marketing began to rise. Meanwhile, social media sites have grown into voluntary data mining operations on a scale that rivals or exceeds anything the government could attempt on its own.[66]

According to a report in The Washington Post in July 2014, relying on information provided by Snowden, 90% of those placed under surveillance in the U.S. are ordinary Americans, and are not the intended targets. The newspaper said it had examined documents including emails, text messages, and online accounts that support the claim.[67]

Despite President Obama's claims that these programs have congressional oversight, members of Congress were unaware of the existence of these NSA programs or the secret interpretation of the Patriot Act, and have consistently been denied access to basic information about them.[68] Obama has also claimed that there are legal checks in place to prevent inappropriate access of data and that there have been no examples of abuse;[69] however, the secret FISC court charged with regulating the NSA's activities is, according to its chief judge, incapable of investigating or verifying how often the NSA breaks even its own secret rules.[70] It has since been reported that the NSA violated its own rules on data access thousands of times a year, many of these violations involving large-scale data interceptions;[71] and that NSA officers have even used data intercepts to spy on love interests.[72] The NSA has "generally disregarded the special rules for disseminating United States person information" by illegally sharing its intercepts with other law enforcement agencies.[73] A March 2009 opinion of the FISC court, released by court order, states that protocols restricting data queries had been "so frequently and systemically violated that it can be fairly said that this critical element of the overall ... regime has never functioned effectively."[74][75] In 2011 the same court noted that the "volume and nature" of the NSA's bulk foreign Internet intercepts was "fundamentally different from what the court had been led to believe".[73] Email contact lists (including those of US citizens) are collected at numerous foreign locations to work around the illegality of doing so on US soil.[58]

Legal opinions on the NSA's bulk collection program have differed. In mid-December 2013, U.S. District Court Judge Richard Leon ruled that the "almost-Orwellian" program likely violates the Constitution, and wrote, "I cannot imagine a more 'indiscriminate' and 'arbitrary invasion' than this systematic and high-tech collection and retention of personal data on virtually every single citizen for purposes of querying and analyzing it without prior judicial approval. Surely, such a program infringes on 'that degree of privacy' that the Founders enshrined in the Fourth Amendment. Indeed, I have little doubt that the author of our Constitution, James Madison, who cautioned us to beware 'the abridgement of freedom of the people by gradual and silent encroachments by those in power,' would be aghast."[76]

Later that month, U.S. District Judge William Pauley ruled that the NSA's collection of telephone records is legal and valuable in the fight against terrorism. In his opinion, he wrote, "a bulk telephony metadata collection program [is] a wide net that could find and isolate gossamer contacts among suspected terrorists in an ocean of seemingly disconnected data" and noted that a similar collection of data prior to 9/11 might have prevented the attack.[77]

An October 2014 United Nations report condemned mass surveillance by the United States and other countries as violating multiple international treaties and conventions that guarantee core privacy rights.[78]

On March 12, 2013, the Director of National Intelligence, James Clapper, testified before Congress that the NSA does not wittingly collect any kind of data on millions or hundreds of millions of Americans. He retracted this in June, after details of the PRISM program were published, and stated instead that metadata of phone and Internet traffic are collected, but no actual message contents.[79] This was corroborated by the NSA Director, General Keith Alexander, before it was revealed that the XKeyscore program collects the contents of millions of emails from US citizens without warrants, as well as "nearly everything a user does on the Internet". Alexander later admitted that "content" is collected, but stated that it is simply stored and never analyzed or searched unless there is "a nexus to al-Qaida or other terrorist groups".[69]

Regarding the necessity of these NSA programs, Alexander stated on June 27 that the NSA's bulk phone and Internet intercepts had been instrumental in preventing 54 terrorist "events", including 13 in the US, and in all but one of these cases had provided the initial tip to "unravel the threat stream".[80] On July 31 NSA Deputy Director John Inglis conceded to the Senate that these intercepts had not been vital in stopping any terrorist attacks, but were "close" to vital in identifying and convicting four San Diego men for sending US$8,930 to Al-Shabaab, a militia that conducts terrorism in Somalia.[81][82][83]

The U.S. government has aggressively sought to dismiss and challenge Fourth Amendment cases raised against it, and has granted retroactive immunity to ISPs and telecoms participating in domestic surveillance.[84][85] The U.S. military has acknowledged blocking access to parts of The Guardian website for thousands of defense personnel across the country,[86][87] and blocking the entire Guardian website for personnel stationed throughout Afghanistan, the Middle East, and South Asia.[88]

The NSA is led by the Director of the National Security Agency (DIRNSA), who also serves as Chief of the Central Security Service (CHCSS) and Commander of the United States Cyber Command (USCYBERCOM) and is the highest-ranking military official of these organizations. He is assisted by a Deputy Director, who is the highest-ranking civilian within the NSA/CSS.

NSA also has an Inspector General, who heads the Office of the Inspector General (OIG); a General Counsel, who heads the Office of the General Counsel (OGC); and a Director of Compliance, who heads the Office of the Director of Compliance (ODOC).[89]

Unlike other intelligence organizations such as CIA or DIA, NSA has always been particularly reticent concerning its internal organizational structure.

As of the mid-1990s, the National Security Agency was organized into five directorates.

Each of these directorates consisted of several groups or elements, designated by a letter. There were, for example, the A Group, which was responsible for all SIGINT operations against the Soviet Union and Eastern Europe, and G Group, which was responsible for SIGINT related to all non-communist countries. These groups were divided into units designated by an additional number, such as unit A5, which broke Soviet codes, and G6, the office for the Middle East, North Africa, Cuba, and Central and South America.[91][92]

As of 2013, NSA has about a dozen directorates, designated by a letter, although not all of them are publicly known. The directorates are divided into divisions and units whose designators start with the letter of the parent directorate, followed by a number for the division, the sub-unit or a sub-sub-unit.
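The designator scheme just described (a directorate letter followed by digits, one per organizational level) is regular enough to illustrate in code. The following Python sketch is purely illustrative: the `parse_designator` helper and its output format are invented for this example and are not an NSA or Wikipedia artifact.

```python
# Illustrative parser for designators of the form letter-plus-digits,
# where each successive digit descends one organizational level
# (e.g. S -> S2 -> S21). Hypothetical helper, invented for illustration.

def parse_designator(designator: str) -> dict:
    letter, digits = designator[0], designator[1:]
    if not (letter.isalpha() and digits.isdigit()):
        raise ValueError(f"not a letter-plus-digits designator: {designator!r}")
    # Build the chain of parents: 'S21' -> ['S2', 'S21']
    chain = [letter + digits[:i] for i in range(1, len(digits) + 1)]
    return {"directorate": letter, "units": chain}

print(parse_designator("A5"))   # {'directorate': 'A', 'units': ['A5']}
print(parse_designator("G6"))   # {'directorate': 'G', 'units': ['G6']}
```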

The main elements of the organizational structure of the NSA are:[93]

In 2000, a leadership team was formed, consisting of the Director, the Deputy Director and the directors of the Signals Intelligence Directorate (SID), the Information Assurance Directorate (IAD) and the Technical Directorate (TD). The chiefs of other main NSA divisions became associate directors of the senior leadership team.[101]

After President George W. Bush initiated the President's Surveillance Program (PSP) in 2001, the NSA created a 24-hour Metadata Analysis Center (MAC), followed in 2004 by the Advanced Analysis Division (AAD), with the mission of analyzing content, Internet metadata and telephone metadata. Both units were part of the Signals Intelligence Directorate.[102]

A 2016 proposal would combine the Signals Intelligence Directorate with the Information Assurance Directorate into a Directorate of Operations.[103]

The NSA maintains at least two watch centers.

The number of NSA employees is officially classified[4] but there are several sources providing estimates. In 1961, NSA had 59,000 military and civilian employees, which grew to 93,067 in 1969, of which 19,300 worked at the headquarters at Fort Meade. In the early 1980s NSA had roughly 50,000 military and civilian personnel. By 1989 this number had grown again to 75,000, of which 25,000 worked at the NSA headquarters. Between 1990 and 1995 the NSA's budget and workforce were cut by one third, which led to a substantial loss of experience.[106]

In 2012, the NSA said more than 30,000 employees worked at Fort Meade and other facilities.[2] That year, John C. Inglis, the deputy director, joked that the total number of NSA employees is "somewhere between 37,000 and one billion",[4] and stated that the agency is "probably the biggest employer of introverts."[4] In 2013 Der Spiegel stated that the NSA had 40,000 employees.[5] More widely, it has been described as the world's largest single employer of mathematicians.[107] Some NSA employees form part of the workforce of the National Reconnaissance Office (NRO), the agency that provides the NSA with satellite signals intelligence.

As of 2013 about 1,000 system administrators work for the NSA.[108]

The NSA first received criticism in 1960, after two agents defected to the Soviet Union. Investigations by the House Un-American Activities Committee and a special subcommittee of the United States House Committee on Armed Services revealed severe lapses in the enforcement of personnel security regulations, prompting the former personnel director and the director of security to step down and leading to the adoption of stricter security practices.[109] Nonetheless, security breaches recurred: in an issue of Izvestia of July 23, 1963, a former NSA employee published several cryptologic secrets.

The very same day, an NSA clerk-messenger committed suicide as ongoing investigations disclosed that he had sold secret information to the Soviets on a regular basis. The reluctance of the Congressional houses to look into these affairs prompted a journalist to write, "If a similar series of tragic blunders occurred in any ordinary agency of Government an aroused public would insist that those responsible be officially censured, demoted, or fired." David Kahn criticized the NSA's tactics of concealing its doings as smug, and Congress's blind faith in the agency's right-doing as shortsighted, and pointed out the necessity of Congressional surveillance to prevent abuse of power.[109]

Edward Snowden's leaking of the existence of PRISM in 2013 caused the NSA to institute a "two-man rule", where two system administrators are required to be present when one accesses certain sensitive information.[108] Snowden claims he suggested such a rule in 2009.[110]
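In software terms, a "two-man rule" of the kind described above amounts to requiring two distinct authorizations before a sensitive action proceeds. The sketch below is a minimal, hypothetical model of that idea (the class and all names are invented for illustration); it is not a description of the NSA's actual systems.

```python
# Minimal sketch of a software "two-man rule": a request is granted only
# once two *distinct* administrators have approved it. Hypothetical code.

class TwoManRule:
    def __init__(self):
        self._approvals = {}  # request id -> set of approving admin ids

    def approve(self, request_id: str, admin_id: str) -> None:
        self._approvals.setdefault(request_id, set()).add(admin_id)

    def access_granted(self, request_id: str) -> bool:
        # One admin approving twice still counts as a single approval,
        # because approvals are kept in a set keyed by admin id.
        return len(self._approvals.get(request_id, set())) >= 2

rule = TwoManRule()
rule.approve("read-doc-42", "admin-a")
print(rule.access_granted("read-doc-42"))  # False: only one approver
rule.approve("read-doc-42", "admin-b")
print(rule.access_granted("read-doc-42"))  # True: two distinct approvers
```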

The NSA conducts polygraph tests of employees. For new employees, the tests are meant to discover enemy spies who are applying to the NSA and to uncover any information that could make an applicant susceptible to coercion.[111] As part of the latter, EPQs, or "embarrassing personal questions", about sexual behavior were historically included in the NSA polygraph.[111] The NSA also conducts five-year periodic reinvestigation polygraphs of employees, focusing on counterintelligence programs. In addition, the NSA conducts periodic polygraph investigations in order to find spies and leakers; those who refuse to take them may receive "termination of employment", according to a 1982 memorandum from the director of the NSA.[112]

There are also "special access examination" polygraphs for employees who wish to work in highly sensitive areas, and those polygraphs cover counterintelligence questions and some questions about behavior.[112] NSA's brochure states that the average test length is between two and four hours.[113] A 1983 report of the Office of Technology Assessment stated that "It appears that the NSA [National Security Agency] (and possibly CIA) use the polygraph not to determine deception or truthfulness per se, but as a technique of interrogation to encourage admissions."[114] Sometimes applicants in the polygraph process confess to committing felonies such as murder, rape, and selling of illegal drugs. Between 1974 and 1979, of the 20,511 job applicants who took polygraph tests, 695 (3.4%) confessed to previous felony crimes; almost all of those crimes had been undetected.[111]

In 2010 the NSA produced a video explaining its polygraph process.[115] The ten-minute video, titled "The Truth About the Polygraph", was posted to the Web site of the Defense Security Service. Jeff Stein of The Washington Post said that the video portrays "various applicants, or actors playing them – it's not clear – describing everything bad they had heard about the test, the implication being that none of it is true."[116] AntiPolygraph.org argues that the NSA-produced video omits some information about the polygraph process; it produced a video responding to the NSA video.[115] George Maschke, the founder of the Web site, accused the NSA polygraph video of being "Orwellian".[116]

After Edward Snowden revealed his identity in 2013, the NSA began requiring polygraphing of employees once per quarter.[117]

The number of exemptions from legal requirements has been criticized. When, in 1964, Congress was hearing a bill giving the director of the NSA the power to fire any employee at will, The Washington Post wrote: "This is the very definition of arbitrariness. It means that an employee could be discharged and disgraced on the basis of anonymous allegations without the slightest opportunity to defend himself." Yet the bill was approved by an overwhelming majority.[109]

The heraldic insignia of NSA consists of an eagle inside a circle, grasping a key in its talons.[118] The eagle represents the agency's national mission.[118] Its breast features a shield with bands of red and white, taken from the Great Seal of the United States and representing Congress.[118] The key is taken from the emblem of Saint Peter and represents security.[118]

When the NSA was created, the agency had no emblem and used that of the Department of Defense.[119] The agency adopted the first of its two emblems in 1963.[119] The current NSA insignia has been in use since 1965, when then-Director LTG Marshall S. Carter (USA) ordered the creation of a device to represent the agency.[120]

The NSA's flag consists of the agency's seal on a light blue background.

Crews associated with NSA missions have been involved in a number of dangerous and deadly situations.[121] The USS Liberty incident in 1967 and USS Pueblo incident in 1968 are examples of the losses endured during the Cold War.[121]

The National Security Agency/Central Security Service Cryptologic Memorial honors and remembers the fallen personnel, both military and civilian, of these intelligence missions.[122] It is made of black granite and, as of 2013, has 171 names carved into it.[122] It is located at NSA headquarters. A tradition of declassifying the stories of the fallen began in 2001.[122]

NSANet stands for National Security Agency Network and is the official NSA intranet.[123] It is a classified network,[124] for information up to the level of TS/SCI[125] to support the use and sharing of intelligence data between NSA and the signals intelligence agencies of the four other nations of the Five Eyes partnership. The management of NSANet has been delegated to the Central Security Service Texas (CSSTEXAS).[126]

NSANet is a highly secured computer network consisting of fiber-optic and satellite communication channels which are almost completely separated from the public Internet. The network allows NSA personnel and civilian and military intelligence analysts anywhere in the world to have access to the agency's systems and databases. This access is tightly controlled and monitored. For example, every keystroke is logged, activities are audited at random and downloading and printing of documents from NSANet are recorded.[127]
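The controls described above (logging every keystroke, auditing activity at random, recording downloads) follow a common pattern: wrap each sensitive operation in an audit trail. The Python sketch below illustrates that pattern with a logging decorator; it is an invented example with hypothetical names, not NSANet code.

```python
# Toy illustration of blanket audit logging: every call to a monitored
# operation records the user, the operation and a UTC timestamp.
# Invented example; not actual NSANet tooling.
import functools
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def audited(func):
    """Log user, operation name and arguments on every call."""
    @functools.wraps(func)
    def wrapper(user, *args, **kwargs):
        audit_log.info("user=%s op=%s args=%r time=%s",
                       user, func.__name__, args,
                       datetime.now(timezone.utc).isoformat())
        return func(user, *args, **kwargs)
    return wrapper

@audited
def download_document(user, doc_id):
    return f"contents of {doc_id}"  # stand-in for a real retrieval

download_document("analyst-1", "report-0042")
```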

In 1998, NSANet, along with NIPRNET and SIPRNET, had "significant problems with poor search capabilities, unorganized data and old information".[128] In 2004, the network was reported to have used over twenty commercial off-the-shelf operating systems.[129] Some universities that do highly sensitive research are allowed to connect to it.[130]

The thousands of Top Secret internal NSA documents that were taken by Edward Snowden in 2013 were stored in "a file-sharing location on the NSA's intranet site" so they could easily be read online by NSA personnel. Everyone with a TS/SCI clearance had access to these documents, and as a system administrator, Snowden was responsible for moving accidentally misplaced highly sensitive documents to more secure storage locations.[131]

The DoD Computer Security Center was founded in 1981 and renamed the National Computer Security Center (NCSC) in 1985. NCSC was responsible for computer security throughout the federal government.[132] NCSC was part of NSA,[133] and during the late 1980s and the 1990s, NSA and NCSC published the Trusted Computer System Evaluation Criteria in a six-foot-high Rainbow Series of books that detailed trusted computing and network platform specifications.[134] In the early 2000s, however, the Rainbow books were replaced by the Common Criteria.[134]

On July 18, 2013, Greenwald said that Snowden held "detailed blueprints of how the NSA does what they do", thereby sparking fresh controversy.[135]

Headquarters for the National Security Agency is located at 39°6′32″N 76°46′17″W (39.10889, -76.77139) in Fort George G. Meade, Maryland, although it is separate from other compounds and agencies that are based within this same military installation. Ft. Meade is about 20 mi (32 km) southwest of Baltimore,[136] and 25 mi (40 km) northeast of Washington, DC.[137] The NSA has its own exit off Maryland Route 295 South labeled "NSA Employees Only".[138][139] The exit may only be used by people with the proper clearances, and security vehicles parked along the road guard the entrance.[140]
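The coordinates above survive in both degrees-minutes-seconds and decimal form because a coordinate template was flattened during extraction. The conversion between the two is simple arithmetic, sketched below; the `dms_to_decimal` helper is invented for illustration, and the snippet reproduces the decimal values quoted in the text.

```python
# Degrees/minutes/seconds to decimal degrees; invented helper for illustration.
def dms_to_decimal(degrees: int, minutes: int, seconds: float,
                   negative: bool = False) -> float:
    value = degrees + minutes / 60 + seconds / 3600
    return -value if negative else value

# The headquarters coordinates quoted above:
print(round(dms_to_decimal(39, 6, 32), 5))                  # 39.10889
print(round(dms_to_decimal(76, 46, 17, negative=True), 5))  # -76.77139
```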

NSA is the largest employer in the U.S. state of Maryland, and two-thirds of its personnel work at Ft. Meade.[141] Built on 350 acres (140 ha; 0.55 sq mi)[142] of Ft. Meade's 5,000 acres (2,000 ha; 7.8 sq mi),[143] the site has 1,300 buildings and an estimated 18,000 parking spaces.[137][144]

The main NSA headquarters and operations building is what James Bamford, author of Body of Secrets, describes as "a modern boxy structure" that appears similar to "any stylish office building."[145] The building is covered with one-way dark glass, which is lined with copper shielding in order to prevent espionage by trapping in signals and sounds.[145] It contains 3,000,000 square feet (280,000 m²), or more than 68 acres (28 ha), of floor space; Bamford said that the U.S. Capitol "could easily fit inside it four times over."[145]

The facility has over 100 watchposts,[146] one of them being the visitor control center, a two-story area that serves as the entrance.[145] At the entrance, a white pentagonal structure,[147] visitor badges are issued to visitors and security clearances of employees are checked.[148] The visitor center includes a painting of the NSA seal.[147]

The OPS2A building, the tallest building in the NSA complex and the location of much of the agency's operations directorate, is accessible from the visitor center. Bamford described it as a "dark glass Rubik's Cube".[149] The facility's "red corridor" houses non-security operations such as concessions and the drug store. The name refers to the "red badge" which is worn by someone without a security clearance. The NSA headquarters includes a cafeteria, a credit union, ticket counters for airlines and entertainment, a barbershop, and a bank.[147] NSA headquarters has its own post office, fire department, and police force.[150][151][152]

The employees at the NSA headquarters reside in various places in the Baltimore-Washington area, including Annapolis, Baltimore, and Columbia in Maryland and the District of Columbia, including the Georgetown community.[153]

Following a major power outage in 2000, The Baltimore Sun reported in 2003, and in follow-ups through 2007, that the NSA was at risk of electrical overload because of insufficient internal electrical infrastructure at Fort Meade to support the amount of equipment being installed. This problem was apparently recognized in the 1990s but not made a priority, and "now the agency's ability to keep its operations going is threatened."[154]

Baltimore Gas & Electric (BGE, now Constellation Energy) provided NSA with 65 to 75 megawatts at Ft. Meade in 2007, and expected that an increase of 10 to 15 megawatts would be needed later that year.[155] In 2011, NSA at Ft. Meade was Maryland's largest consumer of power.[141] In 2007, as BGE's largest customer, NSA bought as much electricity as Annapolis, the capital city of Maryland.[154]

One estimate put the potential for power consumption by the new Utah Data Center at US$40 million per year.[156]

When the agency was established, its headquarters and cryptographic center were in the Naval Security Station in Washington, D.C. The COMINT functions were located in Arlington Hall in Northern Virginia, which served as the headquarters of the U.S. Army's cryptographic operations.[157] Because the Soviet Union had detonated a nuclear bomb and because the facilities were crowded, the federal government wanted to move several agencies, including the AFSA/NSA. A planning committee considered Fort Knox, but Fort Meade, Maryland, was ultimately chosen as NSA headquarters because it was far enough away from Washington, D.C. in case of a nuclear strike and was close enough so its employees would not have to move their families.[158]

Construction of additional buildings began after the agency occupied buildings at Ft. Meade in the late 1950s, which they soon outgrew.[158] In 1963 the new headquarters building, nine stories tall, opened. NSA workers referred to the building as the "Headquarters Building" and since the NSA management occupied the top floor, workers used "Ninth Floor" to refer to their leaders.[159] COMSEC remained in Washington, D.C., until its new building was completed in 1968.[158] In September 1986, the Operations 2A and 2B buildings, both copper-shielded to prevent eavesdropping, opened with a dedication by President Ronald Reagan.[160] The four NSA buildings became known as the "Big Four."[160] The NSA director moved to 2B when it opened.[160]

On March 30, 2015, shortly before 9 am, a stolen sport utility vehicle approached an NSA police vehicle blocking the road near the gate of Fort Meade after its driver was told to leave the area. NSA officers fired on the SUV, killing the 27-year-old driver, Ricky Hall (a transgender person also known as Mya), and seriously injuring his 20-year-old male passenger. An NSA officer's arm was injured when Hall subsequently crashed into his vehicle.[161][162]

The two, dressed in women's clothing after a night of partying at a motel with the man from whom they had stolen the SUV that morning, "attempted to drive a vehicle into the National Security Agency portion of the installation without authorization", according to an NSA statement.[163] FBI spokeswoman Amy Thoreson said the incident was not believed to be related to terrorism.[164] In June 2015 the FBI closed its investigation into the incident, and federal prosecutors declined to bring charges against anyone involved.[165]

An anonymous police official told The Washington Post, "This was not a deliberate attempt to breach the security of NSA. This was not a planned attack." The two are believed to have made a wrong turn off the highway while fleeing from the motel after stealing the vehicle. A small amount of cocaine was found in the SUV. A local CBS reporter initially said a gun was found,[166] but her later reports did not repeat the claim.[167] Dozens of journalists were corralled into a parking lot blocks away from the scene and were barred from photographing the area.[168]

In 1995, The Baltimore Sun reported that the NSA is the owner of the single largest group of supercomputers.[169]

NSA held a groundbreaking ceremony at Ft. Meade in May 2013 for its High Performance Computing Center 2, expected to open in 2016.[170] Called Site M, the center has a 150-megawatt power substation, 14 administrative buildings and 10 parking garages.[150] It cost $3.2 billion and covers 227 acres (92 ha; 0.355 sq mi).[150] The center is 1,800,000 square feet (17 ha; 0.065 sq mi)[150] and initially uses 60 megawatts of electricity.[171]

Increments II and III are expected to be completed by 2030, and would quadruple the space, covering 5,800,000 square feet (54 ha; 0.21 sq mi) with 60 buildings and 40 parking garages.[150] Defense contractors are also establishing or expanding cybersecurity facilities near the NSA and around the Washington metropolitan area.[150]

As of 2012, NSA collected intelligence from four geostationary satellites.[156] Satellite receivers were at Roaring Creek Station in Catawissa, Pennsylvania and Salt Creek Station in Arbuckle, California.[156] It operated ten to twenty taps on U.S. telecom switches. NSA had installations in several U.S. states and from them observed intercepts from Europe, the Middle East, North Africa, Latin America, and Asia.[156]

NSA had facilities at Friendship Annex (FANX) in Linthicum, Maryland, which is a 20 to 25-minute drive from Ft. Meade;[172] the Aerospace Data Facility at Buckley Air Force Base in Aurora outside Denver, Colorado; NSA Texas in the Texas Cryptology Center at Lackland Air Force Base in San Antonio, Texas; NSA Georgia at Fort Gordon in Augusta, Georgia; NSA Hawaii in Honolulu; the Multiprogram Research Facility in Oak Ridge, Tennessee, and elsewhere.[153][156]

On January 6, 2011, a groundbreaking ceremony was held to begin construction on NSA's first Comprehensive National Cyber-security Initiative (CNCI) data center, known as the "Utah Data Center" for short. The $1.5 billion data center is being built at Camp Williams, Utah, located 25 miles (40 km) south of Salt Lake City, and will help support the agency's National Cyber-security Initiative.[173] It is expected to be operational by September 2013.[156]

In 2009, to protect its assets and to access more electricity, NSA sought to decentralize and expand its existing facilities in Ft. Meade and Menwith Hill,[174] the latter expansion expected to be completed by 2015.[175]

The Yakima Herald-Republic cited Bamford, saying that many of NSA's bases for its Echelon program were a legacy system, using outdated, 1990s technology.[176] In 2004, NSA closed its operations at Bad Aibling Station (Field Station 81) in Bad Aibling, Germany.[177] In 2012, NSA began to move some of its operations at Yakima Research Station, Yakima Training Center, in Washington state to Colorado, planning to leave Yakima closed.[178] As of 2013, NSA also intended to close operations at Sugar Grove, West Virginia.[176]

Following the signing in 1946–1956[179] of the UKUSA Agreement between the United States, United Kingdom, Canada, Australia and New Zealand, which then cooperated on signals intelligence and ECHELON,[180] NSA stations were built at GCHQ Bude in Morwenstow, United Kingdom; Geraldton, Pine Gap and Shoal Bay, Australia; Leitrim and Ottawa, Canada; Misawa, Japan; and Waihopai and Tangimoana,[181] New Zealand.[182]

NSA operates RAF Menwith Hill in North Yorkshire, United Kingdom, which was, according to BBC News in 2007, the largest electronic monitoring station in the world.[183] Planned in 1954, and opened in 1960, the base covered 562 acres (227 ha; 0.878 sq mi) in 1999.[184]

The agency's European Cryptologic Center (ECC), with 240 employees in 2011, is headquartered at a US military compound in Griesheim, near Frankfurt in Germany. A 2011 NSA report indicates that the ECC is responsible for the "largest analysis and productivity in Europe" and focusses on various priorities, including Africa, Europe, the Middle East and counterterrorism operations.[185]

As of 2013, a new Consolidated Intelligence Center, also to be used by NSA, was being built at the headquarters of the United States Army Europe in Wiesbaden, Germany.[186] NSA's partnership with the Bundesnachrichtendienst (BND), the German foreign intelligence service, was confirmed by BND president Gerhard Schindler.[186]

Thailand is a "3rd party partner" of the NSA along with nine other nations.[187] These are non-English-speaking countries that have made security agreements for the exchange of SIGINT raw material and end product reports.

Thailand is the site of at least two US SIGINT collection stations. One is at the US Embassy in Bangkok, a joint NSA-CIA Special Collection Service (SCS) unit. It presumably eavesdrops on foreign embassies, governmental communications, and other targets of opportunity.[188]

Read the original post:
National Security Agency - Wikipedia

Posted in NSA | Comments Off on National Security Agency – Wikipedia

Space station – Wikipedia

Posted: at 4:08 am

A space station, also known as an orbital station or an orbital space station, is a spacecraft capable of supporting a crew, designed to remain in space (most commonly as an artificial satellite in low Earth orbit) for an extended period of time and to allow other spacecraft to dock with it. A space station is distinguished from other spacecraft used for human spaceflight by its lack of major propulsion or landing systems. Instead, other vehicles transport people and cargo to and from the station. As of September 2016, three space stations are in orbit: the International Space Station, which is permanently manned; China's Tiangong-1 (defunct); and Tiangong-2 (launched 15 September 2016, unmanned most of the time).[1][2] Previous stations include the Almaz and Salyut series, Skylab, and most recently Mir.

Today's space stations are research platforms, used to study the effects of long-term space flight on the human body as well as to provide platforms for more numerous and longer scientific studies than are possible on other space vehicles. Each crew member stays aboard the station for weeks or months, but rarely more than a year. Crews spend most of their time inside the station, though they are not confined to it. Since the ill-fated flight of Soyuz 11 to Salyut 1, all manned spaceflight duration records have been set aboard space stations. The duration record for a single spaceflight is 437.7 days, set by Valeriy Polyakov aboard Mir from 1994 to 1995. As of 2013, three astronauts have completed single missions of over a year, all aboard Mir.

Space stations have also been used for both military and civilian purposes. The last military-use space station was Salyut 5, which was used by the Almaz program of the Soviet Union in 1976 and 1977.[3]

Space stations have been envisaged since at least 1869, when Edward Everett Hale wrote "The Brick Moon".[4] The first to give serious consideration to space stations were Konstantin Tsiolkovsky in the early 20th century and Hermann Oberth about two decades later.[4] In 1929 Herman Potočnik's The Problem of Space Travel was published, the first to envision a "rotating wheel" space station to create artificial gravity.

During the Second World War, German scientists researched the theoretical concept of an orbital weapon based on a space station. Pursuing Oberth's idea of a space-based weapon, the so-called "sun gun" was conceived as a space station orbiting Earth at a height of 8,200 kilometres (5,100 mi), with a weapon that was to utilize the sun's energy.[5]

In 1951, in Collier's Weekly, Wernher von Braun published his design for a rotating wheel space station, which referenced Potočnik's idea. However, these concepts never left the concept stage during the 20th century.[4]

At the same time as von Braun was pursuing Potočnik's ideas, the Soviet design bureaus, chiefly Vladimir Chelomey's OKB-52, were pursuing Tsiolkovsky's ideas for space stations. The work by OKB-52 would lead to the Almaz programme and (together with OKB-1) to the first space station, Salyut 1. The hardware developed laid the groundwork for the Salyut and Mir space stations, and even today forms a considerable part of the ISS.

The first space station was Salyut 1, which was launched by the Soviet Union on April 19, 1971. Like all the early space stations, it was "monolithic", intended to be constructed and launched in one piece, and then manned by a crew later. As such, monolithic stations generally contained all their supplies and experimental equipment when launched, and were considered "expended", and then abandoned, when these were used up.

The earlier Soviet stations were all designated "Salyut", but among these there were two distinct types: civilian and military. The military stations, Salyut 2, Salyut 3, and Salyut 5, were also known as Almaz stations.

The civilian stations Salyut 6 and Salyut 7 were built with two docking ports, which allowed a second crew to visit, bringing a new spacecraft with them; the Soyuz ferry could spend 90 days in space, after which point it needed to be replaced by a fresh Soyuz spacecraft.[6] This allowed for a crew to man the station continually. Skylab was also equipped with two docking ports, like second-generation stations, but the extra port was never utilized. The presence of a second port on the new stations allowed Progress supply vehicles to be docked to the station, meaning that fresh supplies could be brought to aid long-duration missions. This concept was expanded on Salyut 7, which "hard docked" with a TKS tug shortly before it was abandoned; this served as a proof-of-concept for the use of modular space stations. The later Salyuts may reasonably be seen as a transition between the two groups.

Unlike previous stations, the Soviet space station Mir had a modular design; a core unit was launched, and additional modules, generally with a specific role, were later added to that. This method allows for greater flexibility in operation, as well as removing the need for a single immensely powerful launch vehicle. Modular stations are also designed from the outset to have their supplies provided by logistical support, which allows for a longer lifetime at the cost of requiring regular support launches.

The core module of the International Space Station was launched in 1998.

The ISS is divided into two main sections: the Russian Orbital Segment (ROS) and the United States Orbital Segment (USOS).

USOS modules were brought to the station by the Space Shuttle and manually attached to the ISS by crews during EVAs. Connections are made manually for electrical, data, propulsion and cooling fluids. This results in a single piece which is not designed for disassembly.[7]

The Russian orbital segment's modules are able to launch, fly and dock themselves without human intervention using Proton rockets.[8] Connections are automatically made for power, data and propulsion fluids and gases. The Russian approach allows assembly of space stations orbiting other worlds in preparation for manned missions. The Nauka module of the ISS will be used in the 12th Russian/Soviet space station, OPSEK, whose main goal is supporting manned deep space exploration.

Russian modular or 'next-generation' space stations differ from 'monolithic' single-piece stations in allowing reconfiguration of the station to suit changing needs. According to a 2009 report, RKK Energia is considering methods to remove some modules of the Russian Orbital Segment from the station when the end of mission is reached for the ISS and to use them as a basis for a new station, known as the Orbital Piloted Assembly and Experiment Complex. None of these modules would have reached the end of their useful lives by 2016 or 2020. The report presents a statement from an unnamed Russian engineer who believes that, based on the experience from Mir, a thirty-year life should be possible, barring micrometeorite damage, because the Russian modules have been built with on-orbit refurbishment in mind.[9]

China's first space laboratory, Tiangong-1, was launched in September 2011. The unmanned Shenzhou 8 then successfully performed an automatic rendezvous and docking in November 2011. The manned Shenzhou 9 docked with Tiangong-1 in June 2012, followed by the manned Shenzhou 10 in 2013. Tiangong-2 was launched in September 2016, and another space laboratory, Tiangong-3, is expected to be launched in subsequent years, paving the way for the construction of a larger space station around 2020.

In September 2016 it was reported that Tiangong-1 is falling back to Earth and will burn up in the atmosphere during 2017.

These stations have various issues that limit their long-term habitability, such as very low recycling rates, relatively high radiation levels, and weightlessness. Some of these problems cause discomfort and long-term health effects. As for solar flares, all current habitats are protected by the Earth's magnetic field and orbit below the Van Allen belts.

Future space habitats may attempt to address these issues, and could be intended for long-term occupation. Some designs might even accommodate large numbers of people, essentially "cities in space" where people would make their homes. No such design has yet been constructed, since even for a small station, the current (2016) launch costs are not economically or politically viable.

Possible ways to deal with these costs would be to build a large number of rockets (economies of scale), to employ reusable rockets, to use in-situ resource utilisation, or to adopt non-rocket space launch methods such as space elevators. For example, the most ambitious historical NASA study, a 1975 concept for a 10,000-person space station that sought long-term habitability through artificial gravity and enough mass in space for heavy radiation shielding, envisioned a mass-driver base cumulatively launching 600 times its own mass in lunar material over a period of years.[10]

A space station is a complex system with many interrelated subsystems.

Molds that develop aboard space stations can produce acids that degrade metal, glass, and rubber.[11]

The Soviet space stations came in two types: the civilian Durable Orbital Station (DOS) and the military Almaz stations. (Dates refer to periods when stations were inhabited by crews.)

The business arrangement for developing and marketing the station was recently clarified by the Russian firm Orbital Technologies, which is collaborating with the Rocket and Space Technology Corporation Energia (RSC Energia) to develop the station.[22]

See original here:
Space station - Wikipedia

Posted in Space Station | Comments Off on Space station – Wikipedia

Human genetics – An Introduction to Genetic Analysis …

Posted: at 4:08 am

In the study of rare disorders, four general patterns of inheritance are distinguishable by pedigree analysis: autosomal recessive, autosomal dominant, X-linked recessive, and X-linked dominant.

The affected phenotype of an autosomal recessive disorder is determined by a recessive allele, and the corresponding unaffected phenotype is determined by a dominant allele. For example, the human disease phenylketonuria is inherited in a simple Mendelian manner as a recessive phenotype, with PKU determined by the allele p and the normal condition by P. Therefore, sufferers from this disease are of genotype p/p, and people who do not have the disease are either P/P or P/p. What patterns in a pedigree would reveal such an inheritance? The two key points are that (1) generally the disease appears in the progeny of unaffected parents and (2) the affected progeny include both males and females. When we know that both male and female progeny are affected, we can assume that we are dealing with simple Mendelian inheritance, not sex-linked inheritance. The following typical pedigree illustrates the key point that affected children are born to unaffected parents:

From this pattern, we can immediately deduce simple Mendelian inheritance of the recessive allele responsible for the exceptional phenotype (indicated in black). Furthermore, we can deduce that the parents are both heterozygotes, say A/a; both must have an a allele because each contributed an a allele to each affected child, and both must have an A allele because they are phenotypically normal. We can identify the genotypes of the children (in the order shown) as A/-, a/a, a/a, and A/-. Hence, the pedigree can be rewritten as follows:

Note that this pedigree does not support the hypothesis of X-linked recessive inheritance: under that hypothesis, an affected daughter would need a heterozygous mother (possible) and a father hemizygous for the recessive allele, which is clearly impossible here, because he would have expressed the phenotype of the disorder.

Notice another interesting feature of pedigree analysis: even though Mendelian rules are at work, Mendelian ratios are rarely observed in families, because the sample size is too small. In the preceding example, we see a 1:1 phenotypic ratio in the progeny of what is in fact a monohybrid cross, from which a 3:1 ratio is expected. If the couple were to have, say, 20 children, the ratio would be something like 15 unaffected children and 5 with PKU (a 3:1 ratio); but, in a sample of 4 children, any ratio is possible, and all ratios are commonly found.
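This sampling effect is easy to demonstrate numerically. The sketch below is illustrative only (it is not part of the original text, and the function and parameter names are invented for the example); it simulates many hypothetical families from a heterozygote-by-heterozygote cross, in which each child has a 1/4 chance of being affected:

```python
import random

def sibship_distribution(n_children, p_affected=0.25, trials=10_000):
    """Simulate families from an A/a x A/a cross and tabulate how often
    each count of affected children occurs across many families."""
    counts = {}
    for _ in range(trials):
        affected = sum(random.random() < p_affected for _ in range(n_children))
        counts[affected] = counts.get(affected, 0) + 1
    return {k: v / trials for k, v in sorted(counts.items())}

print(sibship_distribution(4))   # small family: every ratio occurs often
print(sibship_distribution(20))  # large family: clusters near 5 affected (3:1)
```

With 4 children, anywhere from 0 to 4 affected occurs with appreciable frequency; with 20 children, the counts cluster near the expected 3:1 ratio.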

The pedigrees of autosomal recessive disorders tend to look rather bare, with few black symbols. A recessive condition shows up in groups of affected siblings, and the people in earlier and later generations tend not to be affected. To understand why this is so, it is important to have some understanding of the genetic structure of populations underlying such rare conditions. By definition, if the condition is rare, most people do not carry the abnormal allele. Furthermore, most of those people who do carry the abnormal allele are heterozygous for it rather than homozygous. The basic reason that heterozygotes are much more common than recessive homozygotes is that, to be a recessive homozygote, both parents must have had the a allele, but, to be a heterozygote, only one parent must carry the a allele.

Geneticists have a quantitative way of connecting the rareness of an allele with the commonness or rarity of heterozygotes and homozygotes in a population. They obtain the relative frequencies of genotypes in a population by assuming that the population is in Hardy-Weinberg equilibrium, to be fully discussed in Chapter 24. Under this simplifying assumption, if the relative proportions of two alleles A and a in a population are p and q, respectively, then the frequencies of the three possible genotypes are given by p² for A/A, 2pq for A/a, and q² for a/a. A numerical example illustrates this concept. If we assume that the frequency q of a recessive, disease-causing allele is 1/50, then p is 49/50, the frequency of homozygotes with the disease is q² = (1/50)² = 1/2500, and the frequency of heterozygotes is 2pq = 2 × 49/50 × 1/50, or approximately 1/25. Hence, for this example, we see that heterozygotes are about 100 times as frequent as disease sufferers, and this ratio increases as the allele becomes rarer. The relation between heterozygotes and homozygotes recessive for a rare allele is shown in the accompanying illustration. Note that the allele frequencies p and q can be used as the gamete frequencies in both sexes.
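A short calculation makes the arithmetic concrete. This is a minimal sketch, not from the original text, of the Hardy-Weinberg genotype frequencies for the example above:

```python
def hardy_weinberg(q):
    """Genotype frequencies (A/A, A/a, a/a) under Hardy-Weinberg,
    given the frequency q of the recessive allele a."""
    p = 1 - q
    return p**2, 2 * p * q, q**2

q = 1 / 50
AA, Aa, aa = hardy_weinberg(q)
print(f"A/A = {AA:.4f}, A/a = {Aa:.4f}, a/a = {aa:.6f}")  # a/a = 1/2500
print(f"heterozygote : affected = {Aa / aa:.0f} : 1")     # 2p/q = 98
```

The exact ratio is 2p/q = 98; the round figure of 100 in the text comes from approximating 2pq as 1/25.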

The formation of an affected person usually depends on the chance union of unrelated heterozygotes. However, inbreeding (mating between relatives) increases the chance that a mating will be between two heterozygotes. An example of a marriage between cousins is shown in the accompanying pedigree: individuals III-5 and III-6 are first cousins and produce two homozygotes for the rare allele. The pedigree also shows that an ancestor who is a heterozygote may produce many descendants who also are heterozygotes. Hence two cousins can carry the same rare recessive allele inherited from a common ancestor. For two unrelated persons to be heterozygous, they would have to inherit the rare allele from both their families. Thus matings between relatives generally run a higher risk of producing abnormal phenotypes caused by homozygosity for recessive alleles than do matings between nonrelatives. For this reason, first-cousin marriages contribute a large proportion of the sufferers of recessive diseases in the population.

Pedigree of a rare recessive phenotype determined by a recessive allele a. Gene symbols are normally not included in pedigree charts, but genotypes are inserted here for reference. Note that individuals II-1 and II-5 marry into the family.

What are some examples of human recessive disorders? PKU has already served as an example of pedigree analysis, but what kind of phenotype is it? PKU is a disease of the processing of the amino acid phenylalanine, a component of all proteins in the food that we eat. Phenylalanine is normally converted into tyrosine by the enzyme phenylalanine hydroxylase.

However, if a mutation in the gene encoding this enzyme alters the amino acid sequence in the vicinity of the enzyme's active site, the enzyme cannot bind or convert phenylalanine (its substrate). Phenylalanine therefore builds up in the body and is converted instead into phenylpyruvic acid, a compound that interferes with the development of the nervous system, leading to mental retardation.

Babies are now routinely tested for this processing deficiency at birth. If the deficiency is detected, phenylalanine can be withheld by use of a special diet, and the development of the disease can be arrested.

Cystic fibrosis is another disease inherited according to Mendelian rules as a recessive phenotype. The allele that causes cystic fibrosis was isolated in 1989, and the sequence of its DNA was determined. This has led to an understanding of gene function in affected and unaffected persons, giving hope for more effective treatment. Cystic fibrosis is a disease whose most important symptom is the secretion of large amounts of mucus into the lungs, resulting in death from a combination of effects but usually precipitated by upper respiratory infection. The mucus can be dislodged by mechanical chest thumpers, and pulmonary infection can be prevented by antibiotics; so, with treatment, cystic fibrosis patients can live to adulthood. The disorder is caused by a defective protein that transports chloride ions across the cell membrane. The resultant alteration of the salt balance changes the constitution of the lung mucus.

Albinism, which served as a model of allelic determination of contrasting phenotypes in Chapter 1, also is inherited in the standard autosomal recessive manner. The molecular nature of an albino allele and its inheritance are diagrammed in the accompanying figure, which shows a simple autosomal recessive inheritance in a pedigree together with the molecular nature of the alleles involved. In this example, the recessive allele a is caused by a base-pair change that introduces a stop codon into the middle of the gene, resulting in a truncated polypeptide. The mutation, by chance, also introduces a new target site for a restriction enzyme. Hence, a probe for the gene detects two fragments in the case of a and only one in A. (Other types of mutations would produce different effects at the level detected by Southern, Northern, and Western analyses.)

The molecular basis of Mendelian inheritance in a pedigree.

In all the examples considered so far, the disorder is caused by an allele for a defective protein. In heterozygotes, the single functional allele provides enough active protein for the cell's needs. This situation is called haplosufficiency.

In human pedigrees, an autosomal recessive disorder is revealed by the appearance of the disorder in the male and female progeny of unaffected persons.

Here the normal allele is recessive, and the abnormal allele is dominant. It may seem paradoxical that a rare disorder can be dominant, but remember that dominance and recessiveness are simply properties of how alleles act and are not defined in terms of how common they are in the population. A good example of a rare dominant phenotype with Mendelian inheritance is pseudoachondroplasia, a type of dwarfism (see the accompanying figure). In regard to this gene, people with normal stature are genotypically d/d, and the dwarf phenotype in principle could be D/d or D/D. However, it is believed that two doses of the D allele in the D/D genotype produce such a severe effect that this genotype is lethal. If this is true, all dwarf individuals are heterozygotes.

The human pseudoachondroplasia phenotype, illustrated by a family of five sisters and two brothers. The phenotype is determined by a dominant allele, which we can call D, that interferes with bone growth during development.

In pedigree analysis, the main clues for identifying a dominant disorder with Mendelian inheritance are that the phenotype tends to appear in every generation of the pedigree and that affected fathers and mothers transmit the phenotype to both sons and daughters. Again, the equal representation of both sexes among the affected offspring rules out sex-linked inheritance. The phenotype appears in every generation because generally the abnormal allele carried by a person must have come from a parent in the preceding generation. Abnormal alleles can arise de novo by the process of mutation. This event is relatively rare but must be kept in mind as a possibility. A typical pedigree for a dominant disorder is shown in the accompanying figure. Once again, notice that Mendelian ratios are not necessarily observed in families. As with recessive disorders, persons bearing one copy of the rare A allele (A/a) are much more common than those bearing two copies (A/A), so most affected people are heterozygotes, and virtually all matings involving dominant disorders are A/a × a/a. Therefore, when the progeny of such matings are totaled, a 1:1 ratio of unaffected (a/a) to affected (A/a) persons is expected.
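The expected 1:1 ratio from an A/a × a/a mating can be checked by enumerating gamete combinations, as in this small illustrative sketch (not from the original text):

```python
from itertools import product

def cross(parent1, parent2):
    """Enumerate the offspring genotypes of a cross and their frequencies."""
    offspring = {}
    for allele1, allele2 in product(parent1, parent2):
        genotype = "/".join(sorted((allele1, allele2)))
        offspring[genotype] = offspring.get(genotype, 0) + 1
    total = sum(offspring.values())
    return {g: n / total for g, n in offspring.items()}

print(cross("Aa", "aa"))  # {'A/a': 0.5, 'a/a': 0.5}: the expected 1:1 ratio
print(cross("Aa", "Aa"))  # 1/4 A/A, 1/2 A/a, 1/4 a/a: the 3:1 phenotypic ratio
```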

Pedigree of a dominant phenotype determined by a dominant allele A. In this pedigree, all the genotypes have been deduced.

Huntington disease is an example of a disease inherited as a dominant phenotype determined by an allele of a single gene. The phenotype is one of neural degeneration, leading to convulsions and premature death. However, it is a late-onset disease, the symptoms generally not appearing until after the person has begun to have children (see the accompanying graph). Each child of a carrier of the abnormal allele stands a 50 percent chance of inheriting the allele and the associated disease. This tragic pattern has led to a great effort to find ways of identifying people who carry the abnormal allele before they experience the onset of the disease. The application of molecular techniques has resulted in a promising screening procedure.

The age of onset of Huntington disease. The graph shows that people carrying the allele generally do not express the disease until after child-bearing age.

Some other rare dominant conditions are polydactyly (extra digits), brachydactyly (short digits), and piebald spotting, shown in the accompanying figures.

Some rare dominant phenotypes of the human hand. (a) (right) Polydactyly, a dominant phenotype characterized by extra fingers, toes, or both, determined by an allele P. The numbers in the accompanying pedigree (left) give the number of fingers.

Piebald spotting, a rare dominant human phenotype. Although the phenotype is encountered sporadically in all races, the patterns show up best in those with dark skin. (a) The photographs show front and back views of affected persons IV-1, IV-3, and III-5.

Pedigrees of Mendelian autosomal dominant disorders show affected males and females in each generation; they also show that affected men and women transmit the condition to equal proportions of their sons and daughters.

Phenotypes with X-linked recessive inheritance typically show the following patterns in pedigrees:

Many more males than females show the phenotype under study. This is because a female showing the phenotype can result only from a mating in which both the mother and the father bear the allele (for example, XA Xa × Xa Y), whereas a male with the phenotype can be produced when only the mother carries the allele. If the recessive allele is very rare, almost all persons showing the phenotype are male (see the short calculation after this list).

None of the offspring of an affected male are affected, but all his daughters are carriers, bearing the recessive allele masked in the heterozygous condition. Half of the sons of these carrier daughters are affected (see the accompanying pedigree). Note that, in common X-linked phenotypes, this pattern might be obscured by inheritance of the recessive allele from a heterozygous mother as well as from the father.

None of the sons of an affected male show the phenotype under study, nor will they pass the condition to their offspring. The reason behind this lack of male-to-male transmission is that a son obtains his Y chromosome from his father, so he cannot normally inherit the father's X chromosome too.
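Under the same Hardy-Weinberg assumptions used earlier, the excess of affected males follows directly: an affected male needs only one copy of the allele (frequency q), while an affected female needs two (frequency q²). A minimal sketch with an invented allele frequency, for illustration only:

```python
def x_linked_affected(q):
    """Frequencies of affected males (hemizygous, q) and affected
    females (homozygous, q**2) for an X-linked recessive allele."""
    return q, q * q

q = 0.05  # illustrative allele frequency, not a measured value
males, females = x_linked_affected(q)
print(f"affected males = {males}, affected females = {females}")
print(f"male : female ratio = {males / females:.0f} : 1")  # 1/q = 20
```

The rarer the allele, the larger the ratio 1/q, which is why almost all affected persons are male for very rare X-linked recessives.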

Pedigree showing that X-linked recessive alleles expressed in males are then carried unexpressed by their daughters in the next generation, to be expressed again in their sons. Note that III-3 and III-4 cannot be distinguished phenotypically.

In the pedigree analysis of rare X-linked recessives, a normal female of unknown genotype is assumed to be homozygous unless there is evidence to the contrary.

Perhaps the most familiar example of X-linked recessive inheritance is red-green colorblindness. People with this condition are unable to distinguish red from green and see them as the same. The genes for color vision have been characterized at the molecular level. Color vision is based on three different kinds of cone cells in the retina, each sensitive to red, green, or blue wavelengths. The genetic determinants for the red and green cone cells are on the X chromosome. As with any X-linked recessive, there are many more males with the phenotype than females.

Another familiar example is hemophilia, the failure of blood to clot. Many proteins must interact in sequence to make blood clot. The most common type of hemophilia is caused by the absence or malfunction of one of these proteins, called Factor VIII. The best-known cases of hemophilia are found in the pedigree of the interrelated royal families of Europe (see the accompanying pedigree). The original hemophilia allele in the pedigree arose spontaneously (as a mutation) either in the reproductive cells of Queen Victoria's parents or of Queen Victoria herself. The son of the last czar of Russia, Alexis, inherited the allele ultimately from Queen Victoria, who was the grandmother of his mother Alexandra. Nowadays, hemophilia can be treated medically, but it was formerly a potentially fatal condition. It is interesting to note that the Jewish Talmud contains rules about exemptions to male circumcision showing clearly that the mode of transmission of the disease through unaffected carrier females was well understood in ancient times. For example, one exemption was for the sons of women whose sisters' sons had bled profusely when they were circumcised.

The inheritance of the X-linked recessive condition hemophilia in the royal families of Europe. A recessive allele causing hemophilia (failure of blood clotting) arose in the reproductive cells of Queen Victoria, or one of her parents, through mutation. (more...)

Duchenne muscular dystrophy is a fatal X-linked recessive disease. The phenotype is a wasting and atrophy of muscles. Generally the onset is before the age of 6, with confinement to a wheelchair by 12, and death by 20. The gene for Duchenne muscular dystrophy has now been isolated and shown to encode the muscle protein dystrophin. This discovery holds out hope for a better understanding of the physiology of this condition and, ultimately, a therapy.

A rare X-linked recessive phenotype that is interesting from the point of view of sexual differentiation is a condition called testicular feminization syndrome, which has a frequency of about 1 in 65,000 male births. People afflicted with this syndrome are chromosomally male, having 44 autosomes plus an X and a Y, but they develop as females (see the accompanying photograph). They have female external genitalia, a blind vagina, and no uterus. Testes may be present either in the labia or in the abdomen. Although many such persons marry, they are sterile. The condition is not reversed by treatment with the male hormone androgen, so it is sometimes called androgen insensitivity syndrome. The reason for the insensitivity is that the androgen receptor malfunctions, so the male hormone can have no effect on the target organs that contribute to maleness. In humans, femaleness results when the male-determining system is not functional.

Four siblings with testicular feminization syndrome (congenital insensitivity to androgens). All four subjects in this photograph have 44 autosomes plus an X and a Y chromosome, but they have inherited the recessive X-linked allele conferring insensitivity to androgens.

Read the original post:
Human genetics - An Introduction to Genetic Analysis ...

Posted in Human Genetics | Comments Off on Human genetics – An Introduction to Genetic Analysis …

Human Genome Project – Wikipedia

Posted: at 4:07 am

The Human Genome Project (HGP) is an international scientific research project with the goal of determining the sequence of chemical base pairs which make up human DNA, and of identifying and mapping all of the genes of the human genome from both a physical and a functional standpoint.[1] It remains the world's largest collaborative biological project.[2] The idea was picked up by the US government in 1984, when planning started; the project was formally launched in 1990 and declared complete in 2003. Funding came from the US government through the National Institutes of Health (NIH) as well as numerous other groups from around the world. A parallel project was conducted outside of government by the Celera Corporation, or Celera Genomics, which was formally launched in 1998. Most of the government-sponsored sequencing was performed in twenty universities and research centers in the United States, the United Kingdom, Japan, France, Germany, Canada, and China.[3]

The Human Genome Project originally aimed to map the nucleotides contained in a human haploid reference genome (more than three billion). The "genome" of any given individual is unique; mapping the "human genome" involves sequencing multiple variations of each gene.[4] In May 2016, scientists considered extending the HGP to include creating a synthetic human genome.[5] In June 2016, scientists formally announced HGP-Write, a plan to synthesize the human genome.[6][7]

The Human Genome Project was a 13-year-long, publicly funded project initiated in 1990 with the objective of determining the DNA sequence of the entire euchromatic human genome within 15 years.[8] In May 1985, Robert Sinsheimer organized a workshop to discuss sequencing the human genome,[9] but for a number of reasons the NIH was uninterested in pursuing the proposal. The following March, the Santa Fe Workshop was organized by Charles DeLisi and David Smith of the Department of Energy's Office of Health and Environmental Research (OHER).[10] At the same time Renato Dulbecco proposed whole genome sequencing in an essay in Science.[11] James Watson followed two months later with a workshop held at the Cold Spring Harbor Laboratory.

The fact that the Santa Fe workshop was motivated and supported by a federal agency opened a path, albeit a difficult and tortuous one,[12] for converting the idea into public policy. In a memo to the Assistant Secretary for Energy Research (Alvin Trivelpiece), Charles DeLisi, who was then Director of OHER, outlined a broad plan for the project.[13] This started a long and complex chain of events which led to approved reprogramming of funds that enabled OHER to launch the project in 1986, and to recommend the first line item for the HGP, which was in President Reagan's 1988 budget submission,[12] and was ultimately approved by Congress. Of particular importance in congressional approval was the advocacy of Senator Peter Domenici, whom DeLisi had befriended.[14] Domenici chaired the Senate Committee on Energy and Natural Resources, as well as the Budget Committee, both of which were key in the DOE budget process. Congress added a comparable amount to the NIH budget, thereby beginning official funding by both agencies.

Alvin Trivelpiece sought and obtained the approval of DeLisi's proposal by Deputy Secretary William Flynn Martin. This chart[15] was used in the spring of 1986 by Trivelpiece, then Director of the Office of Energy Research in the Department of Energy, to brief Martin and Under Secretary Joseph Salgado regarding his intention to reprogram $4 million to initiate the project with the approval of Secretary Herrington. This reprogramming was followed by a line item budget of $16 million in the Reagan Administration's 1987 budget submission to Congress.[16] It subsequently passed both Houses. The project was planned for 15 years.[17]

Candidate technologies were already being considered for the proposed undertaking at least as early as 1985.[18]

In 1990, the two major funding agencies, DOE and NIH, developed a memorandum of understanding in order to coordinate plans and set the clock for the initiation of the project to 1990.[19] At that time, David Galas was Director of the renamed Office of Biological and Environmental Research in the U.S. Department of Energy's Office of Science, and James Watson headed the NIH Genome Program. In 1993, Aristides Patrinos succeeded Galas and Francis Collins succeeded James Watson, assuming the role of overall project head as Director of the U.S. National Institutes of Health (NIH) National Center for Human Genome Research (which would later become the National Human Genome Research Institute). A working draft of the genome was announced in 2000, and the papers describing it were published in February 2001. A more complete draft was published in 2003, and genome "finishing" work continued for more than a decade.

The $3-billion project was formally founded in 1990 by the US Department of Energy and the National Institutes of Health, and was expected to take 15 years.[20] In addition to the United States, the international consortium comprised geneticists in the United Kingdom, France, Australia, China, and a myriad of other spontaneous collaborations.[21]

Due to widespread international cooperation and advances in the field of genomics (especially in sequence analysis), as well as major advances in computing technology, a 'rough draft' of the genome was finished in 2000 (announced jointly by U.S. President Bill Clinton and the British Prime Minister Tony Blair on June 26, 2000).[22] This first available rough draft assembly of the genome was completed by the Genome Bioinformatics Group at the University of California, Santa Cruz, primarily led by then graduate student Jim Kent. Ongoing sequencing led to the announcement of the essentially complete genome on April 14, 2003, two years earlier than planned.[23][24] In May 2006, another milestone was passed on the way to completion of the project, when the sequence of the last chromosome was published in Nature.[25]

The project did not aim to sequence all the DNA found in human cells. It sequenced only the "euchromatic" regions of the genome, which make up about 90% of it. The other regions, called "heterochromatic", are found in centromeres and telomeres and were not sequenced under the project.[26]

The Human Genome Project was declared complete in April 2003. An initial rough draft of the human genome was available in June 2000, and by February 2001 a working draft had been completed and published, followed by the final sequence mapping of the human genome on April 14, 2003. Although this was reported to cover 99% of the euchromatic human genome with 99.99% accuracy, a major quality assessment of the human genome sequence, published on May 27, 2004, indicated that over 92% of sampling exceeded 99.99% accuracy, which was within the intended goal.[27] Further analyses and papers on the HGP continue to be published.[28]

The sequencing of the human genome holds benefits for many fields, from molecular medicine to human evolution. The Human Genome Project, through its sequencing of the DNA, can advance many areas, including: genotyping of specific viruses to direct appropriate treatment; identification of mutations linked to different forms of cancer; the design of medications and more accurate prediction of their effects; advancement in forensic applied sciences; biofuels and other energy applications; agriculture, animal husbandry, and bioprocessing; risk assessment; and bioarcheology, anthropology, and evolution. Another proposed benefit is the commercial development of genomics research related to DNA-based products, a multibillion-dollar industry.

The sequence of the DNA is stored in databases available to anyone on the Internet. The U.S. National Center for Biotechnology Information (and sister organizations in Europe and Japan) house the gene sequence in a database known as GenBank, along with sequences of known and hypothetical genes and proteins. Other organizations, such as the UCSC Genome Browser at the University of California, Santa Cruz,[29] and Ensembl[30] present additional data and annotation and powerful tools for visualizing and searching it. Computer programs have been developed to analyze the data, because the data itself is difficult to interpret without such programs. Generally speaking, advances in genome sequencing technology have followed Moore's law, a concept from computer science which states that integrated circuits can increase in complexity at an exponential rate.[31] This means that the speed at which whole genomes can be sequenced can increase at a similar rate, as was seen during the development of the above-mentioned Human Genome Project.

The process of identifying the boundaries between genes and other features in a raw DNA sequence is called genome annotation and is in the domain of bioinformatics. While expert biologists make the best annotators, their work proceeds slowly, and computer programs are increasingly used to meet the high-throughput demands of genome sequencing projects. Beginning in 2008, a new technology known as RNA-seq was introduced that allowed scientists to directly sequence the messenger RNA in cells. This replaced previous methods of annotation, which relied on inherent properties of the DNA sequence, with direct measurement, which was much more accurate. Today, annotation of the human genome and other genomes relies primarily on deep sequencing of the transcripts in every human tissue using RNA-seq. These experiments have revealed that over 90% of genes contain at least one, and usually several, alternative splice variants, in which the exons are combined in different ways to produce two or more gene products from the same locus.

The genome published by the HGP does not represent the sequence of every individual's genome. It is the combined mosaic of a small number of anonymous donors, all of European origin. The HGP genome is a scaffold for future work in identifying differences among individuals. Subsequent projects have sequenced the genomes of multiple distinct ethnic groups, though as of today there is still only one "reference genome."

Key findings were reported for both the draft (2001) and complete (2004) genome sequences.

The Human Genome Project was started in 1990 with the goal of sequencing and identifying all three billion chemical units in the human genetic instruction set, finding the genetic roots of disease, and then developing treatments. It is considered a mega-project because the human genome has approximately 3.3 billion base pairs. With the sequence in hand, the next step was to identify the genetic variants that increase the risk for common diseases like cancer and diabetes.[19][36]

It was far too expensive at that time to think of sequencing patients' whole genomes, so the National Institutes of Health embraced the idea of a "shortcut", which was to look just at sites on the genome where many people have a variant DNA unit. The theory behind the shortcut was that, since the major diseases are common, so too would be the genetic variants that caused them. Natural selection keeps the human genome free of variants that damage health before children are grown, the theory held, but fails against variants that strike later in life, allowing them to become quite common. (In 2002 the National Institutes of Health started a $138 million project called the HapMap to catalog the common variants in European, East Asian, and African genomes.)[37]

The genome was broken into smaller pieces, approximately 150,000 base pairs in length.[36] These pieces were then ligated into a type of vector known as a "bacterial artificial chromosome", or BAC, derived from bacterial chromosomes which have been genetically engineered. The vectors containing the genes can be inserted into bacteria, where they are copied by the bacterial DNA replication machinery. Each of these pieces was then sequenced separately as a small "shotgun" project and then assembled. The assembled 150,000-base-pair pieces were then put together to recreate the chromosomes. This is known as the "hierarchical shotgun" approach, because the genome is first broken into relatively large chunks, which are then mapped to chromosomes before being selected for sequencing.[38][39]
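The logic of the hierarchical approach can be illustrated at toy scale. The sketch below is illustrative only: real shotgun reads are sampled at random and assembly requires detecting overlaps computationally, whereas this demo tiles reads deterministically so that a trivial merge suffices. It breaks a genome into mapped, BAC-sized pieces, "sequences" each one independently, and places the assembled pieces back in their mapped order:

```python
import random

def shotgun(fragment, read_len=10, step=5):
    """Stand-in for shotgun sequencing: tile one BAC-sized fragment
    with overlapping reads."""
    return [fragment[i:i + read_len] for i in range(0, len(fragment) - 1, step)]

def assemble(reads, overlap=5):
    """Merge overlapping reads back into a contiguous fragment."""
    seq = reads[0]
    for read in reads[1:]:
        seq += read[overlap:]
    return seq

random.seed(1)
genome = "".join(random.choice("ACGT") for _ in range(600))

# Step 1: break the genome into mapped, BAC-sized pieces
# (100 bp here stands in for the real ~150,000 bp).
bac_size = 100
bacs = [genome[i:i + bac_size] for i in range(0, len(genome), bac_size)]

# Step 2: shotgun-sequence and assemble each BAC independently.
assembled = [assemble(shotgun(bac)) for bac in bacs]

# Step 3: because each BAC was mapped before sequencing, the assembled
# pieces can simply be concatenated in order to recover the genome.
print("".join(assembled) == genome)  # True
```

Celera's rival whole-genome shotgun strategy, described below, skipped the mapping step, shearing the entire genome at once and relying on computation to find the overlaps.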

Funding came from the US government through the National Institutes of Health in the United States, and a UK charity organization, the Wellcome Trust, as well as numerous other groups from around the world. The funding supported a number of large sequencing centers including those at Whitehead Institute, the Sanger Centre, Washington University in St. Louis, and Baylor College of Medicine.[20][40]

The United Nations Educational, Scientific and Cultural Organization (UNESCO) served as an important channel for the involvement of developing countries in the Human Genome Project.[41]

In 1998, a similar, privately funded quest was launched by the American researcher Craig Venter and his firm Celera Genomics. Venter was a scientist at the NIH during the early 1990s when the project was initiated. The $300,000,000 Celera effort was intended to proceed at a faster pace and at a fraction of the cost of the roughly $3 billion publicly funded project. The Celera approach was able to proceed at a much more rapid rate and at a lower cost than the public project partly because it relied upon data made available by the publicly funded project.[42]

Celera used a technique called whole genome shotgun sequencing, employing pairwise end sequencing,[43] which had been used to sequence bacterial genomes of up to six million base pairs in length, but not for anything nearly as large as the three billion base pair human genome.

Celera initially announced that it would seek patent protection on "only 200-300" genes, but later amended this to seeking "intellectual property protection" on "fully-characterized important structures" amounting to 100-300 targets. The firm eventually filed preliminary ("place-holder") patent applications on 6,500 whole or partial genes. Celera also promised to publish its findings in accordance with the terms of the 1996 "Bermuda Statement", by releasing new data annually (the HGP released its new data daily), although, unlike the publicly funded project, it would not permit free redistribution or scientific use of the data. The publicly funded competitors were compelled to release the first draft of the human genome before Celera for this reason. On July 7, 2000, the UCSC Genome Bioinformatics Group released a first working draft on the web. The scientific community downloaded about 500 GB of information from the UCSC genome server in the first 24 hours of free and unrestricted access.[44]

In March 2000, President Clinton announced that the genome sequence could not be patented, and should be made freely available to all researchers. The statement sent Celera's stock plummeting and dragged down the biotechnology-heavy Nasdaq. The biotechnology sector lost about $50 billion in market capitalization in two days.

Although the working draft was announced in June 2000, it was not until February 2001 that Celera and the HGP scientists published details of their drafts. Special issues of Nature (which published the publicly funded project's scientific paper)[45] and Science (which published Celera's paper[46]) described the methods used to produce the draft sequence and offered analysis of the sequence. These drafts covered about 83% of the genome (90% of the euchromatic regions, with 150,000 gaps, and with the order and orientation of many segments not yet established). In February 2001, at the time of the joint publications, press releases announced that the project had been completed by both groups. Improved drafts were announced in 2003 and 2005, bringing coverage to approximately 92% of the sequence.

In the IHGSC international public-sector Human Genome Project (HGP), researchers collected blood (female) or sperm (male) samples from a large number of donors. Only a few of many collected samples were processed as DNA resources. Thus the donor identities were protected so neither donors nor scientists could know whose DNA was sequenced. DNA clones from many different libraries were used in the overall project, with most of those libraries being created by Pieter J. de Jong's lab. Much of the sequence (>70%) of the reference genome produced by the public HGP came from a single anonymous male donor from Buffalo, New York (code name RP11).[47][48]

HGP scientists used white blood cells from the blood of two male and two female donors (randomly selected from 20 of each) each donor yielding a separate DNA library. One of these libraries (RP11) was used considerably more than others, due to quality considerations. One minor technical issue is that male samples contain just over half as much DNA from the sex chromosomes (one X chromosome and one Y chromosome) compared to female samples (which contain two X chromosomes). The other 22 chromosomes (the autosomes) are the same for both sexes.

Although the main sequencing phase of the HGP has been completed, studies of DNA variation continue in the International HapMap Project, whose goal is to identify patterns of single-nucleotide polymorphism (SNP) groups (called haplotypes, or haps). The DNA samples for the HapMap came from a total of 270 individuals: Yoruba people in Ibadan, Nigeria; Japanese people in Tokyo; Han Chinese in Beijing; and the French Centre d'Etude du Polymorphisme Humain (CEPH) resource, which consisted of residents of the United States having ancestry from Western and Northern Europe.

In the Celera Genomics private-sector project, DNA from five different individuals was used for sequencing. The lead scientist of Celera Genomics at that time, Craig Venter, later acknowledged (in a public letter to the journal Science) that his DNA was one of 21 samples in the pool, five of which were selected for use.[49][50]

In 2007, a team led by Jonathan Rothberg published James Watson's entire genome, unveiling the six-billion-nucleotide genome of a single individual for the first time.[51]

The work on interpretation and analysis of genome data is still in its initial stages. It is anticipated that detailed knowledge of the human genome will provide new avenues for advances in medicine and biotechnology. Clear practical results of the project emerged even before the work was finished. For example, a number of companies, such as Myriad Genetics, started offering easy ways to administer genetic tests that can show predisposition to a variety of illnesses, including breast cancer, hemostasis disorders, cystic fibrosis, liver diseases, and many others. Also, the study of the etiologies of cancers, Alzheimer's disease, and other conditions of clinical interest is considered likely to benefit from genome information and possibly may lead in the long term to significant advances in their management.[37][52]

There are also many tangible benefits for biologists. For example, a researcher investigating a certain form of cancer may have narrowed down his or her search to a particular gene. By visiting the human genome database on the World Wide Web, this researcher can examine what other scientists have written about this gene, including (potentially) the three-dimensional structure of its product, its function(s), its evolutionary relationships to other human genes or to genes in mice or yeast or fruit flies, possible detrimental mutations, interactions with other genes, body tissues in which this gene is activated, and diseases associated with this gene or other datatypes. Further, a deeper understanding of the disease processes at the level of molecular biology may suggest new therapeutic procedures. Given the established importance of DNA in molecular biology and its central role in determining the fundamental operation of cellular processes, it is likely that expanded knowledge in this area will facilitate medical advances in numerous areas of clinical interest that may not have been possible without it.[53]

The analysis of similarities between DNA sequences from different organisms is also opening new avenues in the study of evolution. In many cases, evolutionary questions can now be framed in terms of molecular biology; indeed, many major evolutionary milestones (the emergence of the ribosome and organelles, the development of embryos with body plans, the vertebrate immune system) can be related to the molecular level. Many questions about the similarities and differences between humans and our closest relatives (the primates, and indeed the other mammals) are expected to be illuminated by the data in this project.[37][54]

The project inspired and paved the way for genomic work in other fields, such as agriculture. For example, by studying the genetic composition of Triticum aestivum, the world's most commonly used bread wheat, great insight has been gained into the ways that domestication has impacted the evolution of the plant.[55] Which loci are most susceptible to manipulation, and how does this play out in evolutionary terms? Genetic sequencing has allowed these questions to be addressed for the first time, as specific loci can be compared in wild and domesticated strains of the plant. This will allow for advances in genetic modification in the future, which could yield healthier, more disease-resistant wheat crops.

At the onset of the Human Genome Project, several ethical, legal, and social concerns were raised with regard to how increased knowledge of the human genome could be used to discriminate against people. One of the main concerns was the fear that employers and health insurance companies would refuse to hire individuals or to provide insurance because of a health concern indicated by someone's genes.[56] In 1996 the United States passed the Health Insurance Portability and Accountability Act (HIPAA), which protects against the unauthorized and non-consensual release of individually identifiable health information to any entity not actively engaged in the provision of healthcare services to a patient.[57]

Along with identifying all of the approximately 20,000-25,000 genes in the human genome, the Human Genome Project also sought to address the ethical, legal, and social issues created by the onset of the project. To that end, the Ethical, Legal, and Social Implications (ELSI) program was founded in 1990. Five percent of the annual budget was allocated to address the ELSI issues arising from the project.[20][58] This budget started at approximately $1.57 million in 1990 and increased to approximately $18 million by 2014.[59]

Whilst the project may offer significant benefits to medicine and scientific research, some authors have emphasised the need to address the potential social consequences of mapping the human genome. "Molecularising disease and their possible cure will have a profound impact on what patients expect from medical help and the new generation of doctors' perception of illness."[60]

Read more from the original source:
Human Genome Project - Wikipedia

Posted in Genome | Comments Off on Human Genome Project – Wikipedia