Daily Archives: June 17, 2016

Cloning – Wikipedia, the free encyclopedia

Posted: June 17, 2016 at 4:55 am

In biology, cloning is the process of producing similar populations of genetically identical individuals that occurs in nature when organisms such as bacteria, insects or plants reproduce asexually. Cloning in biotechnology refers to processes used to create copies of DNA fragments (molecular cloning), cells (cell cloning), or organisms. The term also refers to the production of multiple copies of a product such as digital media or software.

The term clone, invented by J. B. S. Haldane, is derived from the Ancient Greek word klōn (κλών), "twig", referring to the process whereby a new plant can be created from a twig. In horticulture, the spelling clon was used until the twentieth century; the final e came into use to indicate the vowel is a "long o" instead of a "short o".[1][2] Since the term entered the popular lexicon in a more general context, the spelling clone has been used exclusively.

In botany, the term lusus was traditionally used.[3]:21, 43

Cloning is a natural form of reproduction that has allowed life forms to spread for more than 50,000 years. It is the reproduction method used by plants, fungi, and bacteria, and is also the way that clonal colonies reproduce themselves.[4][5] Examples of these organisms include blueberry plants, hazel trees, the Pando trees,[6][7] the Kentucky coffeetree, Myricas, and the American sweetgum.

Molecular cloning refers to the process of making multiple molecules. Cloning is commonly used to amplify DNA fragments containing whole genes, but it can also be used to amplify any DNA sequence such as promoters, non-coding sequences and randomly fragmented DNA. It is used in a wide array of biological experiments and practical applications ranging from genetic fingerprinting to large scale protein production. Occasionally, the term cloning is misleadingly used to refer to the identification of the chromosomal location of a gene associated with a particular phenotype of interest, such as in positional cloning. In practice, localization of the gene to a chromosome or genomic region does not necessarily enable one to isolate or amplify the relevant genomic sequence. To amplify any DNA sequence in a living organism, that sequence must be linked to an origin of replication, which is a sequence of DNA capable of directing the propagation of itself and any linked sequence. However, a number of other features are needed, and a variety of specialised cloning vectors (small piece of DNA into which a foreign DNA fragment can be inserted) exist that allow protein production, affinity tagging, single stranded RNA or DNA production and a host of other molecular biology tools.
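Whether a fragment is amplified by replication in a bacterial host or by PCR, the copy number grows exponentially, doubling (at best) once per generation or cycle. A minimal sketch of that arithmetic (the function name and efficiency parameter are illustrative idealizations, not measured values):

```python
def copies_after(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Idealized exponential amplification: each cycle or generation
    multiplies the copy number by (1 + efficiency), i.e. a clean
    doubling when efficiency is 1.0."""
    return initial_copies * (1 + efficiency) ** cycles

# 30 ideal doublings turn a single template into over a billion copies.
print(copies_after(1, 30))        # 1073741824.0
print(copies_after(1, 30, 0.9))   # sub-ideal per-cycle efficiency yields fewer
```

This is why even a single gene-sized fragment, once linked to an origin of replication or primed for PCR, can be produced in the quantities needed for fingerprinting or large-scale protein work.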

Cloning of any DNA fragment essentially involves four steps:[8] fragmentation of the DNA of interest, ligation of the fragment into a vector, transfection of the vector into host cells, and screening/selection of successfully transfected cells.

Although these steps are invariable among cloning procedures, a number of alternative routes can be selected; these are summarized as a cloning strategy.

Initially, the DNA of interest needs to be isolated to provide a DNA segment of suitable size. Subsequently, a ligation procedure is used in which the amplified fragment is inserted into a vector (a piece of DNA). The vector (which is frequently circular) is linearised using restriction enzymes and incubated with the fragment of interest under appropriate conditions with an enzyme called DNA ligase. Following ligation, the vector with the insert of interest is transfected into cells. A number of alternative techniques are available, such as chemical sensitization of cells, electroporation, optical injection and biolistics. Finally, the transfected cells are cultured. As the aforementioned procedures are of particularly low efficiency, there is a need to identify the cells that have been successfully transfected with the vector construct containing the desired insertion sequence in the required orientation. Modern cloning vectors include selectable antibiotic resistance markers, which allow only cells into which the vector has been transfected to grow. Additionally, the cloning vectors may contain colour selection markers, which provide blue/white screening (alpha-complementation) on X-gal medium. Nevertheless, these selection steps do not absolutely guarantee that the DNA insert is present in the cells obtained. Further investigation of the resulting colonies is required to confirm that cloning was successful. This may be accomplished by means of PCR, restriction fragment analysis and/or DNA sequencing.
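The linearisation and restriction-fragment-analysis steps lend themselves to a simple in-silico illustration. The sketch below is a toy model only (the sequence is made up, and real digests must account for both strands, sticky-end chemistry, and partial cuts); it locates EcoRI recognition sites (GAATTC, which the enzyme cuts after the first base, G^AATTC) and splits a sequence at each one:

```python
# Simplified in-silico restriction digest: locate recognition sites and cut.
# EcoRI recognizes GAATTC and cuts after the first base (G^AATTC),
# leaving complementary AATT sticky ends on the two fragments.

def digest(seq: str, site: str = "GAATTC", cut_offset: int = 1) -> list[str]:
    """Cut seq at every occurrence of the recognition site."""
    fragments, start = [], 0
    i = seq.find(site)
    while i != -1:
        fragments.append(seq[start:i + cut_offset])  # fragment ends at cut point
        start = i + cut_offset
        i = seq.find(site, i + 1)
    fragments.append(seq[start:])                    # remainder after last cut
    return fragments

vector = "ATGCGAATTCGGTACCGAATTCTTAA"   # made-up sequence with two EcoRI sites
print(digest(vector))                   # three fragments from two cuts
```

Rejoining the fragments reproduces the original sequence, which is the in-silico analogue of confirming a construct by restriction fragment analysis.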

Cloning a cell means to derive a population of cells from a single cell. In the case of unicellular organisms such as bacteria and yeast, this process is remarkably simple and essentially only requires the inoculation of the appropriate medium. However, in the case of cell cultures from multi-cellular organisms, cell cloning is an arduous task as these cells will not readily grow in standard media.

A useful tissue culture technique used to clone distinct lineages of cell lines involves the use of cloning rings (cylinders).[9] In this technique a single-cell suspension of cells that have been exposed to a mutagenic agent or drug used to drive selection is plated at high dilution to create isolated colonies, each arising from a single, potentially clonally distinct cell. At an early growth stage when colonies consist of only a few cells, sterile polystyrene rings (cloning rings), which have been dipped in grease, are placed over an individual colony and a small amount of trypsin is added. Cloned cells are collected from inside the ring and transferred to a new vessel for further growth.

Somatic-cell nuclear transfer, known as SCNT, can also be used to create embryos for research or therapeutic purposes. The most likely purpose for this is to produce embryos for use in stem cell research. This process is also called "research cloning" or "therapeutic cloning." The goal is not to create cloned human beings (called "reproductive cloning"), but rather to harvest stem cells that can be used to study human development and to potentially treat disease. While a clonal human blastocyst has been created, stem cell lines are yet to be isolated from a clonal source.[10]

Therapeutic cloning is achieved by creating embryonic stem cells in the hopes of treating diseases such as diabetes and Alzheimer's. The process begins by removing the nucleus (containing the DNA) from an egg cell and inserting a nucleus from the adult cell to be cloned.[11] In the case of someone with Alzheimer's disease, the nucleus from a skin cell of that patient is placed into an empty egg. The reprogrammed cell begins to develop into an embryo because the egg reacts with the transferred nucleus. The embryo will become genetically identical to the patient.[11] The embryo will then form a blastocyst which has the potential to form/become any cell in the body.[12]

SCNT is used for cloning because somatic cells can be easily acquired and cultured in the lab. The process can also be used to add or delete specific genes in farm animals. A key point is that cloning is achieved when the oocyte maintains its normal functions but, instead of combining sperm and egg genomes, the donor's somatic cell nucleus is inserted into the oocyte.[13] The oocyte then reacts to the somatic cell nucleus the same way it would to a sperm cell.[13]

The process of cloning a particular farm animal using SCNT is largely the same for all animals. The first step is to collect the somatic cells from the animal that will be cloned. The somatic cells can be used immediately or stored in the laboratory for later use.[13] The hardest part of SCNT is removing the maternal DNA from an oocyte at metaphase II. Once this has been done, the somatic nucleus can be inserted into the egg cytoplasm,[13] creating a one-cell embryo. The combined somatic nucleus and egg cytoplasm are then exposed to an electrical current,[13] which, if all goes well, triggers the cloned embryo to begin development. Successfully developed embryos are then placed in surrogate recipients, such as a cow or sheep in the case of farm animals.[13]

SCNT is seen as a good method for producing agricultural animals for food consumption. It has been used to successfully clone sheep, cattle, goats, and pigs. SCNT is also seen as a possible way to clone endangered species on the verge of extinction.[13] However, the stresses placed on both the egg cell and the introduced nucleus can be enormous, which led to a high loss of resulting cells in early research. For example, the cloned sheep Dolly was born after 277 eggs were used for SCNT, which created 29 viable embryos. Only three of these embryos survived until birth, and only one survived to adulthood.[14] As the procedure could not be automated and had to be performed manually under a microscope, SCNT was very resource intensive. The biochemistry involved in reprogramming the differentiated somatic cell nucleus and activating the recipient egg was also far from well understood. However, by 2014 researchers were reporting cloning success rates of seven to eight out of ten,[15] and in 2016 the Korean company Sooam Biotech was reported to be producing 500 cloned embryos per day.[16]
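The attrition figures quoted for Dolly can be made concrete with a quick stage-by-stage calculation (the numbers below are exactly those cited above; the stage labels are just descriptive):

```python
# Stage-wise attrition in the Dolly SCNT experiment, per the figures above.
stages = [
    ("eggs used for SCNT", 277),
    ("viable embryos", 29),
    ("lambs born", 3),
    ("survived to adulthood", 1),
]

# Yield of each stage relative to the previous one.
for (name, count), (_, previous) in zip(stages[1:], stages):
    print(f"{name}: {count}/{previous} = {count / previous:.1%}")

# End-to-end efficiency: one adult animal from 277 eggs.
print(f"overall: {stages[-1][1]}/{stages[0][1]} = {stages[-1][1] / stages[0][1]:.2%}")
```

Roughly one in ten eggs yielded a viable embryo, and well under one percent of the starting eggs produced an animal that reached adulthood, which is why the manual, non-automatable procedure was so resource intensive.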

In SCNT, not all of the donor cell's genetic information is transferred, as the donor cell's mitochondria that contain their own mitochondrial DNA are left behind. The resulting hybrid cells retain those mitochondrial structures which originally belonged to the egg. As a consequence, clones such as Dolly that are born from SCNT are not perfect copies of the donor of the nucleus.

Organism cloning (also called reproductive cloning) refers to the procedure of creating a new multicellular organism, genetically identical to another. In essence this form of cloning is an asexual method of reproduction, where fertilization or inter-gamete contact does not take place. Asexual reproduction is a naturally occurring phenomenon in many species, including most plants (see vegetative reproduction) and some insects. Scientists have made some major achievements with cloning, including the asexual reproduction of sheep and cows. There is a lot of ethical debate over whether or not cloning should be used. However, cloning, or asexual propagation,[17] has been common practice in the horticultural world for hundreds of years.

The term clone is used in horticulture to refer to descendants of a single plant which were produced by vegetative reproduction or apomixis. Many horticultural plant cultivars are clones, having been derived from a single individual, multiplied by some process other than sexual reproduction.[18] As an example, some European cultivars of grapes represent clones that have been propagated for over two millennia. Other examples are potato and banana.[19] Grafting can be regarded as cloning, since all the shoots and branches coming from the graft are genetically a clone of a single individual, but this particular kind of cloning has not come under ethical scrutiny and is generally treated as an entirely different kind of operation.

Many trees, shrubs, vines, ferns and other herbaceous perennials form clonal colonies naturally. Parts of an individual plant may become detached by fragmentation and grow on to become separate clonal individuals. A common example is in the vegetative reproduction of moss and liverwort gametophyte clones by means of gemmae. Some vascular plants, e.g. dandelion and certain viviparous grasses, also form seeds asexually, termed apomixis, resulting in clonal populations of genetically identical individuals.

Clonal derivation exists in nature in some animal species and is referred to as parthenogenesis (reproduction of an organism by itself without a mate). This is an asexual form of reproduction that is only found in females of some insects, crustaceans, nematodes,[20] fish (for example the hammerhead shark[21]), the Komodo dragon[21] and lizards. The growth and development occurs without fertilization by a male. In plants, parthenogenesis means the development of an embryo from an unfertilized egg cell, and is a component process of apomixis. In species that use the XY sex-determination system, the offspring will always be female. An example is the little fire ant (Wasmannia auropunctata), which is native to Central and South America but has spread throughout many tropical environments.

Artificial cloning of organisms may also be called reproductive cloning.

Hans Spemann, a German embryologist, was awarded a Nobel Prize in Physiology or Medicine in 1935 for his discovery of the effect now known as embryonic induction, exercised by various parts of the embryo, which directs the development of groups of cells into particular tissues and organs. In 1928 he and his student, Hilde Mangold, were the first to perform somatic-cell nuclear transfer using amphibian embryos, one of the first moves towards cloning.[22]

Reproductive cloning generally uses "somatic cell nuclear transfer" (SCNT) to create animals that are genetically identical. This process entails the transfer of a nucleus from a donor adult cell (somatic cell) to an egg from which the nucleus has been removed, or to a cell from a blastocyst from which the nucleus has been removed.[23] If the egg begins to divide normally it is transferred into the uterus of the surrogate mother. Such clones are not strictly identical, since the somatic cells may contain mutations in their nuclear DNA. Additionally, the mitochondria in the cytoplasm also contain DNA, and during SCNT this mitochondrial DNA comes wholly from the cytoplasmic donor's egg; thus the mitochondrial genome is not the same as that of the nucleus donor cell. This may have important implications for cross-species nuclear transfer, in which nuclear-mitochondrial incompatibilities may lead to death.

Artificial embryo splitting or embryo twinning, a technique that creates monozygotic twins from a single embryo, is not considered in the same fashion as other methods of cloning. During that procedure, a donor embryo is split into two distinct embryos that can then be transferred via embryo transfer. It is optimally performed at the 6- to 8-cell stage, where it can be used as an expansion of IVF to increase the number of available embryos.[24] If both embryos are successful, the procedure gives rise to monozygotic (identical) twins.

Dolly, a Finn-Dorset ewe, was the first mammal to have been successfully cloned from an adult somatic cell. Dolly was formed by taking a cell from the udder of her six-year-old biological mother.[25] Dolly's embryo was created by taking the cell and inserting it into a sheep ovum. It took 434 attempts before an embryo was successful.[26] The embryo was then placed inside a female sheep that went through a normal pregnancy.[27] Dolly was cloned at the Roslin Institute in Scotland and lived there from her birth in 1996 until her death in 2003, when she was six. She was born on 5 July 1996 but not announced to the world until 22 February 1997.[28] Her stuffed remains were placed at Edinburgh's Royal Museum, part of the National Museums of Scotland.[29]

Dolly was publicly significant because the effort showed that genetic material from a specific adult cell, programmed to express only a distinct subset of its genes, can be reprogrammed to grow an entirely new organism. Before this demonstration, John Gurdon had shown that nuclei from differentiated cells could give rise to an entire organism after transplantation into an enucleated egg.[30] However, this concept had not yet been demonstrated in a mammalian system.

The first mammalian cloning (resulting in Dolly the sheep) had a success rate of 29 embryos per 277 eggs used, which produced three lambs at birth, one of which lived. In a bovine experiment involving 70 cloned calves, one-third of the calves died young. The first successfully cloned horse, Prometea, took 814 attempts. Notably, although the first clones were frogs, no adult cloned frog has yet been produced from a somatic adult nucleus donor cell.

There were early claims that Dolly the sheep had pathologies resembling accelerated aging. Scientists speculated that Dolly's death in 2003 was related to the shortening of telomeres, DNA-protein complexes that protect the end of linear chromosomes. However, other researchers, including Ian Wilmut, who led the team that successfully cloned Dolly, argue that Dolly's early death from respiratory infection was unrelated to deficiencies in the cloning process. The idea that nuclei do not irreversibly age was shown in 2013 to hold true for mice.[31]

Dolly was named after performer Dolly Parton because the cells cloned to make her were from a mammary gland cell, and Parton is known for her ample cleavage.[32]

Modern cloning techniques involving nuclear transfer have been successfully performed on several species.

Human cloning is the creation of a genetically identical copy of a human. The term is generally used to refer to artificial human cloning, which is the reproduction of human cells and tissues. It does not refer to the natural conception and delivery of identical twins. The possibility of human cloning has raised controversies. These ethical concerns have prompted several nations to pass legislation regarding human cloning and its legality.

Two commonly discussed types of theoretical human cloning are therapeutic cloning and reproductive cloning. Therapeutic cloning would involve cloning cells from a human for use in medicine and transplants, and is an active area of research, but is not in medical practice anywhere in the world, as of 2014. Two common methods of therapeutic cloning that are being researched are somatic-cell nuclear transfer and, more recently, pluripotent stem cell induction. Reproductive cloning would involve making an entire cloned human, instead of just specific cells or tissues.[57]

There are a variety of ethical positions regarding the possibilities of cloning, especially human cloning. While many of these views are religious in origin, the questions raised by cloning are faced by secular perspectives as well. Perspectives on human cloning are theoretical, as human therapeutic and reproductive cloning are not commercially used; animals are currently cloned in laboratories and in livestock production.

Advocates support development of therapeutic cloning in order to generate tissues and whole organs to treat patients who otherwise cannot obtain transplants,[58] to avoid the need for immunosuppressive drugs,[57] and to stave off the effects of aging.[59] Advocates for reproductive cloning believe that parents who cannot otherwise procreate should have access to the technology.[60]

Opponents of cloning have concerns that technology is not yet developed enough to be safe[61] and that it could be prone to abuse (leading to the generation of humans from whom organs and tissues would be harvested),[62][63] as well as concerns about how cloned individuals could integrate with families and with society at large.[64][65]

Religious groups are divided, with some opposing the technology as usurping "God's place" and, to the extent embryos are used, destroying a human life; others support therapeutic cloning's potential life-saving benefits.[66][67]

Cloning of animals is opposed by animal-groups due to the number of cloned animals that suffer from malformations before they die,[68][69] and while food from cloned animals has been approved by the US FDA,[70][71] its use is opposed by groups concerned about food safety.[72][73][74]

Cloning, or more precisely, the reconstruction of functional DNA from extinct species, has for decades been a dream. Possible implications of this were dramatized in the 1984 novel Carnosaur and the 1990 novel Jurassic Park.[75][76] The best current cloning techniques have an average success rate of 9.4 percent[77] (and as high as 25 percent[31]) when working with familiar species such as mice,[note 1] while cloning wild animals is usually less than 1 percent successful.[80] Several tissue banks have come into existence, including the "Frozen Zoo" at the San Diego Zoo, to store frozen tissue from the world's rarest and most endangered species.[75][81][82]

In 2001, a cow named Bessie gave birth to a cloned Asian gaur, an endangered species, but the calf died after two days. In 2003, a banteng was successfully cloned, followed by three African wildcats from a thawed frozen embryo. These successes provided hope that similar techniques (using surrogate mothers of another species) might be used to clone extinct species. Anticipating this possibility, tissue samples from the last bucardo (Pyrenean ibex) were frozen in liquid nitrogen immediately after it died in 2000. Researchers are also considering cloning endangered species such as the giant panda and cheetah.

In 2002, geneticists at the Australian Museum announced that they had replicated DNA of the thylacine (Tasmanian tiger), at the time extinct for about 65 years, using polymerase chain reaction.[83] However, on 15 February 2005 the museum announced that it was stopping the project after tests showed the specimens' DNA had been too badly degraded by the (ethanol) preservative. On 15 May 2005 it was announced that the thylacine project would be revived, with new participation from researchers in New South Wales and Victoria.

In January 2009, the Pyrenean ibex mentioned above became the first extinct animal to be cloned, at the Centre of Food Technology and Research of Aragon, using a preserved frozen cell nucleus from the skin samples and domestic goat egg cells. The ibex died shortly after birth due to physical defects in its lungs.[84]

One of the most anticipated targets for cloning was once the woolly mammoth, but attempts to extract DNA from frozen mammoths have been unsuccessful, though a joint Russo-Japanese team is currently working toward this goal. In January 2011, it was reported by Yomiuri Shimbun that a team of scientists headed by Akira Iritani of Kyoto University had built upon research by Dr. Wakayama, saying that they would extract DNA from a mammoth carcass that had been preserved in a Russian laboratory and insert it into the egg cells of an African elephant in hopes of producing a mammoth embryo. The researchers said they hoped to produce a baby mammoth within six years.[85][86] It was noted, however, that the result, if possible, would be an elephant-mammoth hybrid rather than a true mammoth.[87] Another problem is the survival of the reconstructed mammoth: ruminants rely on a symbiosis with specific microbiota in their stomachs for digestion.[87]

Scientists at the University of Newcastle and University of New South Wales announced in March 2013 that the very recently extinct gastric-brooding frog would be the subject of a cloning attempt to resurrect the species.[88]

Many such "de-extinction" projects are described in the Long Now Foundation's Revive and Restore Project.[89]

After an eight-year project involving the use of a pioneering cloning technique, Japanese researchers created 25 generations of healthy cloned mice with normal lifespans, demonstrating that clones are not intrinsically shorter-lived than naturally born animals.[31][90]

In its 8 November 1993 issue, Time portrayed cloning in a negative way, modifying Michelangelo's Creation of Adam to depict Adam with five identical hands. Newsweek's 10 March 1997 issue also critiqued the ethics of human cloning, and included a graphic depicting identical babies in beakers.

Cloning is a recurring theme in a wide variety of contemporary science fiction, ranging from action films such as Jurassic Park (1993), The 6th Day (2000), Resident Evil (2002), Star Wars (2002) and The Island (2005), to comedies such as Woody Allen's 1973 film Sleeper.[91]

Science fiction has used cloning, most commonly and specifically human cloning, because it raises controversial questions of identity.[92][93] A Number is a 2002 play by English playwright Caryl Churchill which addresses the subject of human cloning and identity, especially nature and nurture. The story, set in the near future, is structured around the conflict between a father (Salter) and his sons (Bernard 1, Bernard 2, and Michael Black), two of whom are clones of the first one. A Number was adapted by Caryl Churchill for television, in a co-production between the BBC and HBO Films.[94]

A recurring sub-theme of cloning fiction is the use of clones as a supply of organs for transplantation. The 2005 Kazuo Ishiguro novel Never Let Me Go and the 2010 film adaptation[95] are set in an alternate history in which cloned humans are created for the sole purpose of providing organ donations to naturally born humans, despite the fact that they are fully sentient and self-aware. The 2005 film The Island[96] revolves around a similar plot, with the exception that the clones are unaware of the reason for their existence.

The use of human cloning for military purposes has also been explored in several works. Star Wars portrays human cloning in Clone Wars.[97]

The exploitation of human clones for dangerous and undesirable work was examined in the 2009 British science fiction film Moon.[98] In the futuristic novel Cloud Atlas and subsequent film, one of the story lines focuses on a genetically-engineered fabricant clone named Sonmi~451 who is one of millions raised in an artificial "wombtank," destined to serve from birth. She is one of thousands of clones created for manual and emotional labor; Sonmi herself works as a server in a restaurant. She later discovers that the sole source of food for clones, called 'Soap', is manufactured from the clones themselves.[99]

Cloning has been used in fiction as a way of recreating historical figures. In the 1976 Ira Levin novel The Boys from Brazil and its 1978 film adaptation, Josef Mengele uses cloning to create copies of Adolf Hitler.[100]

In 2012, a Japanese television show named "Bunshin" was created. The story's main character, Mariko, is a woman studying child welfare in Hokkaido. She grew up always doubtful of the love of her mother, who looked nothing like her and who died nine years before. One day, she finds some of her mother's belongings at a relative's house and heads to Tokyo to seek out the truth behind her birth. She later discovers that she was a clone.[101]

In the 2013 television show Orphan Black, cloning is used as a scientific study on the behavioral adaptation of the clones.[102] In a similar vein, the book The Double by Nobel Prize winner José Saramago explores the emotional experience of a man who discovers that he is a clone.[103]


How Cloning Works | HowStuffWorks

Posted: at 4:55 am

On Jan. 8, 2001, scientists at Advanced Cell Technology, Inc., announced the birth of the first clone of an endangered animal, a baby bull gaur (a large wild ox from India and southeast Asia) named Noah. Although Noah died of an infection unrelated to the procedure, the experiment demonstrated that it is possible to save endangered species through cloning.

Cloning is the process of making a genetically identical organism through nonsexual means. It has been used for many years to produce plants (even growing a plant from a cutting is a type of cloning).

Animal cloning has been the subject of scientific experiments for years, but garnered little attention until the 1996 birth of the first mammal cloned from an adult cell, a sheep named Dolly. Since Dolly, several scientists have cloned other animals, including cows and mice. The recent success in cloning animals has sparked fierce debates among scientists, politicians and the general public about the use and morality of cloning plants, animals and possibly humans.

In this article, we will examine how cloning works and look at possible uses of this technology.


Reasons Against Cloning – VIDEOS & ARTICLES

Posted: at 4:55 am

Written by Patrick Dixon

Futurist Keynote Speaker: Posts, Slides, Videos - What is Human Cloning? How to Clone. But Ethical?

Human cloning: who is cloning humans and arguments against cloning (2007)

How human clones are being made - for medical research. Arguments for and against human cloning research. Why some people want to clone themselves or even to clone the dead (and not just cloning pets).

Why investors are moving away from human cloning and why human cloning now looks a last-century way to fight disease (2007)

Should we ban human cloning? Arguments against cloning

An abnormal baby would be a nightmare come true. The technique is extremely risky right now. A particular worry is the possibility that the genetic material used from the adult will continue to age so that the genes in a newborn baby clone could be - say - 30 years old or more on the day of birth. Many attempts at animal cloning produced disfigured monsters with severe abnormalities. So that would mean creating cloned embryos, implanting them and destroying (presumably) those that look imperfect as they grow in the womb. However some abnormalities may not appear till after birth. A cloned cow recently died several weeks after birth with a huge abnormality of blood cell production. Dolly the Sheep died prematurely of severe lung disease in February 2003, and also suffered from arthritis at an unexpectedly early age - probably linked to the cloning process.

Even if a few cloned babies are born apparently normal, we will have to wait up to 20 years to be sure they are not going to have problems later - for example, growing old too fast. Every time a clone is made it is like throwing the dice, and even a string of "healthy" clones being born would not change the likelihood that many clones born in future may have severe medical problems. And of course, that's just the ones born. What about all the disfigured and highly abnormal clones that either spontaneously aborted or were destroyed / terminated by scientists worried about the horrors they might be creating?

A child grows up knowing her mother is her sister, her grandmother is her mother. Her father is her brother-in-law. Every time her mother looks at her, she is seeing herself growing up. Unbearable emotional pressures on a teenager trying to establish his or her identity. What happens to a marriage when the "father" sees his wife's clone grow up into the exact replica (by appearance) of the beautiful 18 year old he fell in love with 35 years ago? A sexual relationship would of course be with his wife's twin, no incest involved technically.

Or maybe the child knows it is the twin of a dead brother or sister. What kind of pressures will he or she feel, knowing they were made as a direct replacement for another? It is a human experiment doomed to failure because the child will NOT be identical in every way, despite the hopes of the parents. One huge reason will be that the child will be brought up in a highly abnormal household: one where grief has been diverted into making a clone instead of adjusting to loss. The family environment will be totally different from the one the other twin experienced. That itself will place great pressures on the emotional development of the child. You will not find a child psychiatrist in the world who could possibly say that there will not be very significant emotional risk to the cloned child as a result of these pressures.

What would Hitler have done with cloning technology if available in the 1940s? There are powerful leaders in every generation who will seek to abuse this technology for their own purposes. Going ahead with cloning technology makes this far more likely. You cannot have so-called therapeutic cloning without reproductive cloning because the technique to make cloned babies is the same as to make a cloned embryo to try to make replacement tissues. And at the speed at which biotech is accelerating there will soon be other ways to get such cells - adult stem cell technology. It is rather crude to create a complete embryonic identical twin embryo just to get hold of stem cells to make - say - nervous tissue. Much better to take cells from the adult and trigger them directly to regress to a more primitive form without the ethical issues raised by inserting a full adult set of genes into an unfertilised egg.


Posted in Cloning | Comments Off on Reasons Against Cloning – VIDEOS & ARTICLES

Social Darwinism – Wikipedia, the free encyclopedia

Posted: at 4:54 am

Social Darwinism is a name given to various theories of society which emerged in the United Kingdom, North America, and Western Europe in the 1870s, and which claim to apply biological concepts of natural selection and survival of the fittest to sociology and politics.[1][2] According to their critics, at least, social Darwinists argue that the strong should see their wealth and power increase while the weak should see their wealth and power decrease. Different social-Darwinist groups have differing views about which groups of people are considered to be the strong and which groups of people are considered to be the weak, and they also hold different opinions about the precise mechanisms that should be used to reward strength and punish weakness. Many such views stress competition between individuals in laissez-faire capitalism, while others are claimed to have motivated ideas of authoritarianism, eugenics, racism, imperialism,[3] fascism, Nazism, and struggle between national or racial groups.[4][5]

The term Social Darwinism gained widespread currency when used after 1944 by opponents of these earlier concepts. The majority of those who have been categorised as social Darwinists did not identify themselves by such a label.[6]

Creationists have often maintained that social Darwinism, leading to policies designed to reward the most competitive, is a logical consequence of "Darwinism" (the theory of natural selection in biology).[7] Biologists and historians have stated that this is a fallacy of appeal to nature, since the theory of natural selection is merely intended as a description of a biological phenomenon and should not be taken to imply that this phenomenon is good or that it ought to be used as a moral guide in human society. While most scholars recognize some historical links between the popularisation of Darwin's theory and forms of social Darwinism, they also maintain that social Darwinism is not a necessary consequence of the principles of biological evolution.

Scholars debate the extent to which the various social Darwinist ideologies reflect Charles Darwin's own views on human social and economic issues. His writings have passages that can be interpreted as opposing aggressive individualism, while other passages appear to promote it.[8] Some scholars argue that Darwin's view gradually changed and came to incorporate views from other theorists such as Herbert Spencer.[9] Spencer published[10] his Lamarckian evolutionary ideas about society before Darwin first published his theory in 1859, and both Spencer and Darwin promoted their own conceptions of moral values. Spencer supported laissez-faire capitalism on the basis of his Lamarckian belief that struggle for survival spurred self-improvement which could be inherited.[11]

The term first appeared in Europe in 1877,[12] and around this time it was used by sociologists opposed to the concept.[13] The term was popularized in the United States in 1944 by the American historian Richard Hofstadter who used it in the ideological war effort against fascism to denote a reactionary creed which promoted competitive strife, racism and chauvinism. Hofstadter later also recognized (what he saw as) the influence of Darwinist and other evolutionary ideas upon those with collectivist views, enough to devise a term for the phenomenon, "Darwinist collectivism".[3] Before Hofstadter's work the use of the term "social Darwinism" in English academic journals was quite rare.[14] In fact,

... there is considerable evidence that the entire concept of "social Darwinism" as we know it today was virtually invented by Richard Hofstadter. Eric Foner, in an introduction to a then-new edition of Hofstadter's book published in the early 1990s, declines to go quite that far. "Hofstadter did not invent the term Social Darwinism", Foner writes, "which originated in Europe in the 1860s and crossed the Atlantic in the early twentieth century. But before he wrote, it was used only on rare occasions; he made it a standard shorthand for a complex of late-nineteenth-century ideas, a familiar part of the lexicon of social thought."

The term "social Darwinism" has rarely been used by advocates of the supposed ideologies or ideas; instead it has almost always been used pejoratively by its opponents.[6] The term draws upon the common use of the term Darwinism, which has been used to describe a range of evolutionary views, but in the late 19th century was applied more specifically to natural selection as first advanced by Charles Darwin to explain speciation in populations of organisms. The process includes competition between individuals for limited resources, popularly but inaccurately described by the phrase "survival of the fittest", a term coined by sociologist Herbert Spencer.

While the term has been applied to the claim that Darwin's theory of evolution by natural selection can be used to understand the social endurance of a nation or country, social Darwinism commonly refers to ideas that predate Darwin's publication of On the Origin of Species. Others whose ideas are given the label include the 18th century clergyman Thomas Malthus, and Darwin's cousin Francis Galton who founded eugenics towards the end of the 19th century.

The term Darwinism had been coined by Thomas Henry Huxley in his April 1860 review of "On the Origin of Species",[15] and by the 1870s it was used to describe a range of concepts of evolutionism or development, without any specific commitment to Charles Darwin's own theory.[16]

The first use of the phrase "social Darwinism" was in Joseph Fisher's 1877 article on The History of Landholding in Ireland which was published in the Transactions of the Royal Historical Society.[12] Fisher was commenting on how a system for borrowing livestock which had been called "tenure" had led to the false impression that the early Irish had already evolved or developed land tenure;[17]

These arrangements did not in any way affect that which we understand by the word " tenure", that is, a man's farm, but they related solely to cattle, which we consider a chattel. It has appeared necessary to devote some space to this subject, inasmuch as that usually acute writer Sir Henry Maine has accepted the word " tenure " in its modern interpretation, and has built up a theory under which the Irish chief " developed " into a feudal baron. I can find nothing in the Brehon laws to warrant this theory of social Darwinism, and believe further study will show that the Cain Saerrath and the Cain Aigillue relate solely to what we now call chattels, and did not in any way affect what we now call the freehold, the possession of the land.

Despite the fact that social Darwinism bears Charles Darwin's name, it is also linked today with others, notably Herbert Spencer, Thomas Malthus, and Francis Galton, the founder of eugenics. In fact, Spencer was not described as a social Darwinist until the 1930s, long after his death.[18]

Darwin himself gave serious consideration to Galton's work, but considered the ideas of "hereditary improvement" impractical. Aware of weaknesses in his own family, Darwin was sure that families would naturally refuse such selection and wreck the scheme. He thought that even if compulsory registration was the only way to improve the human race, this illiberal idea would be unacceptable, and it would be better to publicize the "principle of inheritance" and let people decide for themselves.[19]

In The Descent of Man, and Selection in Relation to Sex (1882), Darwin described how medical advances meant that the weaker were able to survive and have families, and as he commented on the effects of this, he cautioned that hard reason should not override sympathy, and considered how other factors might reduce the effect:

Thus the weak members of civilized societies propagate their kind. No one who has attended to the breeding of domestic animals will doubt that this must be highly injurious to the race of man. It is surprising how soon a want of care, or care wrongly directed, leads to the degeneration of a domestic race; but excepting in the case of man himself, hardly any one is so ignorant as to allow his worst animals to breed. The aid which we feel impelled to give to the helpless is mainly an incidental result of the instinct of sympathy, which was originally acquired as part of the social instincts, but subsequently rendered, in the manner previously indicated, more tender and more widely diffused. Nor could we check our sympathy, even at the urging of hard reason, without deterioration in the noblest part of our nature. The surgeon may harden himself whilst performing an operation, for he knows that he is acting for the good of his patient; but if we were intentionally to neglect the weak and helpless, it could only be for a contingent benefit, with an overwhelming present evil. ... We must therefore bear the undoubtedly bad effects of the weak surviving and propagating their kind; but there appears to be at least one check in steady action, namely that the weaker and inferior members of society do not marry so freely as the sound; and this check might be indefinitely increased by the weak in body or mind refraining from marriage, though this is more to be hoped for than expected.[20]

Herbert Spencer's ideas, like those of evolutionary progressivism, stemmed from his reading of Thomas Malthus, and his later theories were influenced by those of Darwin. However, Spencer's major work, Progress: Its Law and Cause (1857), was released two years before the publication of Darwin's On the Origin of Species, and First Principles was printed in 1860.

In The Social Organism (1860), Spencer compares society to a living organism and argues that, just as biological organisms evolve through natural selection, society evolves and increases in complexity through analogous processes.[21]

In many ways, Spencer's theory of cosmic evolution has much more in common with the works of Lamarck and Auguste Comte's positivism than with Darwin's.

Jeff Riggenbach argues that Spencer's view was that culture and education made a sort of Lamarckism possible[1] and notes that Herbert Spencer was a proponent of private charity.[1]

Spencer's work also served to renew interest in the work of Malthus. While Malthus's work does not itself qualify as social Darwinism, his 1798 work, An Essay on the Principle of Population, was enormously popular and widely read by social Darwinists. In that book, for example, the author argued that since an increasing population would normally outgrow its food supply, the result would be the starvation of the weakest and a Malthusian catastrophe.

According to Michael Ruse, Darwin read Malthus' famous Essay on a Principle of Population in 1838, four years after Malthus' death. Malthus himself anticipated the social Darwinists in suggesting that charity could exacerbate social problems.

Another of these social interpretations of Darwin's biological views, later known as eugenics, was put forth by Darwin's cousin, Francis Galton, in 1865 and 1869. Galton argued that just as physical traits were clearly inherited among generations of people, the same could be said for mental qualities (genius and talent). Galton argued that social morals needed to change so that heredity was a conscious decision in order to avoid both the over-breeding by less fit members of society and the under-breeding of the more fit ones.

In Galton's view, social institutions such as welfare and insane asylums were allowing inferior humans to survive and reproduce at levels faster than the more "superior" humans in respectable society, and if corrections were not soon taken, society would be awash with "inferiors". Darwin read his cousin's work with interest, and devoted sections of Descent of Man to discussion of Galton's theories. Neither Galton nor Darwin, though, advocated any eugenic policies restricting reproduction, due to their Whiggish distrust of government.[22]

Friedrich Nietzsche's philosophy addressed the question of artificial selection, yet Nietzsche's principles did not concur with Darwinian theories of natural selection. Nietzsche's point of view on sickness and health, in particular, opposed him to the concept of biological adaptation as forged by Spencer's "fitness". Nietzsche criticized Haeckel, Spencer, and Darwin, sometimes under the same banner by maintaining that in specific cases, sickness was necessary and even helpful.[23] Thus, he wrote:

Wherever progress is to ensue, deviating natures are of greatest importance. Every progress of the whole must be preceded by a partial weakening. The strongest natures retain the type, the weaker ones help to advance it. Something similar also happens in the individual. There is rarely a degeneration, a truncation, or even a vice or any physical or moral loss without an advantage somewhere else. In a warlike and restless clan, for example, the sicklier man may have occasion to be alone, and may therefore become quieter and wiser; the one-eyed man will have one eye the stronger; the blind man will see deeper inwardly, and certainly hear better. To this extent, the famous theory of the survival of the fittest does not seem to me to be the only viewpoint from which to explain the progress of strengthening of a man or of a race.[24]

Ernst Haeckel's recapitulation theory was not Darwinism, but rather attempted to combine the ideas of Goethe, Lamarck and Darwin. It was adopted by emerging social sciences to support the concept that non-European societies were "primitive", in an early stage of development towards the European ideal, but since then it has been heavily refuted on many fronts.[25] Haeckel's works led to the formation of the Monist League in 1904, with many prominent citizens among its members, including the Nobel Prize winner Wilhelm Ostwald.

The simpler aspects of social Darwinism followed the earlier Malthusian ideas that humans, especially males, require competition in their lives in order to survive in the future. Further, the poor should have to provide for themselves and not be given any aid. However, amidst this climate, most social Darwinists of the early twentieth century actually supported better working conditions and salaries. Such measures would grant the poor a better chance to provide for themselves yet still distinguish those who are capable of succeeding from those who are poor out of laziness, weakness, or inferiority.

"Social Darwinism" was first described by Oscar Schmidt of the University of Strasbourg, reporting at a scientific and medical conference held in Munich in 1877. He noted how socialists, although opponents of Darwin's theory, used it to add force to their political arguments. Schmidt's essay first appeared in English in Popular Science in March 1879.[26] There followed an anarchist tract published in Paris in 1880 entitled "Le darwinisme social" by Émile Gautier. However, the use of the term was very rare, at least in the English-speaking world (Hodgson, 2004),[27] until the American historian Richard Hofstadter published his influential Social Darwinism in American Thought (1944) during World War II.

Hypotheses of social evolution and cultural evolution were common in Europe. The Enlightenment thinkers who preceded Darwin, such as Hegel, often argued that societies progressed through stages of increasing development. Earlier thinkers also emphasized conflict as an inherent feature of social life. Thomas Hobbes's 17th century portrayal of the state of nature seems analogous to the competition for natural resources described by Darwin. Social Darwinism is distinct from other theories of social change because of the way it draws Darwin's distinctive ideas from the field of biology into social studies.

Darwin, unlike Hobbes, believed that this struggle for natural resources allowed individuals with certain physical and mental traits to succeed more frequently than others, and that these traits accumulated in the population over time, which under certain conditions could lead to the descendants being so different that they would be defined as a new species.

However, Darwin felt that "social instincts" such as "sympathy" and "moral sentiments" also evolved through natural selection, and that these resulted in the strengthening of societies in which they occurred, so much so that he wrote about it in Descent of Man:

The following proposition seems to me in a high degree probable, namely, that any animal whatever, endowed with well-marked social instincts, the parental and filial affections being here included, would inevitably acquire a moral sense or conscience, as soon as its intellectual powers had become as well, or nearly as well developed, as in man. For, firstly, the social instincts lead an animal to take pleasure in the society of its fellows, to feel a certain amount of sympathy with them, and to perform various services for them.[28]

Spencer proved to be a popular figure in the 1880s primarily because his application of evolution to areas of human endeavor promoted an optimistic view of the future as inevitably becoming better. In the United States, writers and thinkers of the gilded age such as Edward L. Youmans, William Graham Sumner, John Fiske, John W. Burgess, and others developed theories of social evolution as a result of their exposure to the works of Darwin and Spencer.

In 1883, Sumner published a highly influential pamphlet entitled "What Social Classes Owe to Each Other", in which he insisted that the social classes owe each other nothing, synthesizing Darwin's findings with free-enterprise capitalism for his justification. According to Sumner, providing assistance to those unequipped or under-equipped to compete for resources would lead to a country in which the weak and inferior are encouraged to breed more like them, eventually dragging the country down. Sumner also believed that those best equipped to win the struggle for existence were American businessmen, and concluded that taxes and regulations were dangers to their survival. The pamphlet makes no mention of Darwinism, and refers to Darwin only in a statement on the meaning of liberty: "There never has been any man, from the primitive barbarian up to a Humboldt or a Darwin, who could do as he had a mind to."[29]

Sumner never fully embraced Darwinian ideas, and some contemporary historians do not believe that Sumner ever actually believed in social Darwinism.[30] The great majority of American businessmen rejected the anti-philanthropic implications of the theory. Instead they gave millions to build schools, colleges, hospitals, art institutes, parks and many other institutions. Andrew Carnegie, who admired Spencer, was the leading philanthropist in the world (18901920), and a major leader against imperialism and warfare.[31]

H. G. Wells was heavily influenced by Darwinist thought, and the novelist Jack London wrote survival stories that incorporated his views on social Darwinism.[32] Film director Stanley Kubrick has been described as holding social Darwinist opinions.[33]

Social Darwinism has influenced political, public-health and social movements in Japan since the late 19th and early 20th centuries. It was originally brought to Japan through the works of Francis Galton and Ernst Haeckel, as well as through United States, British and French Lamarckian eugenic studies of the late 19th and early 20th centuries.[34] Eugenics as a science was hotly debated at the beginning of the 20th century in Jinsei-Der Mensch, the first eugenics journal in the empire. As Japan sought to close ranks with the West, the practice was adopted wholesale, along with colonialism and its justifications.

Social Darwinism was formally introduced to China through the translation by Yan Fu of Huxley's Evolution and Ethics, in the course of an extensive series of translations of influential Western thought.[35] Yan's translation strongly impacted Chinese scholars because he added national elements not found in the original. He understood Spencer's sociology as "not merely analytical and descriptive, but prescriptive as well", and saw Spencer building on Darwin, whom Yan summarized thus:

By the 1920s, social Darwinism found expression in the promotion of eugenics by the Chinese sociologist Pan Guangdan. When Chiang Kai-shek started the New Life movement in 1934, he

Nazi Germany's justification for its aggression was regularly promoted in Nazi propaganda films, which depicted scenes such as beetles fighting in a laboratory setting to demonstrate the principles of "survival of the fittest", as in Alles Leben ist Kampf (English translation: All Life is Struggle). Hitler often refused to intervene in the promotion of officers and staff members, preferring instead to have them fight amongst themselves to force the "stronger" person to prevail, "strength" referring to those social forces void of virtue or principle.[38] A key proponent was Alfred Rosenberg, who was later hanged at Nuremberg. Such ideas also helped to advance euthanasia in Germany, especially Action T4, which led to the murder of mentally ill and disabled people.

The argument that Nazi ideology was strongly influenced by social Darwinist ideas is often found in historical and social science literature.[39] For example, the philosopher and historian Hannah Arendt analysed the historical development from a politically indifferent scientific Darwinism via social Darwinist ethics to racist ideology.[40]

By 1985, creationists were taking up the argument that Nazi ideology was directly influenced by Darwinian evolutionary theory.[41] Such claims have been presented by creationists such as Jonathan Sarfati.[42][43] Supporters of intelligent design creationism have promoted this position as well. For example, it is a theme in the work of Richard Weikart, a historian at California State University, Stanislaus, and a senior fellow of the Center for Science and Culture of the Discovery Institute.[44] It is also a main argument in the 2008 intelligent-design/creationist movie Expelled: No Intelligence Allowed. These claims are widely criticized.[45][46][47][48][49][50] The Anti-Defamation League has rejected such attempts to link Darwin's ideas with Nazi atrocities, stating that "Using the Holocaust in order to tarnish those who promote the theory of evolution is outrageous and trivializes the complex factors that led to the mass extermination of European Jewry."[51]

Similar criticisms are sometimes applied (or misapplied) to other political or scientific theories that resemble social Darwinism, for example criticisms leveled at evolutionary psychology. One critical reviewer of Weikart's book writes that "(h)is historicization of the moral framework of evolutionary theory poses key issues for those in sociobiology and evolutionary psychology, not to mention bioethicists, who have recycled many of the suppositions that Weikart has traced."[48]

Another example is recent scholarship that portrays Ernst Haeckel's Monist League as a mystical progenitor of the Völkisch movement and, ultimately, of the Nazi Party of Adolf Hitler. Scholars opposed to this interpretation, however, have pointed out that the Monists were freethinkers who opposed all forms of mysticism, and that their organizations were immediately banned following the Nazi takeover in 1933 because of their association with a wide variety of causes including feminism, pacifism, human rights, and early gay rights movements.[52]

Social Darwinism has many definitions, and some of them are incompatible with each other. As such, social Darwinism has been criticized for being an inconsistent philosophy, which does not lead to any clear political conclusions. For example, The Concise Oxford Dictionary of Politics states:

Part of the difficulty in establishing sensible and consistent usage is that commitment to the biology of natural selection and to 'survival of the fittest' entailed nothing uniform either for sociological method or for political doctrine. A 'social Darwinist' could just as well be a defender of laissez-faire as a defender of state socialism, just as much an imperialist as a domestic eugenist.[53]

Social Darwinism was predominantly found in laissez-faire societies where the prevailing view was that of an individualist order to society. As such, social Darwinism supposed that human progress would generally favor the most individualistic races, which were those perceived as stronger. A different form of social Darwinism was part of the ideological foundations of Nazism and other fascist movements. This form did not envision survival of the fittest within an individualist order of society, but rather advocated a type of racial and national struggle where the state directed human breeding through eugenics.[54] Names such as "Darwinian collectivism" or "Reform Darwinism" have been suggested to describe these views, in order to differentiate them from the individualist type of social Darwinism.[3]

Some pre-twentieth century doctrines subsequently described as social Darwinism appear to anticipate state imposed eugenics[3] and the race doctrines of Nazism. Critics have frequently linked evolution, Charles Darwin and social Darwinism with racialism, nationalism, imperialism and eugenics, contending that social Darwinism became one of the pillars of fascism and Nazi ideology, and that the consequences of the application of policies of "survival of the fittest" by Nazi Germany eventually created a very strong backlash against the theory.[51][44]

As mentioned above, social Darwinism has often been linked to nationalism and imperialism.[55] During the age of New Imperialism, the concepts of evolution justified the exploitation of "lesser breeds without the law" by "superior races".[55] To elitists, strong nations were composed of white people who were successful at expanding their empires, and as such, these strong nations would survive in the struggle for dominance.[55] With this attitude, Europeans, except for Christian missionaries, seldom adopted the customs and languages of local people under their empires.[55]

Peter Kropotkin argued in his 1902 book Mutual Aid: A Factor of Evolution that Darwin did not define the fittest as the strongest, or most clever, but recognized that the fittest could be those who cooperated with each other. In many animal societies, "struggle is replaced by co-operation".

It may be that at the outset Darwin himself was not fully aware of the generality of the factor which he first invoked for explaining one series only of facts relative to the accumulation of individual variations in incipient species. But he foresaw that the term [evolution] which he was introducing into science would lose its philosophical and its only true meaning if it were to be used in its narrow sense only, that of a struggle between separate individuals for the sheer means of existence. And at the very beginning of his memorable work he insisted upon the term being taken in its "large and metaphorical sense including dependence of one being on another, and including (which is more important) not only the life of the individual, but success in leaving progeny." [Quoting Origin of Species, chap. iii, p. 62 of first edition.]

While he himself was chiefly using the term in its narrow sense for his own special purpose, he warned his followers against committing the error (which he seems once to have committed himself) of overrating its narrow meaning. In The Descent of Man he gave some powerful pages to illustrate its proper, wide sense. He pointed out how, in numberless animal societies, the struggle between separate individuals for the means of existence disappears, how struggle is replaced by co-operation, and how that substitution results in the development of intellectual and moral faculties which secure to the species the best conditions for survival. He intimated that in such cases the fittest are not the physically strongest, nor the cunningest, but those who learn to combine so as mutually to support each other, strong and weak alike, for the welfare of the community. "Those communities", he wrote, "which included the greatest number of the most sympathetic members would flourish best, and rear the greatest number of offspring" (2nd edit., p. 163). The term, which originated from the narrow Malthusian conception of competition between each and all, thus lost its narrowness in the mind of one who knew Nature.[56]

Noam Chomsky briefly discussed Kropotkin's views in a July 8, 2011 YouTube video from Renegade Economist, in which he said Kropotkin argued

... the exact opposite [of Social Darwinism]. He argued that on Darwinian grounds, you would expect cooperation and mutual aid to develop leading towards community, workers' control and so on. Well, you know, he didn't prove his point. It's at least as well argued as Herbert Spencer is ...[57]


Posted in Darwinism | Comments Off on Social Darwinism – Wikipedia, the free encyclopedia

What is Social Darwinism – AllAboutScience.org

Posted: at 4:54 am

QUESTION: What is Social Darwinism?

ANSWER:

Herbert Spencer, a 19th-century philosopher, promoted the idea of Social Darwinism. Social Darwinism is an application of the theory of natural selection to social, political, and economic issues. In its simplest form, Social Darwinism extends the mantra "the strong survive" to human affairs. The theory was used to promote the idea that the white European race was superior to others, and therefore destined to rule over them.

At the time that Spencer began to promote Social Darwinism, the technology, economy, and government of the "White European" were advanced in comparison to those of other cultures. Looking at this apparent advantage, as well as the economic and military structures, some argued that natural selection was playing out, and that the race best suited to survival was winning. Some even extended this philosophy to micro-economic issues, claiming that social welfare programs that helped the poor and disadvantaged were contrary to nature itself. Those who reject any and all forms of charity or governmental welfare often use arguments rooted in Social Darwinism.

At its worst, the implications of Social Darwinism were used as scientific justification for the Holocaust. The Nazis claimed that the murder of Jews in World War II was an example of cleaning out the inferior genetics. Many philosophers noted evolutionary echoes in Hitler's march to exterminate an entire race of people. Various other dictators and criminals have claimed the cause of Social Darwinism in carrying out their acts. Even without such actions, Social Darwinism has proven to be a false and dangerous philosophy.

Scientists and evolutionists maintain that this interpretation is only loosely based on Darwin's theory of natural selection. They will admit to an obvious parallel between Darwin's theory of natural selection and Spencer's beliefs. In nature, the strong survive, and those best suited to survival will outlive the weak. According to Social Darwinism, those with strength (economic, physical, technological) flourish, and those without are destined for extinction.

It is important to note that Darwin did not extend his theories to a social or economic level, nor are any credible evolutionists subscribing to the theories of Social Darwinism. Herbert Spencer's philosophy is only loosely based on the premises of Darwin's work.

However, according to evolutionary theory, nature is a "kill-or-be-killed" system. Those that cannot keep up are either left behind or cut off. If evolution, through chance, is solely responsible for life as we now know it, why should that process be countered? If "survival of the fittest" or "kill or be killed" cannot apply in what we define as "decent society," then, which is wrong, society or evolution? If neither, then how do we explain morality, charity, and compassion? Why drain resources from the strong to support the weak? Certainly, we should be charitable and help those in need.

Though Darwin did not promote Social Darwinism, basic evolutionary theory raises some nagging questions.


View post:

What is Social Darwinism - AllAboutScience.org

Posted in Darwinism | Comments Off on What is Social Darwinism – AllAboutScience.org

Digital Darwinism: How Disruptive Technology Is Changing …

Posted: at 4:54 am

Image: keoni101/Flickr

Social media, mobile, wearables, the Internet of Things, real-time: these are just some of the technologies that are disrupting markets. Changes in how people communicate, connect, and discover carry incredible implications for businesses and just about anything where people are involved. It's not so much that technology is part of our everyday life or that technology is relentless in its barrage on humanity.

The real threat and opportunity in technology's disruption lies in the evolution of customer and employee behavior, values, and expectations. Companies face a quandary as they invest resources and budgets in current technology and business strategies (business as usual) versus the unknown of how those investments align, or don't, with market and behavior shifts.

This is a time of digital Darwinism, an era in which technology and society are evolving faster than businesses can naturally adapt. This sets the stage for a new era of leadership and a new generation of business models, charging behind a mantra of "adapt or die."

Rather than react to change or be disrupted by it, some forward-looking companies are investing in digital transformation to adapt and outperform peers. In November 2012, the research-based consultancy Capgemini published a report studying the digital maturity of companies pursuing digital transformation. In its report, The Digital Advantage: How digital leaders outperform their peers in every industry, Capgemini found that companies highly vested in both digital intensity and transformation management intensity, a.k.a. "The Digirati," derive more revenue from their physical assets, are more profitable, and possess higher market valuations.

Why is That?

It comes down to one word: relevance. If consumer behavior is evolving as a result of technology, businesses either compete to get ahead of it, perpetually react to it, or belittle it. One of the most problematic aspects of digital maturity is that technology is both part of the solution and part of the problem.

Enter digital transformation.

Digital transformation may sound like something you'd hear in buzzword bingo, but it is one of the most important movements facing businesses today. It forces businesses to look beyond the world as they know it, observe how things are changing on the outside, and then change, indeed transform, philosophies, models, and systems on the inside. Ask 10 different experts in digital transformation for their definition, though, and you may get 10 different answers. Before strategists can consider digital transformation, they at least have to know what it is, why it's important, and what they need to do.

In 2013, I set out to better understand the catalysts and challenges around digital transformation, as well as the people driving it forward. It is indeed a deep and complex topic, so I had to focus my research. Capgemini, among others, has already made tremendous headway in work around technology and process models defining the evolution of digital maturity. One thing I heard over and over was the need to know who's responsible for digital transformation and how companies take steps in the right direction. Specifically, strategists wanted to know how to make the case in the absence of executive leadership pointing in new directions and leading teams to "adapt or die!" As a result, I explored digital transformation from a more human perspective. After a year of interviewing 20 leading digital strategists at some of the biggest brands around the world, I released my latest report, Digital Transformation: Why and How Companies are Investing in New Business Models to Lead Digital Customer Experiences.

What is Digital Transformation?

Again, it is a sweeping topic. Simply defined, digital transformation is the intentional effort to adapt to the onslaught of disruptive technologies and their effect on customer and employee behavior. As technology becomes a permanent fixture in everyday life, organizations are forced to update legacy technology strategies and supporting methodologies to better reflect how the real world is evolving. And the need to do so is becoming increasingly obligatory.

In my research, I concentrated on how businesses are pursuing digital transformation in their quest to specifically understand how disruptive technology affects the customer experience. In turn, I learned how companies are reverse engineering investments, processes, and systems to better align with how markets are changing.

Because it focuses on customer behavior, digital transformation is, in its own way, making businesses more human. As such, digital transformation is not specifically about technology; it is empowered by it. Without an end in mind, digital transformation continually seeks out ways to use technology to improve customer experiences and relationships. It also represents an effort that introduces new models for business and, equally, creates a way of staying in business as customers become increasingly digital.

Some key findings from my research include:

While early in its evolution, digital transformation represents the next big thing in customer experience and, ultimately, how business is done. Companies that "get it" and invest more in learning about their digital customers' behaviors, preferences, and expectations will carry a significant competitive advantage over those that figure it out later (if at all). What separates typical new technology investments from those pursued by the companies in my report is the ongoing search for answers to the problems and opportunities presented by the nuances of digital customers.


In the end, digital transformation is not a fad or a trendy moniker. It represents the future of business through the realignment of, or new investment in, technology and business models to more effectively engage digital consumers at every touchpoint in the customer experience lifecycle. It's bigger than any one area of technology disruption, though, and that's the point. Social media, mobile, cloud, et al. are converging into a greater force that pushes businesses out of comfort zones and into areas where true innovation can manifest.

The Result?

The roles and objectives of everyday marketing, social media, web, mobile, and customer service and loyalty can evolve to meet the needs and expectations of a more connected and discerning digital customer. Additionally, the outcome of even the smallest investments in change brings together typically disparate groups to work in harmony across the entire customer journey. This allows teams to cooperate, or merge into new groups, uniting the digital journey to improve engagement; deliver a holistic experience; and eliminate friction, gaps, and overlap.

Perhaps the most important takeaway from my research is the pure ambition to make businesses relevant in a digital era.

The road to digital transformation is far from easy, but it carries great rewards for businesses and customers alike. It takes a village to bring about change, and it also takes the spark and perseverance of one person to spot important trends and create a sense of urgency around new possibilities.

But make no mistake. Digital transformation efforts grow market opportunities and profits while scaling efficiently in the process.

#AdaptorDie

Brian Solis is a principal analyst at Altimeter Group. He is also an award-winning author, prominent blogger, and keynote speaker. @briansolis

See the article here:

Digital Darwinism: How Disruptive Technology Is Changing ...

Posted in Darwinism | Comments Off on Digital Darwinism: How Disruptive Technology Is Changing …

Artificial Intelligence – The New York Times

Posted: at 4:54 am

Latest Articles

The titan of consumer technology has a conundrum: Can it create consumer delight in technology without snooping on its customers?

By QUENTIN HARDY

Apple still seems to view online services as add-ons to its devices, not as products or platforms that rise above the equipment.

By FARHAD MANJOO

Virtual travel assistant services designed to understand conversational language are expected to change the way travel is planned.

By JANE L. LEVERE

At an event sponsored by the Office of Science and Technology Policy, experts explored questions about systems that would make decisions without human input.

By JOHN MARKOFF

Google's Home device puts it in the race to become the go-to company for A.I., along with several rivals.

By PUI-WING TAM

A virtual assistant designed to compete with the Echo from Amazon and other artificial intelligence devices coming from Microsoft, Apple and Facebook.

By DAVID STREITFELD

Smartphone apps offer a hint of the possible uses for emerging artificial intelligence technology.

By KIT EATON

Apple has Siri, Amazon has its Echo, and now Google will introduce its virtual agent, Google Home.

By DAVID STREITFELD

The Pentagon is trying to build bridges with Silicon Valley as it looks to build a new generation of smart weapons.

By QUENTIN HARDY

Fund-Raisers Pitch for Trump at Hedge Fund Conference | Online Art Auctioneers to Merge

Secretary of Defense Ashton B. Carter takes his bridge-building message to Silicon Valley despite skepticism among some in the tech community.

By JOHN MARKOFF

A submersible robot in humanoid form, developed at Stanford University, completed its first dive in April, recovering a 17th-century vase.

By JAMES GORMAN

Mr. Cohen, an abstract painter, developed Aaron, a software program that learned to create art in a manner similar to freehand drawing.

By WILLIAM GRIMES

Regulators and others question whether robo-advisers, which assemble investment portfolios online, can grasp clients' situations the way humans can.

By TARA SIEGEL BERNARD

The two-foot-tall robot Xianer dwells in a Buddhist temple, dispensing wisdom about religion and life.

By DIDI KIRSTEN TATLOW

Silicon Valley has fallen in love with A.I. assistants, but so far they're hardly impressive. Is it the industry's fault, or is it ours?

By JENNA WORTHAM

The company reported a 21 percent decline in first-quarter earnings, though operating earnings per share were above estimates.

By STEVE LOHR

Wall Street Veterans Bet on Low-Income Homebuyers | Jose Cuervo Said to Be Preparing for I.P.O.

These automated software critters are growing in popularity, especially now that they're doing more than pretending to be a human in a call center.

By JIM KERSTETTER

Features being added to Google's calendar will let users program in their aspirations for times when they don't have work or meetings scheduled.

By QUENTIN HARDY


Read more here:

Artificial Intelligence - The New York Times

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence – The New York Times

A.I. Artificial Intelligence – Wikipedia, the free …

Posted: at 4:54 am

A.I. Artificial Intelligence, also known as A.I., is a 2001 American science fiction drama film directed by Steven Spielberg. The screenplay by Spielberg was based on a screen story by Ian Watson and the 1969 short story Super-Toys Last All Summer Long by Brian Aldiss. The film was produced by Kathleen Kennedy, Spielberg and Bonnie Curtis. It stars Haley Joel Osment, Jude Law, Frances O'Connor, Brendan Gleeson and William Hurt. Set in a futuristic post-climate change society, A.I. tells the story of David (Osment), a childlike android uniquely programmed with the ability to love.

Development of A.I. originally began with producer-director Stanley Kubrick in the early 1970s. Kubrick hired a series of writers until the mid-1990s, including Brian Aldiss, Bob Shaw, Ian Watson, and Sara Maitland. The film languished in protracted development for years, partly because Kubrick felt computer-generated imagery was not advanced enough to create the David character, whom he believed no child actor would convincingly portray. In 1995, Kubrick handed A.I. to Spielberg, but the film did not gain momentum until Kubrick's death in 1999. Spielberg remained close to Watson's film treatment for the screenplay. The film was greeted with generally positive reviews from critics, grossed approximately $235 million, and was nominated for two Academy Awards at the 74th Academy Awards for Best Visual Effects and Best Original Score (by John Williams). The film is dedicated to Stanley Kubrick.

In the late 21st century, global warming has flooded the coastlines, wiping out coastal cities (such as Amsterdam, Venice, and New York City) and drastically reducing the human population. There is a new class of robots called Mecha, advanced humanoids capable of emulating thoughts and emotions.

David (Haley Joel Osment), a prototype model created by Cybertronics of New Jersey, is designed to resemble a human child and to display love for its human owners. They test their creation with one of their employees, Henry Swinton (Sam Robards), and his wife Monica (Frances O'Connor). The Swintons' son, Martin (Jake Thomas), had been placed in suspended animation until a cure could be found for his rare disease. Initially frightened of David, Monica eventually warms up enough to him to activate his imprinting protocol, which irreversibly causes David to have an enduring childlike love for her. He is also befriended by Teddy (Jack Angel), a robotic teddy bear, who takes it upon himself to care for David's well-being.

A cure is found for Martin and he is brought home; as he recovers, it becomes clear he does not want a sibling and soon makes moves to cause issues for David. First, he attempts to make Teddy choose whom he likes more. He then makes David promise to do something and in return Martin will tell Monica that he loves his new "brother", making her love him more. The promise David makes is to go to Monica in the middle of the night and cut off a lock of her hair. This upsets the parents, particularly Henry, who fears that the scissors are a weapon, and warns Monica that a robot programmed to love may also be able to hate.

At a pool party, one of Martin's friends unintentionally activates David's self-protection programming by poking him with a knife. David grabs Martin, apparently for protection, but they both fall into the pool. David sinks to the bottom while still clinging to Martin. Martin is saved from drowning, but Henry mistakes David's fear during the pool incident as hate for Martin.

Henry persuades Monica to return David to Cybertronics, where he will be destroyed. However, Monica cannot bring herself to do this and, instead, tearfully abandons David in the forest (with Teddy) to hide as an unregistered Mecha.

David is captured for an anti-Mecha "Flesh Fair", an event where obsolete and unlicensed Mecha are destroyed in front of cheering crowds. David is nearly killed, but the crowd is swayed by his fear (since Mecha do not plead for their lives) into believing he is human, and he escapes with Gigolo Joe (Jude Law), a male prostitute Mecha on the run after being framed for murder.

The two set out to find the Blue Fairy, who David remembers from the story The Adventures of Pinocchio. He is convinced that the Blue Fairy will transform him into a human boy, allowing Monica to love him and take him home.

Joe and David make their way to Rouge City, a Las Vegas-esque resort. Information from a holographic answer engine called "Dr. Know" (Robin Williams) eventually leads them to the top of Rockefeller Center in the flooded ruins of Manhattan. There, David meets an identical copy of himself and, believing he is not special, becomes filled with anger and destroys the copy Mecha. David then meets his human creator, Professor Allen Hobby (William Hurt), who excitedly tells David that finding him was a test, which has demonstrated the reality of his love and desire. However, David learns that he is the namesake and image of Professor Hobby's deceased son and that many copies of David, along with female versions, are already being manufactured.

Sadly realizing that he is not unique, a disheartened David attempts to commit suicide by falling from a ledge into the ocean, but Joe rescues him with their stolen amphibicopter. David tells Joe he saw the Blue Fairy underwater and wants to go down to her. At that moment, Joe is captured by the authorities with the use of an electromagnet, but he sets the amphibicopter on submerge. David and Teddy take it to the fairy, which turns out to be a statue from a submerged attraction at Coney Island. Teddy and David become trapped when the Wonder Wheel falls on their vehicle. Believing the Blue Fairy to be real, David asks to be turned into a real boy, repeating his wish without an end, until the ocean freezes in another ice age and his internal power source drains away.

Two thousand years later, humans are extinct and Manhattan is buried under several hundred feet of glacial ice. The now highly advanced Mecha have evolved into an intelligent, silicon-based form. In their project to study humans, believing it the key to understanding the meaning of existence, they find David and Teddy and discover that they are original Mecha who knew living humans, making the pair very special and unique.

David is revived and walks to the frozen Blue Fairy statue, which cracks and collapses as he touches it. Having downloaded and comprehended his memories, the advanced Mecha use these to reconstruct the Swinton home and explain to David via an interactive image of the Blue Fairy (Meryl Streep) that it is impossible to make him human. However, at David's insistence, they recreate Monica from DNA in the lock of her hair, which Teddy had saved. One of the Mecha warns David that the clone can live for only a single day and that the process cannot be repeated. The next morning, David is reunited with Monica and spends the happiest day of his life with her and Teddy. Monica tells David that she loves him and has always loved him as she drifts to sleep for the last time. David lies down next to her, closes his eyes and goes "to that place where dreams are born." Teddy climbs onto the bed and watches as David and Monica lie peacefully together.

Kubrick began development on an adaptation of Super-Toys Last All Summer Long in the early 1970s, hiring the short story's author, Brian Aldiss, to write a film treatment. In 1985, Kubrick brought longtime friend Steven Spielberg on board to produce the film,[5] along with Jan Harlan. Warner Bros. agreed to co-finance A.I. and cover distribution duties.[6] The film labored in development hell, and Aldiss was fired by Kubrick over creative differences in 1989.[7] Bob Shaw served as writer very briefly, leaving after six weeks because of Kubrick's demanding work schedule, and Ian Watson was hired as the new writer in March 1990. Aldiss later remarked, "Not only did the bastard fire me, he hired my enemy [Watson] instead." Kubrick handed Watson The Adventures of Pinocchio for inspiration, calling A.I. "a picaresque robot version of Pinocchio".[6][8]

Three weeks later Watson gave Kubrick his first story treatment, and concluded his work on A.I. in May 1991 with another treatment, at 90 pages. Gigolo Joe was originally conceived as a GI Mecha, but Watson suggested changing him to a male prostitute. Kubrick joked, "I guess we lost the kiddie market."[6] In the meantime, Kubrick dropped A.I. to work on a film adaptation of Wartime Lies, feeling computer animation was not advanced enough to create the David character. However, after the release of Spielberg's Jurassic Park (with its innovative use of computer-generated imagery), it was announced in November 1993 that production would begin in 1994.[9] Dennis Muren and Ned Gorman, who worked on Jurassic Park, became visual effects supervisors,[7] but Kubrick was displeased with their previsualization, and with the expense of hiring Industrial Light & Magic.[10]

Stanley [Kubrick] showed Steven [Spielberg] 650 drawings which he had, and the script and the story, everything. Stanley said, "Look, why don't you direct it and I'll produce it." Steven was almost in shock.

In early 1994, the film was in pre-production with Christopher "Fangorn" Baker as concept artist, and Sara Maitland assisting on the story, which gave it "a feminist fairy-tale focus".[6] Maitland said that Kubrick never referred to the film as A.I., but as Pinocchio.[10] Chris Cunningham became the new visual effects supervisor. Some of his unproduced work for A.I. can be seen on the DVD, The Work of Director Chris Cunningham.[12] Aside from considering computer animation, Kubrick also had Joseph Mazzello do a screen test for the lead role.[10] Cunningham helped assemble a series of "little robot-type humans" for the David character. "We tried to construct a little boy with a movable rubber face to see whether we could make it look appealing," producer Jan Harlan reflected. "But it was a total failure, it looked awful." Hans Moravec was brought in as a technical consultant.[10] Meanwhile, Kubrick and Harlan thought A.I. would be closer to Steven Spielberg's sensibilities as director.[13][14] Kubrick handed the position to Spielberg in 1995, but Spielberg chose to direct other projects, and convinced Kubrick to remain as director.[11][15] The film was put on hold due to Kubrick's commitment to Eyes Wide Shut (1999).[16] After the filmmaker's death in March 1999, Harlan and Christiane Kubrick approached Spielberg to take over the director's position.[17][18] By November 1999, Spielberg was writing the screenplay based on Watson's 90-page story treatment. It was his first solo screenplay credit since Close Encounters of the Third Kind (1977).[19] Spielberg remained close to Watson's treatment, but removed various sex scenes with Gigolo Joe. Pre-production was briefly halted during February 2000, because Spielberg pondered directing other projects, which were Harry Potter and the Philosopher's Stone, Minority Report and Memoirs of a Geisha.[16][20] The following month Spielberg announced that A.I. would be his next project, with Minority Report as a follow-up.[21] When he decided to fast track A.I., Spielberg brought Chris Baker back as concept artist.[15]

The original start date was July 10, 2000,[14] but filming was delayed until August.[22] Aside from a couple of weeks shooting on location in Oxbow Regional Park in Oregon, A.I. was shot entirely using sound stages at Warner Bros. Studios and the Spruce Goose Dome in Long Beach, south LA.[23] The Swinton house was constructed on Stage 16, while Stage 20 was used for Rouge City and other sets.[24][25] Spielberg copied Kubrick's obsessively secretive approach to filmmaking by refusing to give the complete script to cast and crew, banning press from the set, and making actors sign confidentiality agreements. Social robotics expert Cynthia Breazeal served as technical consultant during production.[14][26] Haley Joel Osment and Jude Law applied prosthetic makeup daily in an attempt to look shinier and robotic.[3] Costume designer Bob Ringwood (Batman, Troy) studied pedestrians on the Las Vegas Strip for his influence on the Rouge City extras.[27] Spielberg found post-production on A.I. difficult because he was simultaneously preparing to shoot Minority Report.[28]

The film's soundtrack was released by Warner Bros. Records in 2001. The original score was composed by John Williams and featured singers Lara Fabian on two songs and Josh Groban on one. The film's score also had a limited release as an official "For your consideration Academy Promo", as well as a complete score issue by La-La Land Records in 2015. The band Ministry appears in the film playing the song "What About Us?" (but the song does not appear on the official soundtrack album).

Warner Bros. used an alternate reality game titled The Beast to promote the film. Over forty websites were created by Atomic Pictures in New York City (kept online at Cloudmakers.org) including the website for Cybertronics Corp. There were to be a series of video games for the Xbox video game console that followed the storyline of The Beast, but they went undeveloped. To avoid audiences mistaking A.I. for a family film, no action figures were created, although Hasbro released a talking Teddy following the film's release in June 2001.[14]

In November 2000, during production, a video-only webcam (dubbed the "Bagel Cam") was placed in the craft services truck on the film's set at the Queen Mary Dome in Long Beach, California. Steven Spielberg, producer Kathleen Kennedy and various other production personnel visited the camera and interacted with fans over the course of three days.[29][30]

A.I. had its premiere at the Venice Film Festival in 2001.[31]

The film opened in 3,242 theaters in the United States on June 29, 2001, earning $29,352,630 during its opening weekend. A.I. went on to gross $78.62 million in US totals as well as $157.31 million in foreign countries, coming to a worldwide total of $235.93 million.[32]

The film received generally positive reviews. Based on 190 reviews collected by Rotten Tomatoes, 73% of the critics gave the film positive notices with a score of 6.6 out of 10. The website described the critical consensus perceiving the film as "a curious, not always seamless, amalgamation of Kubrick's chilly bleakness and Spielberg's warm-hearted optimism. [The film] is, in a word, fascinating."[33] By comparison, Metacritic collected an average score of 65, based on 32 reviews, which is considered favorable.[34]

Producer Jan Harlan stated that Kubrick "would have applauded" the final film, while Kubrick's widow Christiane also enjoyed A.I.[35] Brian Aldiss admired the film as well: "I thought what an inventive, intriguing, ingenious, involving film this was. There are flaws in it and I suppose I might have a personal quibble but it's so long since I wrote it." Of the film's ending, he wondered how it might have been had Kubrick directed the film: "That is one of the 'ifs' of film history - at least the ending indicates Spielberg adding some sugar to Kubrick's wine. The actual ending is overly sympathetic and moreover rather overtly engineered by a plot device that does not really bear credence. But it's a brilliant piece of film and of course it's a phenomenon because it contains the energies and talents of two brilliant filmmakers."[36] Richard Corliss heavily praised Spielberg's direction, as well as the cast and visual effects.[37] Roger Ebert awarded the film 3 out of 4 stars, saying that it was "Audacious, technically masterful, challenging, sometimes moving [and] ceaselessly watchable. [But] the movie's conclusion is too facile and sentimental, given what has gone before. It has mastered the artificial, but not the intelligence."[38] On July 8, 2011, Ebert reviewed A.I. again when he added it to his "Great Movies" pantheon.[39] Leonard Maltin gives the film a not-so-positive review in his Movie Guide, giving it two stars out of four, writing: "[The] intriguing story draws us in, thanks in part to Osment's exceptional performance, but takes several wrong turns; ultimately, it just doesn't work. Spielberg rewrote the adaptation Stanley Kubrick commissioned of the Brian Aldiss short story 'Super Toys Last All Summer Long'; [the] result is a curious and uncomfortable hybrid of Kubrick and Spielberg sensibilities." However, he calls John Williams' music score "striking". Jonathan Rosenbaum compared A.I. to Solaris (1972), and praised both "Kubrick for proposing that Spielberg direct the project and Spielberg for doing his utmost to respect Kubrick's intentions while making it a profoundly personal work."[40] Film critic Armond White, of the New York Press, praised the film, noting that "each part of David's journey through carnal and sexual universes into the final eschatological devastation becomes as profoundly philosophical and contemplative as anything by cinema's most thoughtful, speculative artists: Borzage, Ozu, Demy, Tarkovsky."[41] Filmmaker Billy Wilder hailed A.I. as "the most underrated film of the past few years."[42] When British filmmaker Ken Russell saw the film, he wept during the ending.[43]

Mick LaSalle gave a largely negative review. "A.I. exhibits all its creators' bad traits and none of the good. So we end up with the structureless, meandering, slow-motion endlessness of Kubrick combined with the fuzzy, cuddly mindlessness of Spielberg." Dubbing it Spielberg's "first boring movie", LaSalle also believed the robots at the end of the film were aliens, and compared Gigolo Joe to the "useless" Jar Jar Binks, yet praised Robin Williams for his portrayal of a futuristic Albert Einstein.[44] Peter Travers gave a mixed review, concluding "Spielberg cannot live up to Kubrick's darker side of the future." But he still put the film on his top-ten list of that year's best movies.[45] David Denby in The New Yorker criticized A.I. for not adhering closely to his concept of the Pinocchio character. Spielberg responded to some of the criticisms of the film, stating that many of the "so-called sentimental" elements of A.I., including the ending, were in fact Kubrick's, and that the darker elements were his own.[46] However, Sara Maitland, who worked on the project with Kubrick in the 1990s, claimed that one of the reasons Kubrick never started production on A.I. was because he had a hard time making the ending work.[47] James Berardinelli found the film "consistently involving, with moments of near-brilliance, but far from a masterpiece. In fact, as the long-awaited 'collaboration' of Kubrick and Spielberg, it ranks as something of a disappointment." Of the film's highly debated finale, he claimed, "There is no doubt that the concluding 30 minutes are all Spielberg; the outstanding question is where Kubrick's vision left off and Spielberg's began."[48]

Screenwriter Ian Watson has speculated, "Worldwide, A.I. was very successful (and the 4th highest earner of the year) but it didn't do quite so well in America, because the film, so I'm told, was too poetical and intellectual in general for American tastes. Plus, quite a few critics in America misunderstood the film, thinking for instance that the Giacometti-style beings in the final 20 minutes were aliens (whereas they were robots of the future who had evolved themselves from the robots in the earlier part of the film) and also thinking that the final 20 minutes were a sentimental addition by Spielberg, whereas those scenes were exactly what I wrote for Stanley and exactly what he wanted, filmed faithfully by Spielberg."[49]

In 2002, Spielberg told film critic Joe Leydon that "People pretend to think they know Stanley Kubrick, and think they know me, when most of them don't know either of us". "And what's really funny about that is, all the parts of A.I. that people assume were Stanley's were mine. And all the parts of A.I. that people accuse me of sweetening and softening and sentimentalizing were all Stanley's. The teddy bear was Stanley's. The whole last 20 minutes of the movie was completely Stanley's. The whole first 35, 40 minutes of the film, all the stuff in the house, was word for word from Stanley's screenplay. This was Stanley's vision." "Eighty percent of the critics got it all mixed up. But I could see why. Because, obviously, I've done a lot of movies where people have cried and have been sentimental. And I've been accused of sentimentalizing hard-core material. But in fact it was Stanley who did the sweetest parts of A.I., not me. I'm the guy who did the dark center of the movie, with the Flesh Fair and everything else. That's why he wanted me to make the movie in the first place. He said, 'This is much closer to your sensibilities than my own.'"[50]

Upon rewatching the film many years after its release, BBC film critic Mark Kermode apologized to Spielberg in an interview in January 2013 for "getting it wrong" on the film when he first viewed it in 2001. He now believes the film to be Spielberg's "enduring masterpiece".[51]

Visual effects supervisors Dennis Muren, Stan Winston, Michael Lantieri and Scott Farrar were nominated for the Academy Award for Best Visual Effects, while John Williams was nominated for Best Original Music Score.[52] Steven Spielberg, Jude Law and Williams received nominations at the 59th Golden Globe Awards.[53] The visual effects department was once again nominated at the 55th British Academy Film Awards.[54] A.I. was successful at the Saturn Awards. Spielberg (for his screenplay), the visual effects department, Williams and Haley Joel Osment (Performance by a Younger Actor) won in their respective categories. The film also won Best Science Fiction Film and an award for its DVD release. Frances O'Connor and Spielberg (as director) were also nominated.[55]


Understanding Memetics – SCP Foundation


Summary, for those in a hurry:

Memetics deals with information transfer, specifically cultural information in society. The basic idea is to treat the exchange of information between people as analogous to the exchange of genetic material: to track the mutation of ideas as they are transmitted from one person to the next, the same way you could track viral transmissions and mutations.

Meme : Memetics :: Gene : Genetics

Memetics does NOT refer to telepathy, ESP or any imaginary psychic mental magic. These words are memetic, and if you understand them then they are having a completely ordinary memetic effect on you.

Memetics in regards to SCP objects tends to focus on the impossible rather than the mundane, regarding effects that are transmitted via information. In general, the effects themselves should remain in the realm of information. A memetic SCP would be more likely to be a phrase that makes you think you have wings as opposed to a phrase that makes you actually grow a pair of wings. If you write up magic words that make people grow wings, it should be described as something other than memetic.

Memetic SCPs do not emanate auras or project beams. They are SCPs which involve ideas and symbols which trigger a response in those who understand them.

Memetic is often incorrectly used by new personnel as the official sounding term for "Weird Mind Shit." However, that is not actually what memetic means. These words are memetic. They are producing a memetic effect in your mind right now, without any magical mind rays lashing out of your computer monitor to grasp your fragile consciousness. Memes are information, more specifically, cultural information.

Outside of the Foundation's walls the concept of memetics is not taken very seriously; it is a theory that conflates the transfer of cultural information with evolutionary biology.

meme : memetic :: gene : genetic

The idea was that certain memes prosper and others wither the same way certain genes produce stronger offspring that out-compete creatures with different genes. Also, it is easy to compare the spread and mutation of information to the spread of a virus. The reason we use the term memetic in our work is largely due to this, as the truly dangerous memes out there can spread like wildfire due to the fact that the very knowledge of them can count as an infection.

Understanding the true nature of memetic threats is critical to surviving them. You cannot wear a special set of magical goggles made of telekill to protect yourself from a meme. THE GOGGLES DO NOTHING. If you just read those words in your head with a bad Teutonic accent, congratulations on being a victim of yet another memetic effect. If you did not know that phrase was an oft-repeated quote from the Simpsons, then congratulations; you are now infected with that knowledge and are free to participate in its spread.

An artifact can no more have a memetic aura or project a memetic beam than a creature could have a genetic aura or genetic beam. Even though you could imagine a creature with genes that allow it to produce some kind of aura or beam like a big doofy X-man, remember that the examples we have of such creatures in containment are not getting their super-powered emanations from anything resembling our scientific understanding of genetics and biology. Neither are the memetic artifacts. We contain these things specifically because we cannot understand or explain them yet. At the end of the day we're still using a clumsy concept to describe things we don't have a full grasp of.

It is very rare that anything with a dangerous memetic component could be described as hostile to begin with. We do not contain memetic threats because they are out to get us. They are threats because it is dangerous for us to merely perceive them. It is exceptionally rare for dangerous memes to even have anything resembling sapience with the exception of certain known entities which exist entirely within the medium of "cultural information" such as SCP-, SCP-732 and SCP-423.

A dangerous meme is basically a trigger that sets off something inside of you that you may or may not have been aware of. What would your knee-jerk reaction be to knowing that your rival is sleeping with your one true love? How would you react if you were to unwittingly catch them in the act? That kind of sudden revelation can turn a mild-mannered citizen into a killer, so don't be surprised that there are other strange bits of information out there that can break the human mind in different yet equally drastic ways.

Protecting yourself from memetic threat is very tricky and can be worse than the threat itself. There are reasons that we behave the way we do, there are reasons our emotions soar when we hear just the right combination of sounds in a piece of music. Do you want to stop thinking about the Simpsons or your obnoxious nerdy friends that quote it every time you hear the phrase THE GOGGLES DO NOTHING? That would require forgetting about the Simpsons and your friends.

Do you want to survive hearing or reading the phrase " ?" Well, sadly we don't quite know what other information you need to forget or know to prevent [DATA EXPUNGED] but we're getting better. Lobotomies and pills help, and are one of the few times that the cure is not worse than the disease. The sum total of our human condition; our cultural knowledge and upbringing and memories and identity; this is what makes us susceptible to the occasional memetic compulsion.

So it's not the basalt monolith or its bizarre carvings that is making you strangle your companions with your own intestines; the problem was within you all along.

Should you ever find yourself under a memetic compulsion and aware of the fact, remember that there are certain mental exercises that you can perform which may save your life or the lives of your companions. Changing the information your mind is being presented with may just change how you react to it, and the more abrupt or absurd the change is the better.

Imagine the fearsome entity is wearing a bright pink nightgown. Draw a mustache on the haunted painting. Pee on the stone altar. Wear the terrible sculpture like a hat.

And if all else fails, bend over and kiss your ass goodbye. I'm not kidding. That could actually help.

- Dr. Johannes Sorts received a special dispensation to use the word "doofy" in this document

But seriously

This was originally intended as a piece of fiction on its own before it got stuck into the information bar with plenty of other plainly out-of-character writing guides. So here's the important things to take away:

1 - "Memetics" is a specific concept regarding information exchange. It has nothing to do with telepathy or ESP or psychic compulsions.

2 - SCP-148 has no effect on anything memetic. Don't screw this up or we will give you an incredibly hard time about it.

3 - Psychic compulsions are lame and you should think twice before using them in your new SCP, even if you avoid misusing the term "memetic" when you do it.

4 - Sorts' Rule for all memetic SCPs is "Memetic effect + crazy to death = failure."

5 - Wear it like a haaaaat!!


Meme – Wikipedia, the free encyclopedia


A meme (/ˈmiːm/ MEEM)[1] is "an idea, behavior, or style that spreads from person to person within a culture".[2] A meme acts as a unit for carrying cultural ideas, symbols, or practices that can be transmitted from one mind to another through writing, speech, gestures, rituals, or other imitable phenomena with a mimicked theme. Supporters of the concept regard memes as cultural analogues to genes in that they self-replicate, mutate, and respond to selective pressures.[3]

Proponents theorize that memes are a viral phenomenon that may evolve by natural selection in a manner analogous to that of biological evolution. Memes do this through the processes of variation, mutation, competition, and inheritance, each of which influences a meme's reproductive success. Memes spread through the behavior that they generate in their hosts. Memes that propagate less prolifically may become extinct, while others may survive, spread, and (for better or for worse) mutate. Memes that replicate most effectively enjoy more success, and some may replicate effectively even when they prove to be detrimental to the welfare of their hosts.[4]

A field of study called memetics[5] arose in the 1990s to explore the concepts and transmission of memes in terms of an evolutionary model. Criticism from a variety of fronts has challenged the notion that academic study can examine memes empirically. However, developments in neuroimaging may make empirical study possible.[6] Some commentators in the social sciences question the idea that one can meaningfully categorize culture in terms of discrete units, and are especially critical of the biological nature of the theory's underpinnings.[7] Others have argued that this use of the term is the result of a misunderstanding of the original proposal.[8]

The word meme originated with Richard Dawkins' 1976 book The Selfish Gene. Dawkins's own position is somewhat ambiguous: he welcomed N. K. Humphrey's suggestion that "memes should be considered as living structures, not just metaphorically"[9] and proposed to regard memes as "physically residing in the brain".[10] Later, he argued that his original intentions, presumably before his approval of Humphrey's opinion, had been simpler.[11] At the New Directors' Showcase 2013 in Cannes, Dawkins' opinion on memetics was deliberately ambiguous.[12]

The word meme is a shortening (modeled on gene) of mimeme (from Ancient Greek μίμημα mīmēma, "imitated thing", from μιμεῖσθαι mimeisthai, "to imitate", from μῖμος mimos, "mime")[13] coined by British evolutionary biologist Richard Dawkins in The Selfish Gene (1976)[1][14] as a concept for discussion of evolutionary principles in explaining the spread of ideas and cultural phenomena. Examples of memes given in the book included melodies, catchphrases, fashion, and the technology of building arches.[15] Kenneth Pike coined the related terms emic and etic, generalizing the linguistic ideas of phoneme, morpheme and tagmeme (as set out by Leonard Bloomfield), characterizing them as the insider's view and the outsider's view of behaviour, and extending the concept into a tagmemic theory of human behaviour (culminating in Language in Relation to a Unified Theory of the Structure of Human Behaviour, 1954).

The word meme originated with Richard Dawkins' 1976 book The Selfish Gene. Dawkins cites as inspiration the work of geneticist L. L. Cavalli-Sforza, anthropologist F. T. Cloak [16] and ethologist J. M. Cullen.[17] Dawkins wrote that evolution depended not on the particular chemical basis of genetics, but only on the existence of a self-replicating unit of transmission in the case of biological evolution, the gene. For Dawkins, the meme exemplified another self-replicating unit with potential significance in explaining human behavior and cultural evolution.

Dawkins used the term to refer to any cultural entity that an observer might consider a replicator. He hypothesized that one could view many cultural entities as replicators, and pointed to melodies, fashions and learned skills as examples. Memes generally replicate through exposure to humans, who have evolved as efficient copiers of information and behavior. Because humans do not always copy memes perfectly, and because they may refine, combine or otherwise modify them with other memes to create new memes, memes can change over time. Dawkins likened the process by which memes survive and change through the evolution of culture to the natural selection of genes in biological evolution.[15]

Dawkins defined the meme as a unit of cultural transmission, or a unit of imitation and replication, but later definitions would vary. The lack of a consistent, rigorous, and precise understanding of what typically makes up one unit of cultural transmission remains a problem in debates about memetics.[19] In contrast, the concept of genetics gained concrete evidence with the discovery of the biological functions of DNA. Meme transmission requires a physical medium, such as photons, sound waves, touch, taste or smell because memes can be transmitted only through the senses.

Dawkins noted that in a society with culture a person need not have descendants to remain influential in the actions of individuals thousands of years after their death:

But if you contribute to the world's culture, if you have a good idea...it may live on, intact, long after your genes have dissolved in the common pool. Socrates may or may not have a gene or two alive in the world today, as G.C. Williams has remarked, but who cares? The meme-complexes of Socrates, Leonardo, Copernicus and Marconi are still going strong.[20]

Memes, analogously to genes, vary in their aptitude to replicate; successful memes remain and spread, whereas unfit ones stall and are forgotten. Thus memes that prove more effective at replicating and surviving are selected in the meme pool.

Memes first need retention. The longer a meme stays in its hosts, the higher its chances of propagation are. When a host uses a meme, the meme's life is extended.[21] The reuse of the neural space hosting a certain meme's copy to host different memes is the greatest threat to that meme's copy.[22]

A meme which increases the longevity of its hosts will generally survive longer. On the contrary, a meme which shortens the longevity of its hosts will tend to disappear faster. However, as hosts are mortal, retention is not sufficient to perpetuate a meme in the long term; memes also need transmission.

Life-forms can transmit information both vertically (from parent to child, via replication of genes) and horizontally (through viruses and other means). Memes can replicate vertically or horizontally within a single biological generation. They may also lie dormant for long periods of time.

Memes reproduce by copying from one nervous system to another, either by communication or imitation. Imitation often involves the copying of an observed behavior of another individual. Communication may be direct or indirect, where memes transmit from one individual to another through a copy recorded in an inanimate source, such as a book or a musical score. Adam McNamara has suggested that memes can thereby be classified as either internal or external memes (i-memes or e-memes).[6]

Some commentators have likened the transmission of memes to the spread of contagions.[23] Social contagions such as fads, hysteria, copycat crime, and copycat suicide exemplify memes seen as the contagious imitation of ideas. Observers distinguish the contagious imitation of memes from instinctively contagious phenomena such as yawning and laughing, which they consider innate (rather than socially learned) behaviors.[24]

Aaron Lynch described seven general patterns of meme transmission, or "thought contagion":[25]

Dawkins initially defined meme as a noun that "conveys the idea of a unit of cultural transmission, or a unit of imitation".[15] John S. Wilkins retained the notion of meme as a kernel of cultural imitation while emphasizing the meme's evolutionary aspect, defining the meme as "the least unit of sociocultural information relative to a selection process that has favorable or unfavorable selection bias that exceeds its endogenous tendency to change".[26] The meme as a unit provides a convenient means of discussing "a piece of thought copied from person to person", regardless of whether that thought contains others inside it, or forms part of a larger meme. A meme could consist of a single word, or a meme could consist of the entire speech in which that word first occurred. This forms an analogy to the idea of a gene as a single unit of self-replicating information found on the self-replicating chromosome.

While the identification of memes as "units" conveys their nature to replicate as discrete, indivisible entities, it does not imply that thoughts somehow become quantized or that "atomic" ideas exist that cannot be dissected into smaller pieces. A meme has no given size. Susan Blackmore writes that melodies from Beethoven's symphonies are commonly used to illustrate the difficulty involved in delimiting memes as discrete units. She notes that while the first four notes of Beethoven's Fifth Symphony form a meme widely replicated as an independent unit, one can regard the entire symphony as a single meme as well.[19]

The inability to pin an idea or cultural feature to quantifiable key units is widely acknowledged as a problem for memetics. It has been argued, however, that the traces of memetic processing can be quantified using neuroimaging techniques which measure changes in the connectivity profiles between brain regions.[6] Blackmore meets such criticism by stating that memes compare with genes in this respect: while a gene has no particular size, nor can we ascribe every phenotypic feature directly to a particular gene, it has value because it encapsulates that key unit of inherited expression subject to evolutionary pressures. To illustrate, she notes that evolution selects for the gene for features such as eye color; it does not select for the individual nucleotide in a strand of DNA. Memes play a comparable role in understanding the evolution of imitated behaviors.[19]

The 1981 book Genes, Mind, and Culture: The Coevolutionary Process by Charles J. Lumsden and E. O. Wilson proposed the theory that genes and culture co-evolve, and that the fundamental biological units of culture must correspond to neuronal networks that function as nodes of semantic memory. They coined their own word, "culturgen", which did not catch on. Coauthor Wilson later acknowledged the term meme as the best label for the fundamental unit of cultural inheritance in his 1998 book Consilience: The Unity of Knowledge, which elaborates upon the fundamental role of memes in unifying the natural and social sciences.[27]

Dawkins noted the three conditions that must exist for evolution to occur:[28] variation, or the introduction of new change to existing elements; heredity or replication, or the capacity to create copies of elements; and differential "fitness", or the opportunity for one element to be more or less suited to the environment than another.

Dawkins emphasizes that the process of evolution naturally occurs whenever these conditions co-exist, and that evolution does not apply only to organic elements such as genes. He regards memes as also having the properties necessary for evolution, and thus sees meme evolution as not simply analogous to genetic evolution, but as a real phenomenon subject to the laws of natural selection. Dawkins noted that as various ideas pass from one generation to the next, they may either enhance or detract from the survival of the people who obtain those ideas, or influence the survival of the ideas themselves. For example, a certain culture may develop unique designs and methods of tool-making that give it a competitive advantage over another culture. Each tool-design thus acts somewhat similarly to a biological gene in that some populations have it and others do not, and the meme's function directly affects the presence of the design in future generations. In keeping with the thesis that in evolution one can regard organisms simply as suitable "hosts" for reproducing genes, Dawkins argues that one can view people as "hosts" for replicating memes. Consequently, a successful meme may or may not need to provide any benefit to its host.[28]

Unlike genetic evolution, memetic evolution can show both Darwinian and Lamarckian traits. Cultural memes will have the characteristic of Lamarckian inheritance when a host aspires to replicate the given meme through inference rather than by exactly copying it. Take for example the case of the transmission of a simple skill such as hammering a nail, a skill that a learner imitates from watching a demonstration without necessarily imitating every discrete movement modeled by the teacher in the demonstration, stroke for stroke.[29] Susan Blackmore distinguishes the difference between the two modes of inheritance in the evolution of memes, characterizing the Darwinian mode as "copying the instructions" and the Lamarckian as "copying the product."[19]

Clusters of memes, or memeplexes (also known as meme complexes or as memecomplexes), such as cultural or political doctrines and systems, may also play a part in the acceptance of new memes. Memeplexes comprise groups of memes that replicate together and coadapt.[19] Memes that fit within a successful memeplex may gain acceptance by "piggybacking" on the success of the memeplex. As an example, John D. Gottsch discusses the transmission, mutation and selection of religious memeplexes and the theistic memes contained.[30] Theistic memes discussed include the "prohibition of aberrant sexual practices such as incest, adultery, homosexuality, bestiality, castration, and religious prostitution", which may have increased vertical transmission of the parent religious memeplex. Similar memes are thereby included in the majority of religious memeplexes, and harden over time; they become an "inviolable canon" or set of dogmas, eventually finding their way into secular law. This could also be referred to as the propagation of a taboo.

The discipline of memetics, which dates from the mid-1980s, provides an approach to evolutionary models of cultural information transfer based on the concept of the meme. Memeticists have proposed that just as memes function analogously to genes, memetics functions analogously to genetics. Memetics attempts to apply conventional scientific methods (such as those used in population genetics and epidemiology) to explain existing patterns and transmission of cultural ideas.

Principal criticisms of memetics include the claim that memetics ignores established advances in other fields of cultural study, such as sociology, cultural anthropology, cognitive psychology, and social psychology. Questions remain whether or not the meme concept counts as a validly disprovable scientific theory. This view regards memetics as a theory in its infancy: a protoscience to proponents, or a pseudoscience to some detractors.

An objection to the study of the evolution of memes in genetic terms (although not to the existence of memes) involves a perceived gap in the gene/meme analogy: the cumulative evolution of genes depends on biological selection-pressures neither too great nor too small in relation to mutation-rates. There seems no reason to think that the same balance will exist in the selection pressures on memes.[31]

Luis Benitez-Bribiesca M.D., a critic of memetics, calls the theory a "pseudoscientific dogma" and "a dangerous idea that poses a threat to the serious study of consciousness and cultural evolution". As a factual criticism, Benitez-Bribiesca points to the lack of a "code script" for memes (analogous to the DNA of genes), and to the excessive instability of the meme mutation mechanism (that of an idea going from one brain to another), which would lead to a low replication accuracy and a high mutation rate, rendering the evolutionary process chaotic.[32]

British political philosopher John Gray has characterized Dawkins' memetic theory of religion as "nonsense" and "not even a theory... the latest in a succession of ill-judged Darwinian metaphors", comparable to Intelligent Design in its value as a science.[33]

Another critique comes from semiotic theorists such as Deacon[34] and Kull.[35] This view regards the concept of "meme" as a primitivized concept of "sign". The meme is thus described in memetics as a sign lacking a triadic nature. Semioticians can regard a meme as a "degenerate" sign, which includes only its ability of being copied. Accordingly, in the broadest sense, the objects of copying are memes, whereas the objects of translation and interpretation are signs.

Fracchia and Lewontin regard memetics as reductionist and inadequate.[36] Evolutionary biologist Ernst Mayr disapproved of Dawkins' gene-based view and usage of the term "meme", asserting it to be an "unnecessary synonym" for "concept", reasoning that concepts are not restricted to an individual or a generation, may persist for long periods of time, and may evolve.[37]

Opinions differ as to how best to apply the concept of memes within a "proper" disciplinary framework. One view sees memes as providing a useful philosophical perspective with which to examine cultural evolution. Proponents of this view (such as Susan Blackmore and Daniel Dennett) argue that considering cultural developments from a meme's-eye view, as if memes themselves respond to pressure to maximise their own replication and survival, can lead to useful insights and yield valuable predictions into how culture develops over time. Others such as Bruce Edmonds and Robert Aunger have focused on the need to provide an empirical grounding for memetics to become a useful and respected scientific discipline.[38][39]

A third approach, described by Joseph Poulshock as "radical memetics", seeks to place memes at the centre of a materialistic theory of mind and of personal identity.[40]

Prominent researchers in evolutionary psychology and anthropology, including Scott Atran, Dan Sperber, Pascal Boyer, John Tooby and others, argue the possibility of incompatibility between modularity of mind and memetics.[citation needed] In their view, minds structure certain communicable aspects of the ideas produced, and these communicable aspects generally trigger or elicit ideas in other minds through inference (to relatively rich structures generated from often low-fidelity input) and not high-fidelity replication or imitation. Atran discusses communication involving religious beliefs as a case in point. In one set of experiments he asked religious people to write down on a piece of paper the meanings of the Ten Commandments. Despite the subjects' own expectations of consensus, interpretations of the commandments showed wide ranges of variation, with little evidence of consensus. In another experiment, subjects with autism and subjects without autism interpreted ideological and religious sayings (for example, "Let a thousand flowers bloom" or "To everything there is a season"). People with autism showed a significant tendency to closely paraphrase and repeat content from the original statement (for example: "Don't cut flowers before they bloom"). Controls tended to infer a wider range of cultural meanings with little replicated content (for example: "Go with the flow" or "Everyone should have equal opportunity"). Only the subjects with autism, who lack the degree of inferential capacity normally associated with aspects of theory of mind, came close to functioning as "meme machines".[41]

In his book The Robot's Rebellion, Stanovich uses the memes and memeplex concepts to describe a program of cognitive reform that he refers to as a "rebellion". Specifically, Stanovich argues that the use of memes as a descriptor for cultural units is beneficial because it serves to emphasize transmission and acquisition properties that parallel the study of epidemiology. These properties make salient the sometimes parasitic nature of acquired memes, and as a result individuals should be motivated to reflectively acquire memes using what he calls a "Neurathian bootstrap" process.[42]

Although social scientists such as Max Weber sought to understand and explain religion in terms of a cultural attribute, Richard Dawkins called for a re-analysis of religion in terms of the evolution of self-replicating ideas apart from any resulting biological advantages they might bestow.

As an enthusiastic Darwinian, I have been dissatisfied with explanations that my fellow-enthusiasts have offered for human behaviour. They have tried to look for 'biological advantages' in various attributes of human civilization. For instance, tribal religion has been seen as a mechanism for solidifying group identity, valuable for a pack-hunting species whose individuals rely on cooperation to catch large and fast prey. Frequently the evolutionary preconception in terms of which such theories are framed is implicitly group-selectionist, but it is possible to rephrase the theories in terms of orthodox gene selection.

He argued that the role of key replicator in cultural evolution belongs not to genes, but to memes replicating thought from person to person by means of imitation. These replicators respond to selective pressures that may or may not affect biological reproduction or survival.[15]

In her book The Meme Machine, Susan Blackmore regards religions as particularly tenacious memes. Many of the features common to the most widely practiced religions provide built-in advantages in an evolutionary context, she writes. For example, religions that preach the value of faith over evidence from everyday experience or reason inoculate societies against many of the most basic tools people commonly use to evaluate their ideas. By linking altruism with religious affiliation, religious memes can proliferate more quickly because people perceive that they can reap societal as well as personal rewards. The longevity of religious memes improves with their documentation in revered religious texts.[19]

Aaron Lynch attributed the robustness of religious memes in human culture to the fact that such memes incorporate multiple modes of meme transmission. Religious memes pass down the generations from parent to child and across a single generation through the meme-exchange of proselytism. Most people will hold the religion taught them by their parents throughout their life. Many religions feature adversarial elements, punishing apostasy, for instance, or demonizing infidels. In Thought Contagion Lynch identifies the memes of transmission in Christianity as especially powerful in scope. Believers view the conversion of non-believers both as a religious duty and as an act of altruism. The promise of heaven to believers and threat of hell to non-believers provide a strong incentive for members to retain their belief. Lynch asserts that belief in the Crucifixion of Jesus in Christianity amplifies each of its other replication advantages through the indebtedness believers have to their Savior for sacrifice on the cross. The image of the crucifixion recurs in religious sacraments, and the proliferation of symbols of the cross in homes and churches potently reinforces the wide array of Christian memes.[25]

Although religious memes have proliferated in human cultures, the modern scientific community has been relatively resistant to religious belief. Robertson (2007)[43] reasoned that if evolution is accelerated in conditions of propagative difficulty,[44] then we would expect to encounter variations of religious memes, established in general populations, addressed to scientific communities. Using a memetic approach, Robertson deconstructed two attempts to privilege religiously held spirituality in scientific discourse, and explored the advantages of a memetic approach over more traditional "modernization" and "supply side" theses for understanding the evolution and propagation of religion.

In Cultural Software: A Theory of Ideology, Jack Balkin argued that memetic processes can explain many of the most familiar features of ideological thought. His theory of "cultural software" maintained that memes form narratives, social networks, metaphoric and metonymic models, and a variety of different mental structures. Balkin maintains that the same structures used to generate ideas about free speech or free markets also serve to generate racist beliefs. To Balkin, whether memes become harmful or maladaptive depends on the environmental context in which they exist rather than on any special feature of their origin. Balkin describes racist beliefs as "fantasy" memes that become harmful or unjust "ideologies" when diverse peoples come together, as through trade or competition.[45]

In A Theory of Architecture, Nikos Salingaros speaks of memes as "freely propagating clusters of information" which can be beneficial or harmful. He contrasts memes to patterns and true knowledge, characterizing memes as "greatly simplified versions of patterns" and as "unreasoned matching to some visual or mnemonic prototype".[46] Referring to Dawkins, Salingaros emphasizes that memes can be transmitted due to their own communicative properties, that "the simpler they are, the faster they can proliferate", and that the most successful memes "come with a great psychological appeal".[47]

Architectural memes, according to Salingaros, can have destructive power. "Images portrayed in architectural magazines representing buildings that could not possibly accommodate everyday uses become fixed in our memory, so we reproduce them unconsciously."[48] He lists various architectural memes that have circulated since the 1920s and which, in his view, have led to contemporary architecture becoming quite decoupled from human needs. They lack connection and meaning, thereby preventing "the creation of true connections necessary to our understanding of the world". He sees them as no different from antipatterns in software design: solutions that are false but are re-utilized nonetheless.[49]

An "Internet meme" is a concept that spreads rapidly from person to person via the Internet, largely through email, blogs, forums, imageboards like 4chan, social networking sites like Facebook, Instagram or Twitter, instant messaging, and video hosting services like YouTube and Twitch.tv.[50]

In 2013 Richard Dawkins characterized an Internet meme as one deliberately altered by human creativity, distinguished from Dawkins's original idea involving mutation by random change and a form of Darwinian selection.[51]

One technique of meme mapping represents the evolution and transmission of a meme across time and space.[52] Such a meme map uses a figure-8 diagram (an analemma) to map the gestation (in the lower loop), birth (at the choke point), and development (in the upper loop) of the selected meme. Such meme maps are nonscalar, with time mapped onto the y-axis and space onto the x-axis transect. One can read the temporal progression of the mapped meme from south to north on such a meme map. Paull has published a worked example using the "organics meme" (as in organic agriculture).[52]

View original post here:

Meme - Wikipedia, the free encyclopedia
