Daily Archives: October 23, 2016

Home | Discover Oil & Gas | Rigzone

Posted: October 23, 2016 at 4:25 am

EMPLOYMENT

COLUMN: How to Get Noticed by Young Recruiters

A public relations blogger offers tips for seasoned pros in her industry. How well do they apply to oil and gas candidates?

O&G

Iran is Open for the Oil Business - Sort Of

Hunger for Iran's cheap oil will lure some foreign companies, but questions remain over what it will cost commodity prices.

TECHNOLOGY

Digital Technology to Transform Oil, Gas Hiring Practices

Industry insiders discuss the skill sets that oil and gas companies will need as they move towards digitalization.

VIDEO

Passive Aggressive Behaviors to Avoid at Work

Rigzone highlights common workplace behaviors that are considered passive-aggressive.

Rigzone tracks the worldwide offshore rig fleet through its proprietary RigLogix database, and we make some of the key rig fleet data available to you here. You'll find information on offshore rig utilization, day rates, contracts, equipment specs, and much more.

The Rigzone Equipment Market brings together buyers and sellers of oilfield equipment, including land rigs, offshore rigs, drilling equipment, production equipment, and more, in a seamless, worldwide exchange.

Outside Plant Services

FUEL TANK

Location: HOUSTON, TX (OUR YARD) | Condition: USED, GOOD | Price: $9,500.00

See the rest here:

Home | Discover Oil & Gas | Rigzone

Posted in Offshore | Comments Off on Home | Discover Oil & Gas | Rigzone

Psychedelic drug – Wikipedia

Posted: at 4:24 am

"Psychedelics" redirects here. For other uses, see Psychedelic.

A psychedelic drug is a drug whose primary action is to alter cognition and perception, typically by agonising serotonin receptors.[2]

Unlike other psychoactive drugs, such as stimulants and opioids, which induce familiar states of consciousness, psychedelics tend to affect the mind in ways that result in experiences qualitatively different from those of ordinary consciousness. The psychedelic experience is often compared to non-ordinary forms of consciousness such as trance, meditation, yoga, religious ecstasy, dreaming, and even near-death experiences. With a few exceptions, most psychedelic drugs fall into one of three families of chemical compounds: tryptamines, phenethylamines, and lysergamides.

Many psychedelic drugs are illegal worldwide under the UN conventions unless used in a medical or religious context. Despite these regulations, recreational use of psychedelics is common.

The term psychedelic is derived from the Greek words ψυχή (psyche, "soul, mind") and δηλεῖν (delein, "to manifest"), hence "soul-manifesting", the implication being that psychedelics can access the soul and develop unused potentials of the human mind.[3] The word was coined in 1956 by the British psychiatrist Humphry Osmond; the spelling was loathed by the American ethnobotanist Richard Schultes but championed by the American psychologist Timothy Leary.[4]

Aldous Huxley had suggested his own coinage, phanerothyme (from the Greek phaneroein, "visible", and thymos, "soul", thus "visible soul"), to Humphry Osmond in 1956.[5] More recently, the term entheogenic has come into use to denote the use of psychedelic drugs in a religious, spiritual, or mystical context.

Psychedelics have a long history of traditional use in medicine and religion, where they are prized for their perceived ability to promote physical and mental healing. In this context, they are often known as entheogens. Native American practitioners using mescaline-containing cacti (most notably peyote, San Pedro, and Peruvian torch) have reported success against alcoholism, and Mazatec practitioners routinely use psilocybin mushrooms for divination and healing. Ayahuasca, which contains the powerful psychedelic DMT, is used in Peru and other parts of South America for spiritual and physical healing as well as in religious festivals.

Classical or serotonergic psychedelics (agonists for the 5-HT2A serotonin receptors) include LSD (also known as "acid"), psilocin (the active constituent of psilocybin mushrooms, commonly known as "magic mushrooms" or "shrooms"), mescaline (the active constituent of peyote), and DMT (the active constituent of ayahuasca and an endogenous compound produced in the human body). Salvia divinorum is an atypical psychedelic that has been gaining popularity over the past decade, due to its legality in many US states. It is often compared to DMT due to its short and very intense trip. A few newer synthetics such as 2C-B have also enjoyed some popularity.

This class of psychedelics includes the classical hallucinogens, including the lysergamides like LSD and LSA, tryptamines like psilocybin and DMT, and phenethylamines like mescaline and 2C-B. Many of these psychedelics cause remarkably similar effects, despite their different chemical structure. However, many users report that the three families have subjectively different qualities in the "feel" of the experience, which are difficult to describe. At lower doses, these include sensory alterations, such as the warping of surfaces, shape suggestibility, and color variations. Users often report intense colors that they have not previously experienced, and repetitive geometric shapes are common. Higher doses often cause intense and fundamental alterations of sensory perception, such as synesthesia or the experience of additional spatial or temporal dimensions.[6] Some compounds, such as 2C-B, have extremely tight "dose curves", meaning the difference between a non-event and an overwhelming disconnection from reality can be very slight. There can be very substantial differences between the drugs, however. For instance, 5-MeO-DMT rarely produces the visual effects typical of other psychedelics and ibogaine (a 'complex tryptamine') is also an NMDA receptor antagonist and κ-opioid receptor agonist in addition to being an agonist for the 5-HT2A receptors, resulting in dissociative effects as well (see dissociatives below).

The empathogen-entactogens are phenethylamines of the MDxx class such as MDMA, MDEA, and MDA. Their effects are characterized by feelings of openness, euphoria, empathy, love, heightened self-awareness, and by mild audio and visual distortions (an overall enhancement of sensory experience is often reported). Their adoption by the rave subculture is probably due to the enhancement of the overall social and musical experience. MDA is atypical in this respect, often causing hallucinations and psychedelic effects equal in profundity to the chemicals in the 5-HT2A agonist category, but with substantially less mental involvement; it is both a serotonin releaser and a 5-HT2A receptor agonist.

Certain dissociative drugs acting via NMDA antagonism are known to produce what some might consider psychedelic effects. The main differences between dissociative psychedelics and serotonergic hallucinogens are that the dissociatives cause more intense derealization and depersonalization.[7] For example, ketamine produces sensations of being disconnected from one's body and that the surrounding environment is unreal, as well as perceptual alterations seen with other psychedelics.[8]

Salvia divinorum is a dissociative that is sometimes classified as an atypical psychedelic. The active molecule in the plant, salvinorin A, is a kappa opioid receptor agonist, working on a part of the brain that deals with pain. Activation of this receptor is also linked to the dysphoria sometimes experienced by users of opiates either therapeutically or recreationally. An unusual feature of S. divinorum is its high potency (dosage is in the microgram range) and extremely disorienting effects, which often include "entity contact", complete loss of reality-perception, and users experiencing their consciousness as being housed in different objects, e.g. a pane of glass or a pencil.

Although many psychedelic drugs are non-addictive[9] and there is no evidence to support long-term harm to mental health,[10] many of these drugs have been declared illegal under the UN Convention on Psychotropic Substances of 1971. In addition, many countries have analogue acts that automatically forbid any drugs sharing similar chemical structures to common illicit substances, regardless of whether they are harmful.


See the original post here:

Psychedelic drug - Wikipedia

Posted in Psychedelics | Comments Off on Psychedelic drug – Wikipedia

Nootropic – Wikipedia

Posted: at 4:24 am

Nootropics (pronounced noh-ə-TROP-iks), also called smart drugs or cognitive enhancers, are drugs, supplements, or other substances that improve cognitive function, particularly executive functions, memory, creativity, or motivation, in healthy individuals.[1][2] The use of cognition-enhancing drugs by healthy individuals in the absence of a medical indication is one of the most debated topics among neuroscientists, psychiatrists, and physicians, spanning a number of issues, including the ethics and fairness of their use, concerns over adverse effects, and the diversion of prescription drugs for nonmedical uses.[1][3][4] Nonetheless, international sales of cognition-enhancing supplements exceeded US$1 billion in 2015, and global demand for these compounds is still growing rapidly.[5]

The word nootropic was coined in 1972 by the Romanian psychologist and chemist Corneliu E. Giurgea,[6][7] from the Greek words νοῦς (nous), "mind", and τρέπειν (trepein), "to bend or turn".[8]

There are only a few drugs that are known to improve some aspect of cognition. Many more are in different stages of development.[9] The most commonly used class of drug is stimulants, such as caffeine.[10]

These drugs are purportedly used primarily to treat cognitive or motor function difficulties attributable to disorders such as Alzheimer's disease, Parkinson's disease, Huntington's disease, and ADHD.[citation needed] Some researchers, however, report more widespread use, even though further research is still needed.[11] Nevertheless, intense marketing may not correlate with efficacy: while scientific studies support the beneficial effects of some compounds, manufacturers' marketing claims for dietary supplements are usually not formally tested and verified by independent entities.[12]

Among students, nootropics have been used to increase productivity, despite a lack of conclusive research on their long-term effects in healthy individuals.[9] The use of prescription stimulants is especially prevalent among students attending academically competitive colleges.[13] Surveys suggest that 0.7–4.5% of German students have used cognitive enhancers in their lifetime.[14][15][16] Stimulants such as dimethylamylamine and methylphenidate are used on college campuses and by younger groups.[9] Based upon studies of self-reported illicit stimulant use, 5–35% of college students use diverted ADHD stimulants, which are primarily used for performance enhancement rather than as recreational drugs.[17][18][19]

Several factors positively and negatively influence the use of drugs to increase cognitive performance. Among them are personal characteristics, drug characteristics, and characteristics of the social context.[14][15][20][21]

The main concern with pharmaceutical drugs is adverse effects, and these concerns apply to cognitive-enhancing drugs as well. Long-term safety data are typically unavailable for some types of nootropics[9] (e.g., many non-pharmaceutical cognitive enhancers, newly developed pharmaceuticals, and pharmaceuticals with short-term therapeutic use). Racetams (piracetam and other compounds that are structurally related to piracetam) have few serious adverse effects and low toxicity, but there is little evidence that they enhance cognition in individuals without cognitive impairments.[22][23] While addiction to stimulants is sometimes identified as a cause for concern,[24] a very large body of research on the therapeutic use of the "more addictive" psychostimulants indicates that addiction is fairly rare at therapeutic doses.[25][26][27] On their safety profile, a systematic review from June 2015 asserted that "evidence indicates that at low, clinically relevant doses, psychostimulants are devoid of the behavioral and neurochemical actions that define this class of drugs and instead act largely as cognitive enhancers."[28]

In the United States dietary supplements may be marketed if the manufacturer can show that it can manufacture the supplement safely, that the supplement is indeed generally recognized as safe, and if the manufacturer does not make any claims about the supplement's use to treat or prevent any disease or condition; supplements that contain drugs or for which treatment or prevention claims are made are illegal under US law.[29]

In 2015, systematic medical reviews and meta-analyses of clinical research in humans established consensus that certain stimulants, only when used at low (therapeutic) concentrations, unambiguously enhance cognition in the general population;[28][30][31][32] in particular, the classes of stimulants that demonstrate cognition-enhancing effects in humans act as direct agonists or indirect agonists of dopamine receptor D1, adrenoceptor A2, or both receptors in the prefrontal cortex.[28][30][32][33] Relatively high doses of stimulants cause cognitive deficits.[32][33]

Racetams, such as piracetam, oxiracetam, and aniracetam, are structurally similar compounds, which are often marketed as cognitive enhancers and sold over-the-counter. Racetams are often referred to as nootropics, but this property of the drug class is not well established.[53] The racetams have poorly understood mechanisms of action; however, piracetam and aniracetam are known to act as positive allosteric modulators of AMPA receptors and appear to modulate cholinergic systems.[54]

According to the FDA, "Piracetam is not a vitamin, mineral, amino acid, herb or other botanical, or dietary substance for use by man to supplement the diet by increasing the total dietary intake. Further, piracetam is not a concentrate, metabolite, constituent, extract or combination of any such dietary ingredient. [...] Accordingly, these products are drugs, under section 201(g)(1)(C) of the Act, 21 U.S.C. 321(g)(1)(C), because they are not foods and they are intended to affect the structure or any function of the body. Moreover, these products are new drugs as defined by section 201(p) of the Act, 21 U.S.C. 321(p), because they are not generally recognized as safe and effective for use under the conditions prescribed, recommended, or suggested in their labeling."[55]

The results of this meta-analysis cannot address the important issues of individual differences in stimulant effects or the role of motivational enhancement in helping perform academic or occupational tasks. However, they do confirm the reality of cognitive enhancing effects for normal healthy adults in general, while also indicating that these effects are modest in size.

The rest is here:

Nootropic - Wikipedia

Posted in Nootropics | Comments Off on Nootropic – Wikipedia

Eugenics – Wikipedia

Posted: at 4:23 am

Eugenics (from the Greek εὐγενής eugenes, "well-born", from εὖ eu, "good, well", and γένος genos, "race, stock, kin")[2][3] is a set of beliefs and practices that aims at improving the genetic quality of the human population.[4][5] It is a social philosophy advocating the improvement of human genetic traits through the promotion of higher rates of sexual reproduction for people with desired traits (positive eugenics), or reduced rates of sexual reproduction and sterilization of people with less-desired or undesired traits (negative eugenics), or both.[6] Alternatively, gene selection rather than "people selection" has recently been made possible through advances in genome editing (e.g. CRISPR).[7] The exact definition of eugenics has been a matter of debate since the term was coined. Its definition as a "social philosophy" (that is, a philosophy with implications for social order) is not universally accepted, and was taken from Frederick Osborn's 1937 journal article "Development of a Eugenic Philosophy".[6]

While eugenic principles have been practiced as far back in world history as Ancient Greece, the modern history of eugenics began in the early 20th century when a popular eugenics movement emerged in the United Kingdom[8] and spread to many countries, including the United States, Canada[9] and most European countries. In this period, eugenic ideas were espoused across the political spectrum. Consequently, many countries adopted eugenic policies meant to improve the genetic stock of their countries. Such programs often included both "positive" measures, such as encouraging individuals deemed particularly "fit" to reproduce, and "negative" measures such as marriage prohibitions and forced sterilization of people deemed unfit for reproduction. People deemed unfit to reproduce often included people with mental or physical disabilities, people who scored in the low ranges of different IQ tests, criminals and deviants, and members of disfavored minority groups. The eugenics movement became negatively associated with Nazi Germany and the Holocaust when many of the defendants at the Nuremberg trials attempted to justify their human rights abuses by claiming there was little difference between the Nazi eugenics programs and the US eugenics programs.[10] In the decades following World War II, with the institution of human rights, many countries gradually abandoned eugenics policies, although some Western countries, among them the United States, continued to carry out forced sterilizations.

Since the 1980s and 1990s, when new assisted reproductive technology procedures became available, such as gestational surrogacy (available since 1985), preimplantation genetic diagnosis (available since 1989), and cytoplasmic transfer (first performed in 1996), fears have emerged about a possible future revival of eugenics and a widening of the gap between the rich and the poor.

A major criticism of eugenics policies is that, regardless of whether "negative" or "positive" policies are used, they are vulnerable to abuse because the criteria of selection are determined by whichever group is in political power. Furthermore, negative eugenics in particular is considered by many to be a violation of basic human rights, which include the right to reproduction. Another criticism is that eugenic policies eventually lead to a loss of genetic diversity, which can result in inbreeding depression due to low genetic variation.

The idea of positive eugenics to produce better human beings has existed at least since Plato suggested selective mating to produce a guardian class.[12] The idea of negative eugenics to decrease the birth of inferior human beings has existed at least since William Goodell (1829–1894) advocated the castration and spaying of the insane.[13][14]

However, the term "eugenics" to describe a modern project of improving the human population through breeding was originally developed by Francis Galton. Galton had read his half-cousin Charles Darwin's theory of evolution, which sought to explain the development of plant and animal species, and desired to apply it to humans. Based on his biographical studies, Galton believed that desirable human qualities were hereditary traits, though Darwin strongly disagreed with this elaboration of his theory.[15] In 1883, one year after Darwin's death, Galton gave his research a name: eugenics.[16] Throughout its recent history, eugenics has remained controversial.

Eugenics became an academic discipline at many colleges and universities, and received funding from many sources.[18] Organisations formed to win public support and sway opinion towards responsible eugenic values in parenthood, including the British Eugenics Education Society of 1907, and the American Eugenics Society of 1921. Both sought support from leading clergymen, and modified their message to meet religious ideals.[19] In 1909 the Anglican clergymen William Inge and James Peile both wrote for the British Eugenics Education Society. Inge was an invited speaker at the 1921 International Eugenics Conference, which was also endorsed by the Roman Catholic Archbishop of New York Patrick Joseph Hayes.[19]

Three International Eugenics Conferences presented a global venue for eugenists with meetings in 1912 in London, and in 1921 and 1932 in New York City. Eugenic policies were first implemented in the early 1900s in the United States.[20] It also took root in France, Germany, and Great Britain.[21] Later, in the 1920s and 30s, the eugenic policy of sterilizing certain mental patients was implemented in other countries, including Belgium,[22] Brazil,[23] Canada,[24] Japan, and Sweden.

In addition to being practiced in a number of countries, eugenics was internationally organized through the International Federation of Eugenics Organizations. Its scientific aspects were carried on through research bodies such as the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics, the Cold Spring Harbor Carnegie Institution for Experimental Evolution, and the Eugenics Record Office. Politically, the movement advocated measures such as sterilization laws. In its moral dimension, eugenics rejected the doctrine that all human beings are born equal, and redefined moral worth purely in terms of genetic fitness. Its racist elements included pursuit of a pure "Nordic race" or "Aryan" genetic pool and the eventual elimination of "less fit" races.

Early critics of the philosophy of eugenics included the American sociologist Lester Frank Ward,[33] the English writer G. K. Chesterton, the German-American anthropologist Franz Boas,[34] and Scottish tuberculosis pioneer and author Halliday Sutherland. Ward's 1913 article "Eugenics, Euthenics, and Eudemics", Chesterton's 1917 book Eugenics and Other Evils, and Boas' 1916 article "Eugenics" (published in The Scientific Monthly) were all harshly critical of the rapidly growing movement. Sutherland identified eugenists as a major obstacle to the eradication and cure of tuberculosis in his 1917 address "Consumption: Its Cause and Cure",[35] and criticism of eugenists and Neo-Malthusians in his 1921 book Birth Control led to a writ for libel from the eugenist Marie Stopes. Several biologists were also antagonistic to the eugenics movement, including Lancelot Hogben.[36] Other biologists such as J. B. S. Haldane and R. A. Fisher expressed skepticism that sterilization of "defectives" would lead to the disappearance of undesirable genetic traits.[37]

Among institutions, the Catholic Church was an opponent of state-enforced sterilizations.[38] Attempts by the Eugenics Education Society to persuade the British government to legalise voluntary sterilisation were opposed by Catholics and by the Labour Party.[page needed] The American Eugenics Society initially gained some Catholic supporters, but Catholic support declined following the 1930 papal encyclical Casti connubii.[19] In this, Pope Pius XI explicitly condemned sterilization laws: "Public magistrates have no direct power over the bodies of their subjects; therefore, where no crime has taken place and there is no cause present for grave punishment, they can never directly harm, or tamper with the integrity of the body, either for the reasons of eugenics or for any other reason."[39]

As a social movement, eugenics reached its greatest popularity in the early decades of the 20th century, when it was practiced around the world and promoted by governments, institutions, and influential individuals. Many countries enacted[40] various eugenics policies, including genetic screening, birth control, promoting differential birth rates, marriage restrictions, segregation (both racial segregation and sequestering the mentally ill), compulsory sterilization, forced abortions or forced pregnancies, and, ultimately, genocide.

The scientific reputation of eugenics started to decline in the 1930s, a time when Ernst Rüdin used eugenics as a justification for the racial policies of Nazi Germany. Adolf Hitler had praised and incorporated eugenic ideas in Mein Kampf in 1925 and, once he took power, emulated the eugenic legislation for the sterilization of "defectives" that had been pioneered in the United States. Some common early 20th-century eugenics methods involved identifying and classifying individuals and their families, including the poor, mentally ill, blind, deaf, developmentally disabled, promiscuous women, homosexuals, and racial groups (such as the Roma and Jews in Nazi Germany), as "degenerate" or "unfit", leading to their segregation or institutionalization, sterilization, euthanasia, and even their mass murder. The Nazi practice of euthanasia was carried out on hospital patients in the Aktion T4 centers such as Hartheim Castle.

By the end of World War II, many discriminatory eugenics laws were abandoned, having become associated with Nazi Germany.[43] H. G. Wells, who had called for "the sterilization of failures" in 1904,[44] stated in his 1940 book The Rights of Man: Or What are we fighting for? that among the human rights he believed should be available to all people was "a prohibition on mutilation, sterilization, torture, and any bodily punishment".[45] After World War II, the practice of "imposing measures intended to prevent births within [a population] group" fell within the definition of the new international crime of genocide, set out in the Convention on the Prevention and Punishment of the Crime of Genocide.[46] The Charter of Fundamental Rights of the European Union also proclaims "the prohibition of eugenic practices, in particular those aiming at selection of persons".[47] In spite of the decline in discriminatory eugenics laws, some government-mandated sterilization continued into the 21st century. During the ten years President Alberto Fujimori led Peru, from 1990 to 2000, allegedly 2,000 persons were involuntarily sterilized.[48] China maintained its coercive one-child policy until 2015, as well as a suite of other eugenics-based legislation to reduce population size and manage fertility rates of different populations.[49][50][51] In 2007 the United Nations reported coercive sterilisations and hysterectomies in Uzbekistan.[52] During the years 2005–06 to 2012–13, nearly one-third of the 144 California prison inmates who were sterilized did not give lawful consent to the operation.[53]

Developments in genetic, genomic, and reproductive technologies at the end of the 20th century have raised numerous questions regarding the ethical status of eugenics, effectively creating a resurgence of interest in the subject. Some, such as UC Berkeley sociologist Troy Duster, claim that modern genetics is a back door to eugenics.[54] This view is shared by White House Assistant Director for Forensic Sciences Tania Simoncelli, who stated in a 2003 publication by the Population and Development Program at Hampshire College that advances in pre-implantation genetic diagnosis (PGD) are moving society toward a "new era of eugenics", and that, unlike Nazi eugenics, modern eugenics is consumer-driven and market-based, "where children are increasingly regarded as made-to-order consumer products".[55] In a 2006 newspaper article, Richard Dawkins said that discussion regarding eugenics was inhibited by the shadow of Nazi misuse, to the extent that some scientists would not admit that breeding humans for certain abilities is at all possible. He believes that it is not physically different from breeding domestic animals for traits such as speed or herding skill. Dawkins felt that enough time had elapsed to at least ask what the ethical differences were between breeding for ability and training athletes or forcing children to take music lessons, though he could think of persuasive reasons to draw the distinction.[56]

In October 2015, the United Nations' International Bioethics Committee wrote that the ethical problems of human genetic engineering should not be confused with the ethical problems of the 20th-century eugenics movements; however, human genetic engineering remains problematic because it challenges the idea of human equality and opens up new forms of discrimination and stigmatization for those who do not want, or cannot afford, the enhancements.[57]

Transhumanism is often associated with eugenics, although most transhumanists holding similar views nonetheless distance themselves from the term "eugenics" (preferring "germinal choice" or "reprogenetics")[58] to avoid having their position confused with the discredited theories and practices of early-20th-century eugenic movements.

The term eugenics and its modern field of study were first formulated by Francis Galton in 1883,[59] drawing on the recent work of his half-cousin Charles Darwin.[60][61] Galton published his observations and conclusions in his book Inquiries into Human Faculty and Its Development.

The origins of the concept began with certain interpretations of Mendelian inheritance, and the theories of August Weismann. The word eugenics is derived from the Greek word eu ("good" or "well") and the suffix -genēs ("born"), and was coined by Galton in 1883 to replace the word "stirpiculture", which he had used previously but which had come to be mocked due to its perceived sexual overtones.[63] Galton defined eugenics as "the study of all agencies under human control which can improve or impair the racial quality of future generations".[64] Galton did not understand the mechanism of inheritance.[65]

Historically, the term has referred to everything from prenatal care for mothers to forced sterilization and euthanasia.[citation needed] To population geneticists, the term has included the avoidance of inbreeding without altering allele frequencies; for example, J. B. S. Haldane wrote that "the motor bus, by breaking up inbred village communities, was a powerful eugenic agent."[66] Debate as to what exactly counts as eugenics has continued to the present day.[67]

Edwin Black, journalist and author of War Against the Weak, claims eugenics is often deemed a pseudoscience because what is defined as a genetic improvement of a desired trait is often deemed a cultural choice rather than a matter that can be determined through objective scientific inquiry.[68] The most disputed aspect of eugenics has been the definition of "improvement" of the human gene pool, such as what is a beneficial characteristic and what is a defect. This aspect of eugenics has historically been tainted with scientific racism.

Early eugenists were mostly concerned with perceived intelligence factors that often correlated strongly with social class. Some of these early eugenists include Karl Pearson and Walter Weldon, who worked on this at University College London.[15]

Eugenics also had a place in medicine. In his lecture "Darwinism, Medical Progress and Eugenics", Karl Pearson said that everything concerning eugenics fell into the field of medicine, essentially treating the two as equivalent. He was supported in part by the fact that Francis Galton, the father of eugenics, also had medical training.[69]

Eugenic policies have been conceptually divided into two categories. Positive eugenics is aimed at encouraging reproduction among the genetically advantaged; for example, the reproduction of the intelligent, the healthy, and the successful. Possible approaches include financial and political stimuli, targeted demographic analyses, in vitro fertilization, egg transplants, and cloning.[70] The movie Gattaca provides a fictional example of positive eugenics done voluntarily. Negative eugenics is aimed at eliminating, through sterilization or segregation, those deemed physically, mentally, or morally "undesirable". This includes abortions, sterilization, and other methods of family planning.[70] Both positive and negative eugenics can be coercive; abortion for fit women, for example, was illegal in Nazi Germany.[71]

Jon Entine claims that eugenics simply means "good genes" and using it as synonym for genocide is an "all-too-common distortion of the social history of genetics policy in the United States." According to Entine, eugenics developed out of the Progressive Era and not "Hitler's twisted Final Solution".[72]

According to Richard Lynn, eugenics may be divided into two main categories based on the ways in which the methods of eugenics can be applied.[73]

The first major challenge to conventional eugenics based upon genetic inheritance was made in 1915 by Thomas Hunt Morgan, who demonstrated that genetic mutation could occur outside of inheritance with his discovery of a fruit fly (Drosophila melanogaster) with white eyes hatched from a family of red-eyed flies. Morgan claimed that this showed that major genetic changes occurred outside of inheritance and that the concept of eugenics based upon genetic inheritance was not completely scientifically accurate. Additionally, Morgan criticized the view that subjective traits, such as intelligence and criminality, were caused by heredity, because he believed that the definitions of these traits varied and that accurate work in genetics could only be done when the traits being studied were accurately defined.[109] In spite of Morgan's public rejection of eugenics, much of his genetic research was absorbed by eugenics.[110][111]

The heterozygote test is used for the early detection of recessive hereditary diseases, allowing for couples to determine if they are at risk of passing genetic defects to a future child.[112] The goal of the test is to estimate the likelihood of passing the hereditary disease to future descendants.[112]
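
To make the arithmetic behind such a test concrete, here is a minimal sketch of the standard Mendelian risk calculation (the function name and genotype encoding are invented for illustration; this is textbook genetics, not a clinical screening protocol):

```python
# Minimal sketch: Mendelian risk arithmetic for an autosomal recessive
# disease. 'A' = normal allele, 'a' = disease allele; the genotype encoding
# and function name are invented for illustration.
from itertools import product

def affected_risk(parent1: str, parent2: str) -> float:
    """Probability that a child is affected (genotype 'aa')."""
    outcomes = [a1 + a2 for a1, a2 in product(parent1, parent2)]
    return sum(1 for g in outcomes if g == "aa") / len(outcomes)

print(affected_risk("Aa", "Aa"))  # 0.25 -> two carriers: 1-in-4 risk per child
print(affected_risk("Aa", "AA"))  # 0.0  -> no affected children, but carriers possible
```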

Recessive traits can be severely reduced, but never eliminated unless the complete genetic makeup of all members of the pool were known. As only very few undesirable traits, such as Huntington's disease, are dominant, it can be argued that the practicality of "eliminating" traits is quite low.[citation needed]
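
A standard population-genetics result illustrates why (an illustrative sketch, not taken from the article): under complete selection against affected homozygotes with random mating, the recessive allele frequency q declines only as q' = q / (1 + q) per generation, because most copies of the allele persist in unaffected carriers:

```python
# Illustrative sketch (textbook population genetics): even if no affected
# (aa) individual ever reproduces, the allele frequency q falls only as
# q' = q / (1 + q), since the allele hides in unaffected Aa carriers.

def generations_to_halve(q0: float) -> int:
    q, generations = q0, 0
    while q > q0 / 2:
        q = q / (1 + q)
        generations += 1
    return generations

print(generations_to_halve(0.02))  # ~50 generations to halve a 2% allele frequency
```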

There are examples of eugenic acts that managed to lower the prevalence of recessive diseases, although without influencing the prevalence of heterozygote carriers of those diseases. The elevated prevalence of certain genetically transmitted diseases among the Ashkenazi Jewish population (Tay–Sachs, cystic fibrosis, Canavan's disease, and Gaucher's disease) has been decreased in current populations by the application of genetic screening.[113]

Pleiotropy occurs when one gene influences multiple, seemingly unrelated phenotypic traits, an example being phenylketonuria, which is a human disease that affects multiple systems but is caused by one gene defect.[114] Andrzej Pękalski, from the University of Wrocław, argues that eugenics can cause harmful loss of genetic diversity if a eugenics program selects for a pleiotropic gene that is also associated with a positive trait. Pękalski uses the example of a coercive government eugenics program that prohibits people with myopia from breeding but has the unintended consequence of also selecting against high intelligence, since the two go together.[115]
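
A toy simulation makes the point concrete (all numbers are assumed purely for illustration, and the shared-factor model is a deliberate simplification of pleiotropy): if the trait selected against shares a genetic component with a desirable trait, culling one also culls the other:

```python
# Toy simulation of the pleiotropy argument; the loading 'a' and the
# breeding threshold are assumed values chosen only for illustration.
import random

random.seed(1)
a = 0.6                  # both traits load on one shared (pleiotropic) factor
b = (1 - a * a) ** 0.5   # residual weight, so each trait has unit variance

population = []
for _ in range(100_000):
    shared = random.gauss(0, 1)                   # pleiotropic genetic component
    myopia = a * shared + b * random.gauss(0, 1)  # trait the program selects against
    talent = a * shared + b * random.gauss(0, 1)  # positively valued, correlated trait
    population.append((myopia, talent))

# Coercive rule: individuals above a myopia threshold are barred from breeding.
breeders = [t for m, t in population if m < 1.0]

print(sum(t for _, t in population) / len(population))  # ~0.00: mean talent, everyone
print(sum(breeders) / len(breeders))                    # < 0: talent lost as a side effect
```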

Eugenic policies could also lead to loss of genetic diversity, in which case a culturally accepted "improvement" of the gene pool could very likely result in extinction, as evidenced in numerous instances in isolated island populations (e.g., the dodo, Raphus cucullatus, of Mauritius), due to increased vulnerability to disease, reduced ability to adapt to environmental change, and other factors both known and unknown. A long-term, species-wide eugenics plan might lead to a similar scenario, because the elimination of traits deemed undesirable would, by definition, reduce genetic diversity.[116]

Edward M. Miller claims that, in any one generation, any realistic program should make only minor changes in a fraction of the gene pool, giving plenty of time to reverse direction if unintended consequences emerge, reducing the likelihood of the elimination of desirable genes.[117] Miller also argues that any appreciable reduction in diversity is so far in the future that little concern is needed for now.[117]

While the science of genetics has increasingly provided means by which certain characteristics and conditions can be identified and understood, given the complexity of human genetics, culture, and psychology, there is at this point no agreed objective means of determining which traits might be ultimately desirable or undesirable. Some diseases, such as sickle-cell disease and cystic fibrosis, respectively confer resistance to malaria and cholera when a single copy of the recessive allele is contained within the genotype of the individual. Reducing the incidence of sickle-cell disease genes in Africa, where malaria is a common and deadly disease, could indeed have extremely negative net consequences.

However, some genetic diseases, such as haemochromatosis, can increase susceptibility to illness and cause physical deformities and other dysfunctions, which provides some incentive for people to reconsider some elements of eugenics.

Autistic people have advocated a shift in the perception of autism spectrum disorders as complex syndromes rather than diseases that must be cured. Proponents of this view reject the notion that there is an "ideal" brain configuration and that any deviation from the norm is pathological; they promote tolerance for what they call neurodiversity.[118] Simon Baron-Cohen argues that the genes behind the combination of abilities seen in Asperger syndrome have operated throughout recent human evolution and have made remarkable contributions to human history.[119] The possible reduction of autism rates through selection against the genetic predisposition to autism is a significant political issue in the autism rights movement, which claims that autism is a part of neurodiversity.

Many culturally Deaf people oppose attempts to cure deafness, believing instead that deafness should be considered a defining cultural characteristic, not a disease.[120][121][122] Some people have started advocating the idea that deafness brings about certain advantages, often termed "Deaf Gain".[123][124]

The societal and political consequences of eugenics call for a place in the discussion of the ethics behind the eugenics movement.[125] Many of the ethical concerns regarding eugenics arise from its controversial past, prompting a discussion of what place, if any, it should have in the future. Advances in science have changed eugenics. In the past, eugenics had more to do with sterilization and enforced reproduction laws.[126] Now, in the age of a progressively mapped genome, embryos can be tested for susceptibility to disease, gender, and genetic defects, and alternative methods of reproduction such as in vitro fertilization are becoming more common.[127] Therefore, eugenics is no longer ex post facto regulation of the living but instead preemptive action on the unborn.[128]

With this change, however, there are ethical concerns which lack adequate attention, and which must be addressed before eugenic policies can be properly implemented in the future. Sterilized individuals, for example, could volunteer for the procedure, albeit under incentive or duress, or at least voice their opinion. The unborn fetus on which these new eugenic procedures are performed cannot speak out, as it lacks the capacity to consent or to express an opinion.[129] Philosophers disagree about the proper framework for reasoning about such actions, which change the very identity and existence of future persons.[130]

A common criticism of eugenics is that "it inevitably leads to measures that are unethical".[131] Some fear future "eugenics wars" as the worst-case scenario: the return of coercive state-sponsored genetic discrimination and human rights violations such as compulsory sterilization of persons with genetic defects, the killing of the institutionalized and, specifically, segregation and genocide of races perceived as inferior.[132] Health law professor George Annas and technology law professor Lori Andrews are prominent advocates of the position that the use of these technologies could lead to such human-posthuman caste warfare.[133][134]

In his 2003 book Enough: Staying Human in an Engineered Age, environmental ethicist Bill McKibben argued at length against germinal choice technology and other advanced biotechnological strategies for human enhancement. He claims that it would be morally wrong for humans to tamper with fundamental aspects of themselves (or their children) in an attempt to overcome universal human limitations, such as vulnerability to aging, maximum life span and biological constraints on physical and cognitive ability. Attempts to "improve" themselves through such manipulation would remove limitations that provide a necessary context for the experience of meaningful human choice. He claims that human lives would no longer seem meaningful in a world where such limitations could be overcome technologically. Even the goal of using germinal choice technology for clearly therapeutic purposes should be relinquished, since it would inevitably produce temptations to tamper with such things as cognitive capacities. He argues that it is possible for societies to benefit from renouncing particular technologies, using as examples Ming China, Tokugawa Japan and the contemporary Amish.[135]

Some, such as Nathaniel C. Comfort from Johns Hopkins University, claim that the change from state-led reproductive-genetic decision-making to individual choice has moderated the worst abuses of eugenics by transferring the decision-making from the state to the patient and their family.[136] Comfort suggests that "the eugenic impulse drives us to eliminate disease, live longer and healthier, with greater intelligence, and a better adjustment to the conditions of society; and the health benefits, the intellectual thrill and the profits of genetic bio-medicine are too great for us to do otherwise."[137] Others, such as bioethicist Stephen Wilkinson of Keele University and Honorary Research Fellow Eve Garrard at the University of Manchester, claim that some aspects of modern genetics can be classified as eugenics, but that this classification does not inherently make modern genetics immoral. In a co-authored publication by Keele University, they stated that "[e]ugenics doesn't seem always to be immoral, and so the fact that PGD, and other forms of selective reproduction, might sometimes technically be eugenic, isn't sufficient to show that they're wrong."[138]

In their 2000 book From Chance to Choice: Genetics and Justice, bioethicists Allen Buchanan, Dan Brock, Norman Daniels and Daniel Wikler argued that liberal societies have an obligation to encourage as wide an adoption of eugenic enhancement technologies as possible (so long as such policies do not infringe on individuals' reproductive rights or exert undue pressures on prospective parents to use these technologies) in order to maximize public health and minimize the inequalities that may result from both natural genetic endowments and unequal access to genetic enhancements.[139]

The original position, a hypothetical situation developed by the American philosopher John Rawls, has been used as an argument for negative eugenics.[140][141]

See the rest here:

Eugenics - Wikipedia

Posted in Eugenics | Comments Off on Eugenics – Wikipedia

Member states of NATO – Wikipedia

Posted: at 4:22 am

NATO (the North Atlantic Treaty Organization) is an international alliance that consists of 28 member states from North America and Europe. It was established at the signing of the North Atlantic Treaty on 4 April 1949. Article Five of the treaty states that if an armed attack occurs against one of the member states, it should be considered an attack against all members, and other members shall assist the attacked member, with armed forces if necessary.[1]

Of the 28 member countries, two are located in North America (Canada and the United States), 25 are European countries, and Turkey spans Eurasia. All members have militaries, except for Iceland, which does not have a typical army (but does have a coast guard and a small unit of civilian specialists for NATO operations). Three of NATO's members are nuclear weapons states: France, the United Kingdom, and the United States. NATO had 12 founding member states; between 18 February 1952 and 6 May 1955 it added three more members, and a fourth on 30 May 1982. After the end of the Cold War, NATO added 12 more member states (10 former Warsaw Pact members and 2 former Yugoslav republics) between 12 March 1999 and 1 April 2009.

NATO has added new members six times since its founding in 1949, and since 2009 NATO has had 28 members. Twelve countries were part of the founding of NATO: Belgium, Canada, Denmark, France, Iceland, Italy, Luxembourg, the Netherlands, Norway, Portugal, the United Kingdom, and the United States. In 1952, Greece and Turkey became members of the Alliance, joined later by West Germany (in 1955) and Spain (in 1982). In 1990, with the reunification of Germany, NATO grew to include the former country of East Germany. Between 1994 and 1997, wider forums for regional cooperation between NATO and its neighbors were set up, including the Partnership for Peace, the Mediterranean Dialogue initiative, and the Euro-Atlantic Partnership Council. In 1997, three former Warsaw Pact countries, Hungary, the Czech Republic, and Poland, were invited to join NATO. After this fourth enlargement in 1999, the Vilnius Group, comprising the Baltic states and seven other Eastern European countries, formed in May 2000 to cooperate and lobby for further NATO membership. Seven of these countries joined in the fifth enlargement in 2004. Albania and Croatia joined in the sixth enlargement in 2009.

Go here to see the original:
Member states of NATO - Wikipedia

Posted in NATO | Comments Off on Member states of NATO – Wikipedia

Genome – Wikipedia

Posted: at 4:19 am

In modern molecular biology and genetics, a genome is the genetic material of an organism. It consists of DNA (or RNA in RNA viruses). The genome includes the genes (the coding regions), the noncoding DNA,[1] and the genomes of the mitochondria[2] and chloroplasts.

The term genome was created in 1920 by Hans Winkler,[3] professor of botany at the University of Hamburg, Germany. The Oxford Dictionary suggests the name is a blend of the words gene and chromosome;[4] however, see omics for a more thorough discussion. A few related -ome words already existed, such as biome and rhizome, forming a vocabulary into which genome fits systematically.[5]

Some organisms have multiple copies of chromosomes: diploid, triploid, tetraploid, and so on. In classical genetics, in a sexually reproducing organism (typically eukarya), the gamete has half the number of chromosomes of the somatic cell, and the genome is a full set of chromosomes in a diploid cell. The halving of the genetic material in gametes is accomplished by the segregation of homologous chromosomes during meiosis.[6] In haploid organisms, including bacteria, archaea, organelles such as mitochondria and chloroplasts, and viruses that similarly contain genes, the single chain, or set of circular or linear chains of DNA (or RNA for some viruses), likewise constitutes the genome. The term genome can be applied specifically to mean what is stored on a complete set of nuclear DNA (i.e., the "nuclear genome"), but it can also be applied to what is stored within organelles that contain their own DNA, as with the "mitochondrial genome" or the "chloroplast genome". Additionally, the genome can comprise non-chromosomal genetic elements such as viruses, plasmids, and transposable elements.[7]

Typically, when it is said that the genome of a sexually reproducing species has been "sequenced", it refers to a determination of the sequences of one set of autosomes and one of each type of sex chromosome, which together represent both of the possible sexes. Even in species that exist in only one sex, what is described as a "genome sequence" may be a composite read from the chromosomes of various individuals. Colloquially, the phrase "genetic makeup" is sometimes used to signify the genome of a particular individual or organism.[citation needed] The study of the global properties of genomes of related organisms is usually referred to as genomics, which distinguishes it from genetics, which generally studies the properties of single genes or groups of genes.

Both the number of base pairs and the number of genes vary widely from one species to another, and there is only a rough correlation between the two (an observation known as the C-value paradox). At present, the highest known number of genes is around 60,000, for the protozoan causing trichomoniasis (see List of sequenced eukaryotic genomes), almost three times as many as in the human genome.

An analogy is sometimes drawn between the human genome stored on DNA and instructions stored in a book.

In 1976, Walter Fiers at the University of Ghent (Belgium) was the first to establish the complete nucleotide sequence of a viral RNA-genome (Bacteriophage MS2). The next year Fred Sanger completed the first DNA-genome sequence: Phage Φ-X174, of 5,386 base pairs.[8] The first complete genome sequences among all three domains of life were released within a short period during the mid-1990s: The first bacterial genome to be sequenced was that of Haemophilus influenzae, completed by a team at The Institute for Genomic Research in 1995. A few months later, the first eukaryotic genome was completed, with sequences of the 16 chromosomes of budding yeast Saccharomyces cerevisiae published as the result of a European-led effort begun in the mid-1980s. The first genome sequence for an archaeon, Methanococcus jannaschii, was completed in 1996, again by The Institute for Genomic Research.

The development of new technologies has made sequencing dramatically easier and cheaper, and the number of complete genome sequences is growing rapidly. The US National Institutes of Health maintains one of several comprehensive databases of genomic information.[9] The thousands of completed genome sequencing projects include those for rice, a mouse, the plant Arabidopsis thaliana, the puffer fish, and the bacterium E. coli. In December 2013, scientists first sequenced the entire genome of a Neanderthal, an extinct species of human. The genome was extracted from the toe bone of a 130,000-year-old Neanderthal found in a Siberian cave.[10][11]

New sequencing technologies, such as massive parallel sequencing have also opened up the prospect of personal genome sequencing as a diagnostic tool, as pioneered by Manteia Predictive Medicine. A major step toward that goal was the completion in 2007 of the full genome of James D. Watson, one of the co-discoverers of the structure of DNA.[12]

Whereas a genome sequence lists the order of every DNA base in a genome, a genome map identifies the landmarks. A genome map is less detailed than a genome sequence and aids in navigating around the genome. The Human Genome Project was organized to map and to sequence the human genome. A fundamental step in the project was the release of a detailed genomic map by Jean Weissenbach and his team at the Genoscope in Paris.[13][14]

Reference genome sequences and maps continue to be updated, removing errors and clarifying regions of high allelic complexity.[15] The decreasing cost of genomic mapping has permitted genealogical sites to offer it as a service,[16] to the extent that one may submit one's genome to crowdsourced scientific endeavours such as DNA.land at the New York Genome Center, an example both of the economies of scale and of citizen science.[17]

Genome composition is used to describe the makeup of the contents of a haploid genome, including genome size and the detailed proportions of non-repetitive and repetitive DNA. By comparing the compositions of different genomes, scientists can better understand the evolutionary history of a given genome.

When discussing genome composition, one should distinguish between prokaryotes and eukaryotes, as there are significant differences in content structure. In prokaryotes, most of the genome (85–90%) is non-repetitive DNA, meaning it consists mainly of coding DNA, while non-coding regions make up only a small part.[18] On the contrary, eukaryotes feature the exon-intron organization of protein-coding genes, and the variation of repetitive DNA content in eukaryotes is also extremely high. In mammals and plants, the major part of the genome is composed of repetitive DNA.[19]

Most biological entities that are more complex than a virus sometimes or always carry additional genetic material besides that which resides in their chromosomes. In some contexts, such as sequencing the genome of a pathogenic microbe, "genome" is meant to include information stored on this auxiliary material, which is carried in plasmids. In such circumstances, "genome" describes all of the genes and information on non-coding DNA that have the potential to be present.

In eukaryotes such as plants, protozoa and animals, however, "genome" carries the typical connotation of only information on chromosomal DNA. So although these organisms contain chloroplasts or mitochondria that have their own DNA, the genetic information contained in DNA within these organelles is not considered part of the genome. In fact, mitochondria are sometimes said to have their own genome, often referred to as the "mitochondrial genome". The DNA found within the chloroplast may be referred to as the "plastome".

Genome size is the total number of DNA base pairs in one copy of a haploid genome. The human nuclear genome comprises approximately 3.2 billion nucleotides of DNA, divided into 24 linear molecules, the shortest 50,000,000 nucleotides in length and the longest 260,000,000 nucleotides, each contained in a different chromosome.[21] Genome size is positively correlated with morphological complexity among prokaryotes and lower eukaryotes; however, from mollusks and other higher eukaryotes onward, this correlation no longer holds.[19][22] This phenomenon also indicates the strong influence that repetitive DNA exerts on genomes.

Since genomes are very complex, one research strategy is to reduce the number of genes in a genome to the bare minimum and still have the organism in question survive. There is experimental work being done on minimal genomes for single cell organisms as well as minimal genomes for multi-cellular organisms (see Developmental biology). The work is both in vivo and in silico.[23][24]

Here the original article presents a table of some significant or representative genomes, of which only the citations survived extraction (sources include "Initial sequencing and analysis of the human genome"[61] and refs [30][31][32]); see the See also section for lists of sequenced genomes.

The proportion of non-repetitive DNA is calculated by dividing the length of non-repetitive DNA by the genome size. Protein-coding genes and RNA-coding genes are generally non-repetitive DNA.[66] A bigger genome does not mean more genes, and the proportion of non-repetitive DNA decreases with increasing genome size in higher eukaryotes.[19]

The proportion of non-repetitive DNA can vary greatly between species. Prokaryotes such as E. coli contain almost exclusively non-repetitive DNA, and lower eukaryotes such as C. elegans and the fruit fly still possess more non-repetitive DNA than repetitive DNA.[19][67] Higher eukaryotes tend to have more repetitive DNA than non-repetitive. In some plants and amphibians, the proportion of non-repetitive DNA is no more than 20%, making it a minority component.[19]

The proportion of repetitive DNA is calculated by dividing the length of repetitive DNA by the genome size. There are two categories of repetitive DNA in the genome: tandem repeats and interspersed repeats.[68]
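
As a quick illustration of the two calculations just described, here is a minimal sketch (the function and the example lengths are invented round numbers, not measurements from any particular genome assembly):

```python
# Minimal sketch of the two proportions described above; the input lengths
# are illustrative values only.

def genome_composition(genome_size_bp: int, repetitive_bp: int) -> dict:
    """Return repetitive and non-repetitive DNA as fractions of genome size."""
    return {
        "repetitive": repetitive_bp / genome_size_bp,
        "non_repetitive": (genome_size_bp - repetitive_bp) / genome_size_bp,
    }

# E.g., a 3.2 Gb genome with half of it annotated as repetitive:
print(genome_composition(3_200_000_000, 1_600_000_000))
# {'repetitive': 0.5, 'non_repetitive': 0.5}
```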

Tandem repeats are usually caused by slippage during replication, unequal crossing-over, and gene conversion.[69] Satellite DNA and microsatellites are forms of tandem repeats in the genome.[70] Although tandem repeats account for a significant proportion of the genome, the largest proportion in mammals is the other type, interspersed repeats. A naive scanner for microsatellite-style repeats is sketched below to make the term concrete.
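
The following is an illustrative sketch only (the function name and sequence are invented; real repeat annotation uses dedicated tools such as RepeatMasker). It finds runs of a short motif repeated back-to-back:

```python
# Naive, illustrative scanner for microsatellite-style tandem repeats
# (a short motif repeated consecutively).
import re

def find_tandem_repeats(seq: str, motif_len: int = 2, min_copies: int = 3):
    """Yield (start, motif, copies) for consecutive repeats of a motif."""
    pattern = re.compile(r"(.{%d})\1{%d,}" % (motif_len, min_copies - 1))
    for m in pattern.finditer(seq):
        yield m.start(), m.group(1), len(m.group(0)) // motif_len

seq = "GGACACACACACTTGATGATGATCC"
print(list(find_tandem_repeats(seq)))               # [(2, 'AC', 5)]
print(list(find_tandem_repeats(seq, motif_len=3)))  # [(14, 'GAT', 3)]
```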

Interspersed repeats mainly come from transposable elements (TEs), but they also include some protein-coding gene families and pseudogenes. Transposable elements are able to integrate into the genome at another site within the cell.[18][71] TEs are believed to be an important driving force of genome evolution in higher eukaryotes.[72] TEs can be classified into two categories, Class 1 (retrotransposons) and Class 2 (DNA transposons).[71]

Retrotransposons can be transcribed into RNA, which is then duplicated at another site in the genome.[73] Retrotransposons can be divided into long terminal repeat (LTR) and non-long terminal repeat (non-LTR) retrotransposons.[72]

DNA transposons generally move by "cut and paste" within the genome, though duplication has also been observed. Class 2 TEs do not use RNA as an intermediate; they are common in bacteria but have also been found in metazoans.[72]
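The "copy and paste" versus "cut and paste" distinction can be caricatured on a genome modeled as a Python list of named elements. This is only an illustrative sketch with hypothetical positions, not a biological simulation:

# Illustrative sketch of the two transposition modes described above.
def retrotranspose(genome, src, dest):
    # Class 1, "copy and paste": duplicated via an RNA intermediate,
    # so the original copy stays in place.
    genome = genome.copy()
    genome.insert(dest, genome[src])
    return genome

def dna_transpose(genome, src, dest):
    # Class 2, "cut and paste": excised and reinserted,
    # so the copy number is unchanged.
    genome = genome.copy()
    element = genome.pop(src)
    genome.insert(dest, element)
    return genome

toy = ["geneA", "LINE1", "geneB", "Tn5", "geneC"]
print(retrotranspose(toy, src=1, dest=4))  # two copies of LINE1
print(dna_transpose(toy, src=3, dest=0))   # Tn5 moved, still one copy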

Genomes are more than the sum of an organism's genes and have traits that may be measured and studied without reference to the details of any particular genes and their products. Researchers compare traits such as chromosome number (karyotype), genome size, gene order, codon usage bias, and GC-content to determine what mechanisms could have produced the great variety of genomes that exist today (for recent overviews, see Brown 2002; Saccone and Pesole 2003; Benfey and Protopapas 2004; Gibson and Muse 2004; Reese 2004; Gregory 2005).
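Of the genome-wide traits listed above, GC-content is the simplest to compute directly from sequence. A minimal Python sketch, ignoring ambiguous bases such as N:

# Minimal sketch: GC-content, one of the genome-wide traits compared above.
def gc_content(sequence: str) -> float:
    seq = sequence.upper()
    gc = seq.count("G") + seq.count("C")
    at = seq.count("A") + seq.count("T")
    return gc / (gc + at)

print(f"{gc_content('ATGCGCGTTA'):.1%}")  # 50.0%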

Duplications play a major role in shaping the genome. Duplication may range from extension of short tandem repeats, to duplication of a cluster of genes, and all the way to duplication of entire chromosomes or even entire genomes. Such duplications are probably fundamental to the creation of genetic novelty.

Horizontal gene transfer is invoked to explain why small portions of the genomes of two otherwise very distantly related organisms are often extremely similar. Horizontal gene transfer seems to be common among many microbes. Eukaryotic cells also appear to have experienced a transfer of some genetic material from their chloroplast and mitochondrial genomes to their nuclear chromosomes.

See more here:
Genome - Wikipedia

Posted in Genome | Comments Off on Genome – Wikipedia

Libertarianism.org – Official Site

Posted: at 4:19 am

Justice, prosperity, responsibility, tolerance, cooperation, and peace.

Many people believe that liberty is the core political value of modern civilization itself, the one that gives substance and form to all the other values of social life. They're called libertarians.

Libertarianism.org presents essays:

By Walt Whitman in 1871: "America, and with some definite instinct why and for what she has arisen - is, now and here, with wonderful step, journeying through Time."

By Walt Whitman in 1871: "A single new thought - fit for the time, put in shape by some great literatus - [may cause change] greater than the longest and bloodiest war."

By William Godwin in 1834: "Numa met the goddess Egeria from time to time in a cave; and by her was instructed in the institutions he should give to the Romans."

Featured Guide

What do libertarians think about issues in public policy? Jeffrey Miron applies economic thinking to a variety of policy questions, building a picture of what a libertarian world might look like.

Jeffrey A. Miron is a Senior Lecturer and Director of Undergraduate Studies in Harvard's Economics Department.

Read the original post:
Libertarianism.org - Official Site

Posted in Libertarianism | Comments Off on Libertarianism.org – Official Site

Futurist – Wikipedia

Posted: at 4:18 am

Futurists or futurologists are scientists and social scientists whose specialty is futurology: the attempt to systematically explore predictions and possibilities about the future and how they can emerge from the present, whether that of human society in particular or of life on Earth in general.

The term "futurist" most commonly refers to people such as authors, consultants, organizational leaders and others who engage in interdisciplinary and systems thinking to advise private and public organizations on such matters as diverse global trends, possible scenarios, emerging market opportunities and risk management. Futurist is not in the sense of the art movement futurism.

The Oxford English Dictionary identifies the earliest use of the term futurism in English as 1842, referring, in a theological context, to the Christian eschatological tendency of that time. The next recorded use is the label adopted by the Italian and Russian futurists, the artistic, literary and political movements of the 1920s and 1930s which sought to reject the past and fervently embrace speed, technology and, often, violent change.

Visionary writers such as Jules Verne, Edward Bellamy, and H. G. Wells were not in their day characterized as futurists. The term futurology in its contemporary sense was first coined in the mid-1940s by the German professor Ossip K. Flechtheim, who proposed a new science of probability. Flechtheim argued that even if systematic forecasting did no more than unveil the subset of statistically highly probable processes of change and chart their advance, it would still be of crucial social value.[1]

In the mid-1940s the first professional "futurist" consulting institutions like RAND and SRI began to engage in long-range planning, systematic trend watching, scenario development, and visioning, at first under World War II military and government contract and, beginning in the 1950s, for private institutions and corporations. The period from the late 1940s to the mid-1960s laid the conceptual and methodological foundations of the modern futures studies field. Bertrand de Jouvenel's The Art of Conjecture in 1963 and Dennis Gabor's Inventing the Future in 1964 are considered key early works, and the first U.S. university course devoted entirely to the future was taught by the late Alvin Toffler at The New School in 1966.[2]

More generally, the label includes such disparate lay, professional, and academic groups as visionaries, foresight consultants, corporate strategists, policy analysts, cultural critics, planners, marketers, forecasters, prediction market developers, roadmappers, operations researchers, investment managers, actuaries, and other risk analyzers, and future-oriented individuals educated in every academic discipline, including anthropology, complexity studies, computer science, economics, engineering, urban design, evolutionary biology, history, management, mathematics, philosophy, physical sciences, political science, psychology, sociology, systems theory, technology studies, and other disciplines.

"Futures studies"sometimes referred to as futurology, futures research, and foresightcan be summarized as being concerned with "three P's and a W", i.e. "possible, probable, and preferable" futures, plus "wildcards", which are low-probability, high-impact events, should they occur. Even with high-profile, probable events, such as the fall of telecommunications costs, the growth of the internet, or the aging demographics of particular countries, there is often significant uncertainty in the rate or continuation of a trend. Thus a key part of futures analysis is the managing of uncertainty and risk.[3]

Not all futurists engage in the practice of futurology as generally defined. Pre-conventional futurists (see below) would generally not. And while religious futurists, astrologers, occultists, New Age divinists, etc. use methodologies that include study, none of their personal revelation or belief-based work would fall within a consensus definition of futurology as used in academics or by futures studies professionals.

Several authors have become recognized as futurists. They research trends, particularly in technology, and write up their observations, conclusions, and predictions. In earlier eras, many futurists were based at academic institutions; John McHale, author of The Future of the Future, published a 'Futures Directory' and directed a think tank called The Centre For Integrative Studies at a university. Other futurists have started consulting groups or earn money as speakers, with examples including Alvin Toffler, John Naisbitt and Patrick Dixon. Frank Feather is a business speaker who presents himself as a pragmatic futurist. Some futurists have commonalities with science fiction, and some science-fiction writers, such as Arthur C. Clarke, are known as futurists.[citation needed] In the introduction to The Left Hand of Darkness, Ursula K. Le Guin distinguished futurists from novelists, writing that the study of the future is the business of prophets, clairvoyants, and futurists. In her words, "a novelist's business is lying".

A survey of 108 futurists[4] found the following shared assumptions:

Read this article:
Futurist - Wikipedia

Posted in Futurist | Comments Off on Futurist – Wikipedia