Daily Archives: June 12, 2016

Dietary supplement – Wikipedia, the free encyclopedia

Posted: June 12, 2016 at 8:19 pm

"Food supplement" redirects here. For food additions that alter the flavor, color or longevity of food, see Food additive. Flight through a CT image stack of a multivitamin tablet "A-Z" by German company Abtei.

A dietary supplement is intended to provide nutrients that may otherwise not be consumed in sufficient quantities.

Supplements as generally understood include vitamins, minerals, fiber, fatty acids, or amino acids, among other substances. U.S. authorities define dietary supplements as foods, while elsewhere they may be classified as drugs or other products.

There are more than 50,000 dietary supplements available. More than half of the U.S. adult population (53–55%) consumes dietary supplements, the most common being multivitamins.[1][2]

These products are not intended to prevent or treat any disease and in some circumstances are dangerous, according to the U.S. National Institutes of Health. For those who fail to consume a balanced diet, the agency says that certain supplements "may have value."[3]

Most supplements should be avoided; as a rule, people should not take micronutrient supplements unless a deficiency has been clearly demonstrated,[4] and those with a deficiency should first consult a doctor.[5] An exception is vitamin D, which is recommended in Nordic countries[6] due to weak sunlight.

According to the United States Food and Drug Administration (FDA), dietary supplements are products which are not pharmaceutical drugs, food additives like spices or preservatives, or conventional food, and which also meet certain further criteria.[7]

In the United States, the FDA has different monitoring procedures for substances depending on whether they are presented as drugs, food additives, food, or dietary supplements.[7] Dietary supplements are eaten or taken by mouth, and are regulated in United States law as a type of food rather than a type of drug.[8] Like food and unlike drugs, no government approval is required to make or sell dietary supplements; the manufacturer, not the government, checks the safety of dietary supplements; and rather than requiring a risk–benefit analysis to prove that the product can be sold, as for a drug, risk–benefit analysis is used only to petition that a food or dietary supplement is unsafe and should be removed from the market.[7]

The intended use of dietary supplements is to ensure that a person gets enough essential nutrients.[9]

Dietary supplements should not be used to treat any disease or as preventive healthcare.[10] An exception to this recommendation is the appropriate use of vitamins.[10]

Dietary supplements are unnecessary if one eats a balanced diet.[11]

Supplements may create harm in several ways, including over-consumption, particularly of minerals and fat-soluble vitamins which can build up in the body.[12] The products may also cause harm related to their rapid absorption in a short period of time, quality issues such as contamination, or by adverse interactions with other foods and medications.[13]

There are many types of dietary supplements.

A vitamin is an organic compound required by an organism as a vital nutrient in limited amounts.[14] An organic chemical compound (or related set of compounds) is called a vitamin when it cannot be synthesized in sufficient quantities by an organism, and must be obtained from the diet. Thus, the term is conditional both on the circumstances and on the particular organism. For example, ascorbic acid (vitamin C) is a vitamin for humans, but not for most other animals. Supplementation is important for the treatment of certain health problems, but there is little evidence of benefit when used by those who are otherwise healthy.[15]

Dietary elements, commonly called "dietary minerals" or "minerals", are the chemical elements required by living organisms, other than the four elements carbon, hydrogen, nitrogen, and oxygen present in common organic molecules. The term "dietary mineral" is archaic, as the substances it refers to are chemical elements rather than actual minerals.

Herbal medicine is the use of plants for medicinal purposes. Plants have been the basis for medical treatments through much of human history, and such traditional medicine is still widely practiced today. Modern medicine recognizes herbalism as a form of alternative medicine, as the practice of herbalism is not strictly based on evidence gathered using the scientific method. Modern medicine does, however, make use of many plant-derived compounds as the basis for evidence-tested pharmaceutical drugs, and phytotherapy works to apply modern standards of effectiveness testing to herbs and medicines that are derived from natural sources. The scope of herbal medicine is sometimes extended to include fungal and bee products, as well as minerals, shells and certain animal parts.

Amino acids are biologically important organic compounds composed of amine (-NH2) and carboxylic acid (-COOH) functional groups, along with a side-chain specific to each amino acid. The key elements of an amino acid are carbon, hydrogen, oxygen, and nitrogen, though other elements are found in the side-chains of certain amino acids.

Amino acids can be divided into three categories: essential amino acids, non-essential amino acids, and conditional amino acids. Essential amino acids cannot be made by the body, and must be supplied by food. Non-essential amino acids are made by the body from essential amino acids or in the normal breakdown of proteins. Conditional amino acids are usually not essential, except in times of illness, stress, or for someone challenged with a lifelong medical condition.

Essential fatty acids, or EFAs, are fatty acids that humans and other animals must ingest because the body requires them for good health but cannot synthesize them.[16] The term "essential fatty acid" refers to fatty acids required for biological processes but does not include the fats that only act as fuel.

Bodybuilding supplements are dietary supplements commonly used by those involved in bodybuilding and athletics. Bodybuilding supplements may be used to replace meals, enhance weight gain, promote weight loss, or improve athletic performance. Among the most widely used are vitamin supplements, protein drinks, branched-chain amino acids (BCAAs), glutamine, essential fatty acids, meal replacement products, creatine, weight loss products, and testosterone boosters. Supplements are sold either as single-ingredient preparations or in the form of "stacks", proprietary blends of various supplements marketed as offering synergistic advantages. While many bodybuilding supplements are also consumed by the general public, their salience and frequency of use may differ when used specifically by bodybuilders.

According to the University of Helsinki food safety professor Marina Heinonen, more than 90% of dietary supplement health claims are incorrect.[17] In addition, ingredients listed on labels have been found to differ from the actual contents. For example, Consumer Reports reported unsafe levels of arsenic, cadmium, lead and mercury in several of the protein powders it tested.[18] The CBC also found that protein spiking (the addition of amino acid filler to manipulate analysis) was not uncommon;[19] however, many of the companies involved disputed the claim.[19]

The number of incidents of liver damage from dietary supplements has tripled in a decade. Most of the products causing that effect were bodybuilding supplements. Some of the victims required liver transplants and some died. A third of the supplements involved contained unlisted steroids.[20]

Mild to severe toxicity has occurred on many occasions due to dietary supplements, even when the active ingredients were essential nutrients such as vitamins, minerals or amino acids. This has been a result of adulteration of the product, excessive usage on the part of the consumer, or use by persons at risk for the development of adverse effects. In addition, a number of supplements contain psychoactive drugs, whether of natural or synthetic origin.[21][22]

BMC Medicine published a study on herbal supplements in 2013. Most of the supplements studied were of low quality, one third did not contain the active ingredient(s) claimed, and one third contained unlisted substances.[23][24]

An investigation by the New York Attorney General's office analyzed 78 bottles of herbal supplements from Walmart, Target, Walgreens and GNC stores in New York State using DNA barcoding, a method used to detect labeling fraud in the seafood industry. Only about 20% contained the ingredient on the label.[25][26]

Some supplements were contaminated with rodent feces and urine.[27]

Only 0.3% (roughly 165) of the 55,000 dietary supplements on the U.S. market have been studied for their common side effects.[20]

In the early 20th century there were great hopes for supplements, but later research has shown these hopes to be unfounded.[28]

"Antioxidant paradox" means the fact that even though fruits and vegetables are related to decreases in mortality, cardiovascular diseases and cancers, antioxidant nutrients do not really seem to help. According to one theory, this is because some other nutrients would be the important ones.[29][30] Multivitamin pills have neither proved useful[4] but may even increase mortality.[31]

Omega-3 fatty acids and fish oils from food are very healthy, but fish oil supplements have been recommended only for those who suffer from coronary artery disease and do not eat fish. Recent research has made the benefits of the supplements questionable even for that group. Contrary to claims, fish oils do not decrease cholesterol and may even raise "bad" LDL cholesterol and cause other harms. The use of cod liver oil has also been criticized by scientists.[32]

Alice Lichtenstein, DSc, chairwoman of the American Heart Association (AHA), says that even though omega-3 fatty acids from foods are healthy, the same has not been shown in studies of omega-3 supplements. Therefore, one should not take fish oil supplements unless one suffers from heart disease.[33]

The regulation of food and dietary supplements by the U.S. Food and Drug Administration ("FDA") is governed by various statutes enacted by the United States Congress and interpreted by the agency. Pursuant to the Federal Food, Drug, and Cosmetic Act ("the Act") and accompanying legislation, the FDA has authority to oversee the quality of substances sold as food in the United States, and to monitor claims made in labeling about both the composition and the health benefits of foods.

Substances which the FDA regulates as food are subdivided into various categories, including foods, food additives, added substances (man-made substances which are not intentionally introduced into food, but nevertheless end up in it), and dietary supplements. The specific standards which the FDA exercises differ from one category to the next. Furthermore, the FDA has been granted a variety of means by which it can address violations of the standards for a given category of substances.

The European Union's Food Supplements Directive of 2002 requires that supplements be demonstrated to be safe, both in dosages and in purity.[34] Only those supplements that have been proven to be safe may be sold in the bloc without prescription. As a category of food, food supplements cannot be labeled with drug claims but can bear health claims and nutrition claims.[35]

The dietary supplements industry in the United Kingdom (UK), one of the 28 countries in the bloc, strongly opposed the Directive. In addition, a large number of consumers throughout Europe, including over one million in the UK, and various doctors and scientists, had signed petitions by 2005 against what are viewed by the petitioners as unjustified restrictions of consumer choice.[36]

In 2004, along with two British trade associations, the Alliance for Natural Health (ANH) had a legal challenge to the Food Supplements Directive[37] referred to the European Court of Justice by the High Court in London.[38]

Although the European Court of Justice's Advocate General subsequently said that the bloc's plan to tighten rules on the sale of vitamins and food supplements should be scrapped,[39] he was eventually overruled by the European Court, which decided that the measures in question were necessary and appropriate for the purpose of protecting public health. ANH, however, interpreted the ban as applying only to synthetically produced supplements, and not to vitamins and minerals normally found in or consumed as part of the diet.[40]

Nevertheless, the European judges acknowledged the Advocate General's concerns, stating that there must be clear procedures to allow substances to be added to the permitted list based on scientific evidence. They also said that any refusal to add the product to the list must be open to challenge in the courts.[41]

Effects of most dietary supplements have not been determined in randomized clinical trials and manufacturing is lightly regulated; randomized clinical trials of certain vitamins and antioxidants have found increased mortality rates.[42][43]

Excerpt from:

Dietary supplement - Wikipedia, the free encyclopedia

Posted in Food Supplements

Food fortification – Wikipedia, the free encyclopedia

Posted: at 8:19 pm

Food fortification or enrichment is the process of adding micronutrients (essential trace elements and vitamins) to food. Fortification may be a purely commercial choice to provide extra nutrients in a food, or a public health policy aimed at reducing the number of people with dietary deficiencies within a population.

Diets that lack variety can be deficient in certain nutrients. Sometimes the staple foods of a region can lack particular nutrients, due to the soil of the region or because of the inherent inadequacy of the normal diet. Addition of micronutrients to staples and condiments can prevent large-scale deficiency diseases in these cases.

While it is true that both fortification and enrichment refer to the addition of nutrients to food, the two definitions differ slightly. As defined by the World Health Organization (WHO) and the Food and Agriculture Organization of the United Nations (FAO), fortification refers to "the practice of deliberately increasing the content of an essential micronutrient, i.e. vitamins and minerals (including trace elements) in a food irrespective of whether the nutrients were originally in the food before processing or not, so as to improve the nutritional quality of the food supply and to provide a public health benefit with minimal risk to health," whereas enrichment is defined as "synonymous with fortification and refers to the addition of micronutrients to a food which are lost during processing."[1]

Food fortification was identified as the second strategy of four by the WHO and FAO to begin decreasing the incidence of nutrient deficiencies at the global level.[1]

The FAO has outlined which foods are most commonly fortified.

There are four main methods of food fortification, each named to indicate the procedure used to fortify the food.

The WHO and FAO, among many other nationally recognized organizations, have recognized that there are over 2 billion people worldwide who suffer from a variety of micronutrient deficiencies. In 1992, 159 countries pledged at the FAO/WHO International Conference on Nutrition to make efforts to help combat these issues of micronutrient deficiencies, highlighting the importance of decreasing the number of those with iodine, vitamin A, and iron deficiencies.[1] A significant statistic that led to these efforts was the discovery that approximately 1 in 3 people worldwide were at risk for either an iodine, vitamin A, or iron deficiency.[4] Although it is recognized that food fortification alone will not combat this deficiency, it is a step towards reducing the prevalence of these deficiencies and their associated health conditions.[5]

In Canada, the Food and Drug Regulations outline specific criteria that justify food fortification.

There are also several advantages to addressing nutrient deficiencies among populations via food fortification as opposed to other methods. These include, but are not limited to: treating a population without specific dietary interventions and therefore without requiring a change in dietary patterns, continuous delivery of the nutrient, no requirement for individual compliance, and the potential to maintain nutrient stores more efficiently if the fortified food is consumed regularly.[3]

Several organizations, such as the WHO, FAO, Health Canada, and the Nestlé Research Center, acknowledge that there are limitations to food fortification. Any discussion of nutrient deficiencies immediately raises the related issue of nutrient toxicities: fortifying foods may deliver toxic amounts of a nutrient to an individual and cause the associated side effects. As seen in the case of fluoride toxicity below, the result can be irreversible staining of the teeth. Although this is a relatively minor toxic effect on health, several others are more severe.[7]

The WHO states that limitations to food fortification may include: human rights issues indicating that consumers have the right to choose if they want fortified products or not, the potential for insufficient demand of the fortified product, increased production costs leading to increased retail costs, the potential that the fortified products will still not be a solution to nutrient deficiencies amongst low income populations who may not be able to afford the new product, and children who may not be able to consume adequate amounts thereof.[1]

Food safety worries led to legislation in Denmark in 2004 restricting foods fortified with extra vitamins or minerals. Products banned include: Rice Krispies, Shreddies, Horlicks, Ovaltine and Marmite.[8]

The Danes said [Kellogg's] Corn Flakes, Rice Krispies and Special K would include "toxic" doses which, if eaten regularly, could damage children's livers and kidneys and harm fetuses in pregnant women.[9]

One factor that limits the benefits of food fortification is that isolated nutrients added back into a processed food that has had many of its nutrients removed are not always as bioavailable as they would be in the original, whole food. An example is skim milk that has had the fat removed and vitamins A and D added back. Vitamins A and D are both fat-soluble and non-water-soluble, so a person consuming skim milk in the absence of fats may not be able to absorb as much of these vitamins as one would from drinking whole milk.

Phytochemicals such as polyphenols can also impact nutrient absorption.

Ecological studies have shown that increased B vitamin fortification is correlated with the prevalence of obesity and diabetes.[10] Daily consumption of iron per capita in the United States has dramatically surged since World War II and nearly doubled over the past century due to increases in iron fortification and increased consumption of meat.[11] Existing evidence suggests that excess iron intake may play a role in the development of obesity, cardiovascular disease, diabetes and cancer.[12]

Fortification of foods with folic acid has been mandated in many countries solely to improve the folate status of pregnant women and so prevent neural tube defects, a relatively rare birth defect which affected 0.5% of US births before fortification began.[13][14] However, when fortification is introduced, several hundred thousand people are exposed to an increased intake of folic acid for each neural tube defect pregnancy that is prevented.[15] In humans, increased folic acid intake leads to elevated blood concentrations of naturally occurring folates and of unmetabolized folic acid. High blood concentrations of folic acid may decrease natural killer cell cytotoxicity, and high folate status may reduce the response to drugs used to treat malaria, rheumatoid arthritis, psoriasis, and cancer.[15] A combination of high folate levels and low vitamin B-12 status may be associated with an increased risk of cognitive impairment and anemia in the elderly and, in pregnant women, with an increased risk of insulin resistance and obesity in their children.[15] Folate has a dual effect on cancer, protecting against cancer initiation but facilitating progression and growth of preneoplastic cells and subclinical cancers.[15] Furthermore, intake of folic acid from fortification has turned out to be significantly greater than originally modeled in pre-mandate predictions.[16] Therefore, a high folic acid intake due to fortification may be harmful to more people than the policy is designed to help.[14][15][17][18]
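To see where a ratio of this size comes from, a rough back-of-envelope calculation helps; in the Python sketch below, the population and prevented-case figures are assumed round numbers chosen for illustration, not values taken from the cited studies.

    # Rough sketch of the exposure-to-benefit ratio described above.
    # Both inputs are assumed round numbers, not figures from the cited studies.
    us_population = 320_000_000        # assumed: nearly everyone eats fortified grain products
    ntd_pregnancies_prevented = 1_000  # assumed annual prevented cases (order of magnitude)

    exposed_per_case = us_population / ntd_pregnancies_prevented
    print(f"{exposed_per_case:,.0f} people exposed per NTD pregnancy prevented")
    # -> 320,000, i.e. "several hundred thousand", consistent with the text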

There is a concern that micronutrients are legally defined in such a way that does not distinguish between different forms, and that fortified foods often have nutrients in a balance that would not occur naturally. For example, in the U.S., food is fortified with folic acid, which is one of the many naturally-occurring forms of folate, and which only contributes a minor amount to the folates occurring in natural foods.[19] In many cases, such as with folate, it is an open question whether there are any benefits or risks to consuming folic acid in this form.

In many cases, the micronutrients added to foods in fortification are synthetic.

In some cases, certain forms of micronutrients can be actively toxic in a sufficiently high dose, even if other forms are safe at the same or much higher doses. There are examples of such toxicity in both synthetic and naturally-occurring forms of vitamins. Retinol, the active form of Vitamin A, is toxic in a much lower dose than other forms, such as beta carotene. Menadione, a phased-out synthetic form of Vitamin K, is also known to be toxic.[20]

There are several main groups of food supplements.

Many foods and beverages worldwide have been fortified, whether a voluntary action by the product developers or by law. Although some may view these additions as strategic marketing schemes to sell their product, there is a lot of work that must go into a product before simply fortifying it. In order to fortify a product, it must first be proven that the addition of this vitamin or mineral is beneficial to health, safe, and an effective method of delivery. The addition must also abide by all food and labeling regulations and support nutritional rationale. From a food developer's point of view, they also need to consider the costs associated with this new product and whether or not there will be a market to support the change.[21]

Examples of foods and beverages that have been fortified and shown to have positive health effects:

"Iodine deficiency disorder (IDD) is the single greatest cause of preventable mental retardation. Severe deficiencies cause cretinism, stillbirth and miscarriage. But even mild deficiency can significantly affect the learning ability of populations........ Today over 1 billion people in the world suffer from iodine deficiency, and 38 million babies born every year are not protected from brain damage due to IDD."Kul Gautam, Deputy Executive Director, UNICEF, October 2007[22]

Iodized salt has been used in the United States since before World War II. It was discovered in 1821 that goiters could be treated with iodized salts; however, it was not until 1916 that iodized salts could be tested in a research trial as a preventive measure against goiters. By 1924, iodized salt was readily available in the US.[23]

Currently in Canada and the US, the RDA for iodine ranges from as low as 90 µg/day for children (4–8 years) to as high as 290 µg/day for breast-feeding mothers.[24]

Diseases that are associated with an iodine deficiency include: mental retardation, hypothyroidism, and goiter. There is also a risk of various other growth and developmental abnormalities.[24]

Folic acid (also known as folate) functions in reducing blood homocysteine levels, forming red blood cells, proper growth and division of cells, and preventing neural tube defects (NTDs).[25]

In many industrialized countries, the addition of folic acid to flour has prevented a significant number of NTDs in infants. Two common types of NTDs, spina bifida and anencephaly, affect approximately 2,500–3,000 infants born in the US annually. Research trials have shown that supplementing pregnant mothers with folic acid can reduce the incidence of NTDs by 72%.[26]

The RDA for folic acid ranges from as low as 150 µg/day for children aged 1–3 years, to 400 µg/day for males and females over the age of 19, and 600 µg/day during pregnancy.[27]

Diseases associated with folic acid deficiency include: megaloblastic or macrocytic anemia, cardiovascular disease, certain types of cancer, and NTDs in infants.[28]

Niacin has been added to bread in the USA since 1938 (when voluntary addition started), a programme which substantially reduced the incidence of pellagra.[29] As early as 1755, pellagra was recognized by doctors as a niacin deficiency disease, although it did not officially receive the name pellagra until 1771.[30] Pellagra was seen amongst poor families who used corn as their main dietary staple. Although corn itself contains niacin, it is not in a bioavailable form unless it undergoes nixtamalization (treatment with alkali, traditional in Native American cultures), and therefore it did not contribute to the overall intake of niacin.[31] Although pellagra can still be seen in developing countries, fortification of food with niacin played a huge role in eliminating the prevalence of the disease.[30]

The RDA for niacin is 2 mg NE (niacin equivalents)/day (an AI, or adequate intake) for infants aged 0–6 months, 16 mg NE/day for males, and 14 mg NE/day for females over the age of 19.[31]

Diseases associated with niacin deficiency include pellagra, whose signs and symptoms are known as the "3 Ds": dermatitis, dementia, and diarrhea. Others may include vascular or gastrointestinal diseases.[30]

Common diseases which present a high frequency of niacin deficiency include alcoholism, anorexia nervosa, HIV infection, gastrectomy, malabsorptive disorders, and certain cancers and their associated treatments.[30]

Since vitamin D is a fat-soluble vitamin, it cannot be added to a wide variety of foods; those it is commonly added to include margarine, vegetable oils and dairy products.[32] During the late 1800s, after cures for scurvy and beriberi had been discovered, researchers aimed to see whether the disease later known as rickets could also be cured by food. Their results showed that sunlight exposure and cod liver oil were the cure. It was not until the 1930s that vitamin D was actually linked to curing rickets.[33] This discovery led to the fortification of common foods such as milk, margarine, and breakfast cereals, and it turned a condition in which an astonishing 80–90% of children showed varying degrees of bone deformation due to vitamin D deficiency into a very rare one.[34]

There are several risk factors for vitamin D deficiency.

The current RDA for infants aged 0–6 months is 10 µg (400 International Units, IU) per day, and for adults over 19 years of age it is 15 µg (600 IU) per day.[35]
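The figures just quoted imply a conversion factor of 40 IU per microgram of vitamin D. A minimal Python sketch, assuming that factor, makes the correspondence explicit:

    # Vitamin D unit conversion implied by the RDA figures above
    # (10 ug = 400 IU, 15 ug = 600 IU), i.e. 40 IU per microgram.
    IU_PER_MICROGRAM = 40

    def micrograms_to_iu(micrograms: float) -> float:
        """Convert a vitamin D dose from micrograms to International Units."""
        return micrograms * IU_PER_MICROGRAM

    assert micrograms_to_iu(10) == 400  # infants 0-6 months
    assert micrograms_to_iu(15) == 600  # adults over 19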

Diseases associated with a vitamin D deficiency include rickets, osteoporosis, and certain types of cancer (breast, prostate, colon and ovaries). It has also been associated with increased risks for fractures, heart disease, type 2 diabetes, autoimmune and infectious diseases, asthma and other wheezing disorders, myocardial infarction, hypertension, congestive heart failure, and peripheral vascular disease.[34]

Although fluoride is not considered an essential mineral, it is seen as crucial in the prevention of tooth decay and the maintenance of adequate dental health.[36] In the mid-1900s it was discovered that in towns with a high level of fluoride in the water supply, residents' teeth showed both brown spotting and an unusual resistance to dental caries. This led to the fortification of water supplies with fluoride in amounts that retain the resistance to dental caries but avoid the staining caused by fluorosis (a condition caused by fluoride toxicity).[37] The tolerable upper intake level (UL) set for fluoride ranges from 0.7 mg/day for infants aged 0–6 months to 10 mg/day for adults over the age of 19.

Conditions commonly associated with fluoride deficiency are dental caries and osteoporosis.[36]

There are many other examples of fortified foods.

The science of using foods and food supplements to achieve a defined health goal has some basis, but its ethics are controversial. A common example of this use of food supplements is the extent to which bodybuilders use amino acid mixtures, vitamins and phytochemicals to enhance natural hormone production, increase muscle and reduce fat. The literature is not settled on an appropriate method for this use of fortification, and it may therefore not be recommended due to safety concerns.[42]

There is interest in the use of food supplements in established medical conditions. This nutritional supplementation using foods as medicine (nutraceuticals) has been effectively used in treating disorders affecting the immune system up to and including cancers.[43] This goes beyond the definition of "food supplement", but should be included for the sake of completeness.

View original post here:

Food fortification - Wikipedia, the free encyclopedia

Posted in Food Supplements

resource-based view – Create Advantage

Posted: at 8:19 pm

The resource-based view (RBV) of the firm and strategy stands in contrast to the product, or positional, view. This view of the firm started with the seminal work of Penrose (1959), was touched on by Selznick (1957) with his notion of distinctive competencies, was defined by Wernerfelt (1984), and was elaborated on by Barney in several works (1986a, 1986b, 1991, 2001). The RBV combines the internal analysis of phenomena within companies (a preoccupation of the "distinctive" and "core competency" group) with the external analysis of the industry and the competitive environment (a focus of the industrial organization group).

Basic description of the resource-based view (Newbert, 2008) -- To fully appreciate this theory, it is necessary to understand the terms used.

A firm that has attained a competitive advantage has created greater economic value (the difference between the perceived benefits of a resource-capability combination and the economic cost to exploit it) than its competitors. Economic value is generally created by producing products and/or services with either greater benefits at the same cost compared to competitors (i.e. differentiation-based competitive advantage) or the same benefits at lower cost compared to competitors (i.e. efficiency-based competitive advantage) (Peteraf and Barney, 2003).

Because superior benefits tend to enhance customer loyalty and perceived quality (Zou, Fang, and Zhao, 2003), a firm that can exploit its resource-capability combinations to effectively attain a differentiation-based competitive advantage should be able to improve its performance compared to competitors by selling more units at the same margin (i.e., parity price) or by selling the same number of units at a greater margin (i.e., premium price).

Furthermore, because a superior cost structure enables greater pricing flexibility as well as the ability to increase available surplus (Barua et al., 2004; Porter and Millar, 1985; Zou et al., 2003), a firm that can exploit its resource-capability combinations to effectively attain an efficiency-based competitive advantage should be able to improve its performance compared to competitors by selling more units at the same margin (i.e., low price) or by selling the same number of units at a greater margin (i.e., parity price). In either case, it is logical to assume that a firm that attains a competitive advantage, whether in the form of greater benefits at the same cost or the same benefits at lower cost, will be able to improve its performance in ways that its competitors cannot.
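To make the two routes to advantage concrete, here is a minimal Python sketch of Peteraf and Barney's notion of economic value (perceived benefit minus economic cost); the firms and numbers are hypothetical, invented purely for illustration:

    # Economic value per Peteraf and Barney (2003): perceived benefit
    # minus economic cost. All numbers are hypothetical.
    def economic_value(benefit: float, cost: float) -> float:
        return benefit - cost

    rival = economic_value(benefit=100, cost=70)            # baseline competitor
    differentiation = economic_value(benefit=120, cost=70)  # greater benefits, same cost
    efficiency = economic_value(benefit=100, cost=50)       # same benefits, lower cost

    # Either route creates more value (50 vs. 30), which the firm can
    # take as a premium price, a lower price, or greater volume.
    assert differentiation > rival and efficiency > rival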

Practitioner implications -- Given that (1) performance advantage results when valuable and rare combinations of resources and capabilities are applied to reduce costs, exploit market opportunities, and/or neutralize competitive threats, (2) firms of all sizes can achieve advantage, and (3) with novelty one can produce rare and valuable (unique) combinations of resources and capabilities from even common resources and capabilities -- the pursuit of novelty to develop a truly unique basis for advantage is conceivably within the reach of all firms. Distinctive competency (Selznick, 1957) and its renewal is an essential pursuit in the evolution of the firm.

Competitively valuable resources (Collis and Montgomery, 1995) -- A resource-capability combination's value (for expediency, 'resource' in this section) is due to its deployment towards competitive advantage: (1) reduction of costs, (2) the exploitation of market opportunities, and/or (3) the neutralization of competitive threats. The 'test' of the strategic value of a resource is five-fold, connecting the internal and external factors related to the resource.

Further points... See organizational economics and industrial organization. These terms describe the broad areas of knowledge relating to the positional view of strategy and the resource view of strategy.

The ""resource view"", contends that a firm's internal resources and capabilities are the best source of competitive advantage over other firms. An approach to strategy with this view then seeks to find or develop distinctive competencies and resources, applying them to produce superior value. To the extent that these competencies can be kept unique to the firm, they can be used to develop a competitive advantage.

Competitive advantage -- (Barney, 1991) The resource-based view focuses on internal resources, the firm's strengths and weaknesses, in contrast to the positional or environmental models of competitive advantage, which focus on opportunities and threats.

Assumptions -- (Barney, 1991) The resource-based model assumes that firms within an industry (or group) may be heterogeneous with respect to the strategic resources they control. Second, this model assumes that these resources may not be perfectly mobile across firms, thus heterogeneity may be long-lasting.

Resource-based theory -- Resource-based theory sees the firm as a collection of assets, or capabilities. In the modern economy, most of these assets and capabilities are intangible. The success of corporations is based on those of their capabilities that are distinctive. Companies with distinctive capabilities have attributes which others cannot replicate, even after they realise the benefit they offer to the company which originally possesses them.

Business strategy involves identifying a firm's capabilities: putting together a collection of complementary assets and capabilities, and maximising and defending the economic rents which result. The concept of economic rent is central in linking the competitive advantage of the firm to conventional measures of performance.

John Kay, http://www.johnkay.com/about/, April 7, 2007

Highly efficient resources, uniquely efficient, form a resource position barrier that is effective because of the lower expected returns on the same type of resources if acquired by a competitor. One's chance of maximizing market imperfections and perhaps getting a cheap resource buy would be greatest if one tried to build on one's most unusual resource or resource position. Looking at diversified firms as portfolios of resources rather than portfolios of products gives a different and perhaps richer perspective on their growth prospects (Wernerfelt, 1984).

Strategy for diversified firms (Wernerfelt, 1984) -- The resource perspective provides a basis for addressing some key issues in the formulation of strategy for diversified firms.

""Strategy for a bigger firm involves striking a balance between the exploitation of existing resources and the development of new ones. In analogy to the growth-share matrix, this can be visualized in what we will call a resource-product matrix.""

See the original post:

resource-based view - Create Advantage

Posted in Resource Based Economy

The U.S. Basic Income Guarantee Network

Posted: at 8:19 pm

WINNIPEG, MANITOBA

May 12–15, 2016

The North American Basic Income Guarantee Congress 2016 is rapidly approaching. It will take place in Winnipeg, Canada at the University of Manitoba on May 12–15. The detailed PROGRAMME is now available at: http://umanitoba.ca/faculties/social_work/about/879.html REMINDER: early bird registration rates are available only until April 20.

The congress is co-organized by the Basic Income Canada Network, the United States Basic Income Guarantee Network, Basic Income Manitoba, and the University of Manitoba. It will bring together social activists, policy advocates, researchers, government officials, and community members interested in the provision of an unconditional, universal, adequate basic income for all. Visit tourismwinnipeg.com for information about Winnipeg, Manitoba. Please direct inquiries to: nabigcongress2016@umanitoba.ca.

Call for Participation and other details

The basic income guarantee (BIG) is a government-insured guarantee that no citizen's income will fall below some minimal level for any reason. All citizens would receive a BIG without a means test or work requirement. BIG is an efficient and effective solution to poverty that preserves individual autonomy and work incentives while simplifying government social policy. Some researchers estimate that a small BIG, sufficient to cut the poverty rate in half, could be financed without an increase in taxes by redirecting funds from spending programs and tax deductions aimed at maintaining incomes. Click here for more information.
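To make the scale of that financing claim concrete, here is a toy back-of-envelope calculation in Python; every figure (population, grant size, redirected funds) is an invented placeholder, not a USBIG estimate:

    # Toy gross/net cost of a small BIG. All numbers are invented
    # placeholders, not USBIG's estimates.
    population = 320_000_000   # assumed number of recipients
    annual_grant = 2_000       # assumed small per-person grant, in dollars

    gross_cost = population * annual_grant
    # Redirecting existing income-maintenance spending and tax
    # deductions (the offsets the text mentions) lowers the net cost.
    redirected_funds = 500_000_000_000  # assumed offset, in dollars

    net_cost = gross_cost - redirected_funds
    print(f"gross ${gross_cost / 1e9:.0f}B, net ${net_cost / 1e9:.0f}B")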

The U.S. Basic Income Guarantee Network (The USBIG Network) is an informal group promoting the discussion of the basic income guarantee in the United States. USBIG (pronounced "U.S. big") publishes an email newsletter (subscription 500) every two months, maintains an on-line discussion paper series, and holds yearly conferences.

USBIG was founded in December 1999 by Fred Block of the University of California-Davis, Charles M. A. Clark of St. John's University, Pamela Donovan of the City University of New York, Michael Lewis of the State University of New York-Stony Brook, and Karl Widerquist, then of the Levy Economics Institute. The USBIG Coordinating Committee has thirteen members: Michael Howard of the University of Maine (Coordinator), michael.howard@umit.maine.edu; Steve Shafarman, author (Activist Coordinator), steve@IncomeSecurityForAll.org; Michael Lewis, Silberman School of Social Work, Hunter College; Eri Noguchi of Columbia University; Dan O'Sullivan of Rise Up Economics, danosully@gmail.com; Jason Murphy, murphyjb@slu.edu; Jeff Smith, Forum on Geonomics, jjs@geonomics.org; Dianne Pagen, dianepagen@yahoo.com; Mark Witham, mwitham@basicincomeproject.org; Shawn Cassiman, scassiman1@udayton.edu; Ann Withorn, withorn.ann@gmail.com; Alanna Hartzok, alanna@centurylink.net. Click here for more information, or email michael.howard@umit.maine.edu.

Go here to see the original:

The U.S. Basic Income Guarantee Network

Posted in Basic Income Guarantee

Wage-Slavery and Republican Liberty | Jacobin

Posted: at 8:19 pm

Generations of workers critiqued wage-labor in the name of republican liberty.

In a recent interview, historian Quentin Skinner had the following to say about Karl Marx and the republican theory of liberty. The republican or neo-Roman theory says that we are unfree when we are subject to another person's will:

I am very struck by the extent to which Marx deploys, in his own way, a neo-Roman political vocabulary. He talks about wage slaves, and he talks about the dictatorship of the proletariat. He insists that, if you are free only to sell your labour, then you are not free at all. He stigmatises capitalism as a form of servitude. These are all recognizably neo-Roman moral commitments.

Skinner also says that this is a question which would bear a great deal more investigation than it has received.

I have been engaging in some of this investigation. It is not just Marx, or even primarily Marx, who believed that the neo-Roman theory of freedom leads directly to a critique of wage-slavery. As early as the late 1820s, urban workers seized on the inherited republicanism of the American Revolution and applied it to the wage-labor relationship. They organized themselves city-by-city into the first self-conscious political parties of labor, and their main campaign was against wage-slavery.

They argued that the wealthy keep us in a state of humble dependence through their monopoly control of the means of production. As Thomas Skidmore, founder of the Workingmen's Party of New York, put it:

thousands of our people of the present day in deep distress and poverty, dependent for their daily subsistence upon a few among us whom the unnatural operation of our own free and republican institutions, as we are pleased to call them, has thus arbitrarily and barbarously made enormously rich.

Their humble dependence meant that they had no choice but to sell their labor to some employer or another. Their only chance of leading a decent life was if some employer would give them a job. Though formally free, these workers were nonetheless economically dependent and thus unfree. That is why they saw themselves as denied their rightful republican liberty, and why wage-labor merited the name slavery. Skidmore made the comparison with classical slavery the most explicit:

For he, in all countries is a slave, who must work more for another than that other must work for him. It does not matter how this state of things is brought about; whether the sword of victory hew down the liberty of the captive, and thus compel him to labor for his conqueror, or whether the sword of want extort our consent, as it were, to a voluntary slavery, through a denial to us of the materials of nature

The critique of wage-slavery in the name of republican liberty could hardly be clearer.

Given their analysis of wage-labor, these artisan republicans were inexorably led to radical conclusions about the conditions that could restore workers their full independence. Every leading figure of these early workingmen's parties made some form of the argument that the "principles of equal distribution [of property be] everywhere adopted" or that it was necessary to "equalize property." Here, the property to be equally distributed was clearly means of production. And it was to be distributed not just in the form of land, but as cooperative control over factories and other implements.

For instance, the major report articulating the principles of the Workingmen's Party of New York included the demand for "AN EQUAL AMOUNT OF PROPERTY ON ARRIVING AT THE AGE OF MATURITY." Only with control over this kind of property could workers' structural dependence on owners be eliminated. For these Workies, following out the logic of the republican theory led not to a nostalgic, agrarian idealism, but to the view that each person's independence depended upon everyone possessing equal and collective control of productive resources. Even more striking, they argued that the only way to achieve this condition of independence was through the joint political efforts of the dependent or enslaved class.

As Langdon Byllesby, one of the earliest of these worker republicans, wrote, "history does not furnish an instance wherein the depository of power voluntarily abrogated its prerogative, or the oppressor relinquished his advantages in favour of the oppressed." It was up to the dependent classes, through the agency of their workingmen's parties, to realize a cooperative commonwealth.

There is an important historical connection between these radical artisans and Marx. As Maximilien Rubel and Lewis Feuer have shown, just at the time that Marx turned from Hegelian philosophy to political economy, in 1841–2, he began to read comparative political history. He was particularly interested in the American republic, and read three main sources: Beaumont, Tocqueville, and a less well-known Englishman, Thomas Hamilton. Hamilton was a former colonel who wrote his own, very popular observation of his time traveling in the United States, called Men and Manners in America, published in 1833. For Marx, Hamilton was the best source of the three because Hamilton, unlike the Frenchmen, actually met with and spoke to leaders of the Workingmen's Party of New York. That section of Hamilton's travelogue includes ominous references to the "Extreme Gauche" of the Workies who wish to introduce "an AGRARIAN LAW, and a periodical division of property," and includes gloomy reflections on the coming anarchy and despoliation. It is these very sections of Hamilton that Marx copied into his notebooks during this period of preparatory study.

Unbeknown to Marx, he was copying a copy. In those sections of Men and Manners, Hamilton had essentially transcribed parts of Thomas Skidmore's report to the Workingmen's Party of New York, which were a distillation of the ideas that could be found in Skidmore's lengthy The Rights of Man to Property! Skidmore's book included the argument that property rights were invalid if they were used to make the poor economically dependent, allowing owners to live "in idleness, partial or total, thus supporting himself, more or less, on the labors of others."

If property rights were illegitimate the minute they were used to make some dependent on others, then it was clear all freedom-loving citizens were justified in transforming property relations in the name of republican liberty. This was why Skidmore proposed the radical demand that the workers "APPROPRIATE ALSO, in the same way, THE COTTON FACTORIES, THE WOOLEN FACTORIES, THE IRON FOUNDERIES, THE ROLLING MILLS, HOUSES, CHURCHES, SHIPS, GOODS, STEAM-BOATS, FIELDS OF AGRICULTURE, &c. &c. &c. in manner as proposed in this work, AND AS IS THEIR RIGHT." The manner proposed for this expropriation of the expropriators was not violent revolution but a state constitutional convention in which all property would be nationalized and then redistributed in shares of equal value, to be used to form cooperatives or buy land.

Marx never knew these labor republicans by name, nor any of their primary writings, but it is clear from his notebooks that their ideas and political self-organization contributed to his early thinking, especially at the moment at which he was formulating his view of workers as the universal class. Indeed, in On the Jewish Question, Beaumont, Tocqueville, and the Englishman Hamilton's accounts of the United States feature heavily in Marx's discussion of America. It is there that Marx makes the famous distinction between political and human emancipation, arguing that the American republic shows us most clearly the distinction between the two. This was almost exactly the same distinction that the Workies made when saying, as Philadelphian Samuel Simpson did, "the consequence now is, that while the government is republican, society in its general features, is as regal as it is in England." A republican theory of wage-slavery was developed well before Marx (see here for evidence of similar developments in France that were also very likely to have influenced Marx).

In the United States, the republican critique of wage-labor went into abeyance for a time after the 1840s; or, more appropriately, it was absorbed into the agrarian socialism of the National Reform Association, a tale masterfully told by the historian Mark Lause in Young America: Land, Labor and Republican Community. But labor republicanism exploded back onto the political scene in the United States after the Civil War, especially with leading figures around the Knights of Labor and the eight-hour movement. The Knights were for a time one of the most powerful organizations in the country; they organized skilled and unskilled labor together, and at their peak included more than 700,000 official members, probably representing more than 1 million participating workers. The Knights used the republican concept of liberty to assert the universal interests of labor and to argue for the transformation of American society. George McNeill, a leading Knight, wrote that "there is an inevitable and irresistible conflict between the wage-system of labor and the republican system of government." Ira Steward, most famous as an eight-hour campaigner, demanded a "republicanization of labor," as well as a "republicanization of government."

These turns of phrase were more than rhetorical gestures. They were self-conscious appeals to the republican theory. Indeed, the Journal of United Labor even reproduced a famous passage on slavery from Algernon Sidney's Discourses on Government in order to articulate why wage-labor was a form of servitude. The passage goes:

Slavery. The weight of chains, number of stripes, hardness of labor, and other effects of a master's cruelty, may make one servitude more miserable than another; but he is a slave who serves the gentlest man in the world, as well as he who serves the worst; and he does serve him if he must obey his commands and depend upon his will.

This passage, and Sidney's writings, have played a major role in contemporary scholarship on early modern republicanism, and here it is deployed to critique not the political enslavement to a monarch but wage-slavery.

In fact, the labor republicans not only drew on the republican theory but further developed it in light of the new dynamics of industrial capitalism. They noted that there were two interconnected forms of dependence. One was the general or structural dependence of the wage-laborer on employers, defined by the fact that the monopoly of control over productive property by some left the rest dependent upon those owners for their livelihoods. This, as George McNeill put it, meant that workers "assent but they do not consent, they submit but do not agree."

The voluntaristic language here was meant to capture how, though the workers were not literally slaves, they were nonetheless compelled to work for others. As Skinner has shown in his book on Hobbes, it is precisely this conflation of voluntaristic action and freedom that modern republicans have always rejected, and which their enemies, like Hobbes, have regularly defended. Here, though, the worker's dependence was not a feature so much of being the legal property of another as of being forced, by economic need, to sell his labor:

when a man is placed in a position where he is compelled to give the benefit of his labor to another, he is in a condition of slavery, whether the slave is held in chattel bondage or in wages bondage, he is equally a slave.

Emancipation may have eliminated chattel slavery, but, as eight-hour campaigner Ira Steward once put it, the creation of this new form of economic dependence meant "something of slavery still remains; something of freedom is yet to come."

According to labor republicans, the structural dependence of the wage-laborer was translated, through the labor contract, into a more personal form of servitude to the employer. After all, the contract was an agreement of obedience in exchange for wages. It was an agreement to alienate control over one's own activity in exchange for the privilege of having enough money to buy necessities, and perhaps a few luxuries. Indeed, even if the wages were fairly high, the point of the contract was to become subject to the will of a specific owner or his manager. As one anonymous author put it in the Journal of United Labor, "Is there a workshop where obedience is not demanded, not to the difficulties or qualities of the labor to be performed, but to the caprice of he who pays the wages of his servants?" As nearly every scholar of republican thought has noted, the language of being subject to the caprice of another is one of the most enduring rhetorical tropes of the neo-Roman theory of freedom. It is no accident that it would feature so heavily in labor republican arguments about domination in the workplace.

It was for this reason that the Knights of Labor believed that the only way to republicanize labor was "to abolish as rapidly as possible, the wage system, substituting co-operation therefore." The point of a cooperative system was that property was collectively owned and work cooperatively managed. Only when the class differences between owners and workers were removed could republican liberty be truly universalized. It would, at once, remove both the structural and the personal dependence of workers.

As William H. Sylvis, one of the earliest of these figures, argued, cooperation "renders the workman independent of necessities which often compel him to submit to hectoring, domineering, and insults of every kind." What clearer statement could there be of the connection between the republican theory of liberty, economic dependence, and the modern wage-system? Here was a series of arguments that flowed naturally from the principles of the American Revolution.

To demand that there be "a people in industry, as in government" was simply to argue that the cooperative commonwealth was nothing more than the culmination and completion of the American Revolution's republican aspirations.

Continued here:

Wage-Slavery and Republican Liberty | Jacobin

Posted in Wage Slavery

Bob Black – Wikipedia, the free encyclopedia

Posted: at 8:19 pm

Bob Black
Born: Robert Charles Black, Jr., January 4, 1951 (age 65), Detroit, Michigan
Alma mater: University of Michigan
Era: 20th-century philosophy
Region: Western philosophy
School: Post-left anarchy

Robert Charles "Bob" Black, Jr. (born January 4, 1951) is an American anarchist. He is the author of the books The Abolition of Work and Other Essays, Beneath the Underground, Friendly Fire, Anarchy After Leftism, Defacing the Currency, and numerous political essays.

Black graduated from the University of Michigan and Georgetown Law School. He later took M.A. degrees in jurisprudence and social policy from the University of California (Berkeley), criminal justice from the State University of New York (SUNY) at Albany, and an LL.M. in criminal law from the SUNY Buffalo School of Law. During his college days (1969-1973) he became disillusioned with the New Left of the 1970s and undertook extensive readings in anarchism, utopian socialism, council communism, and other left tendencies critical of both Marxism-Leninism and social democracy. He found some of these sources at the Labadie Collection at the University of Michigan, a major collection of radical, labor, socialist, and anarchist materials which is now the repository for Black's papers and correspondence. He was soon drawn to Situationist thought, egoist communism, and the anti-authoritarian analyses of John Zerzan and the Detroit magazine Fifth Estate. He produced a series of ironic political posters signed "The Last International", first in Ann Arbor, Michigan, then in San Francisco where he moved in 1978. In the Bay Area he became involved with the publishing and cultural underground, writing reviews and critiques of what he called the "marginals milieu." Since 1988 he has lived in upstate New York.[1]

Black is best known for a 1985 essay, "The Abolition of Work," which has been widely reprinted and translated into at least thirteen languages (most recently, Urdu). In it he argued that work is a fundamental source of domination, comparable to capitalism and the state, which should be transformed into voluntary "productive play." Black acknowledged among his inspirations the French utopian socialist Charles Fourier, the British utopian socialist William Morris, the Russian anarcho-communist Peter Kropotkin, and the Situationists. The Abolition of Work and Other Essays, published by Loompanics in 1986, included, along with the title essay, some of his short Last International texts, and some essays and reviews reprinted from his column in "San Francisco's Appeal to Reason," a leftist and counter-cultural tabloid published from 1980 to 1984.

Two more essay collections were later published as books, Friendly Fire (Autonomedia, 1992) and Beneath the Underground (Feral House, 1994), the latter devoted to the do-it-yourself/fanzine subculture of the '80s and '90s which he called "the marginals milieu" and in which he had been heavily involved. Anarchy after Leftism (C.A.L. Press, 1996) is a more or less point-by-point rebuttal of Murray Bookchin's Social Anarchism or Lifestyle Anarchism: An Unbridgeable Chasm (A.K. Press, 1996), which had criticized as "lifestyle anarchism" various nontraditional tendencies in contemporary anarchism. Black's short book ("about an even shorter book," as he put it) was succeeded, as an e-book published in 2011 at the online Anarchist Library, by Nightmares of Reason, a longer and more wide-ranging critique of Bookchin's anthropological and historical arguments, especially Bookchin's espousal of "libertarian municipalism," which Black ridiculed as "mini-statism."

In 1996 Black cooperated with the Seattle police Narcotics Division against Seattle author Jim Hogshire, leading to a police raid on Hogshire's home and the subsequent arrest of Hogshire and his wife.[2][3][4]

Since 2000, Black has focused on topics reflecting his education and reading in the sociology and the ethnography of law, resulting in writings often published in Anarchy: A Journal of Desire Armed. His recent interests have included the anarchist implications of dispute resolution institutions in stateless primitive societies (arguing that mediation, arbitration, etc., cannot feasibly be annexed to the U.S. criminal justice system, because they presuppose anarchism and a relative social equality not found in state/class societies). At the 2011 annual B.A.S.T.A.R.D. anarchist conference in Berkeley, California, Black presented a workshop where he argued that, in society as it is, crime can be an anarchist method of social control, especially for people systematically neglected by the legal system. An article based on this presentation appeared in Anarchy magazine and in his 2013 book, Defacing the Currency: Selected Writings, 1992-2012.

Black has expressed an interest, which grew out of his polemics with Bookchin, in the relation of democracy to anarchism. For Bookchin, democracy, the "direct democracy" of face-to-face assemblies of citizens, is anarchism. Some contemporary anarchists agree, including the academics Cindy Milstein, David Graeber, and Peter Staudenmaier. Black, however, has always rejected the idea that democracy (direct or representative) is anarchist. He made this argument at a presentation at the Long Haul Bookshop (in Berkeley) in 2008. In 2011, C.A.L. Press published as a pamphlet Debunking Democracy, elaborating on the speech and providing citation support. This too is reprinted in Defacing the Currency.

Some of his work from the early 1980s (anthologized in The Abolition of Work and Other Essays) highlights his critiques of the nuclear freeze movement ("Anti-Nuclear Terror"), the editors of Processed World ("Circle A Deceit: A Review of Processed World"), radical feminists ("Feminism as Fascism"), and right-wing libertarians ("The Libertarian As Conservative"). Some of these essays previously appeared in "San Francisco's Appeal to Reason" (1981-1984), a leftist and counter-cultural tabloid newspaper for which Black wrote a column.

"To demonize state authoritarianism while ignoring identical albeit contract-consecrated subservient arrangements in the large-scale corporations which control the world economy is fetishism at its worst ... Your foreman or supervisor gives you more or-else orders in a week than the police do in a decade."

The Abolition of Work and Other Essays (1986) draws upon some ideas of the Situationist International, the utopian socialists Charles Fourier and William Morris, anarchists such as Paul Goodman, and anthropologists such as Richard Borshay Lee and Marshall Sahlins. Black criticizes work for its compulsion, and, in industrial society, for taking the form of "jobs", the restriction of the worker to a single limited task, usually one which involves no creativity and often no skill. Black's alternative is the elimination of what William Morris called "useless toil" and the transformation of useful work into "productive play," with opportunities to participate in a variety of useful yet intrinsically enjoyable activities, as proposed by Charles Fourier. Beneath the Underground (1992) is a collection of texts relating to what Black calls the "marginals milieu", the do-it-yourself zine subculture which flourished in the '80s and early '90s. Friendly Fire (1992) is, like Black's first book, an eclectic collection touching on many topics, including the Art Strike, Nietzsche, the first Gulf War, and the Dial-a-Rumor telephone project he conducted with Zack Replica (1981-1983).

Defacing the Currency: Selected Writings, 1992-2012[6] was published by Little Black Cart Press in 2013. It includes a lengthy (113 pages), previously unpublished critique of Noam Chomsky, "Chomsky on the Nod." A similar collection has been published, in Russian translation, by Hylaea Books in Moscow. Black's most recent book, also from LBC Books, is Instead of Work, which collects "The Abolition of Work" and seven other previously published texts, with a lengthy new update, "Afterthoughts on the Abolition of Work." The introduction is by science fiction writer Bruce Sterling.

Follow this link:

Bob Black - Wikipedia, the free encyclopedia

Posted in Abolition Of Work | Comments Off on Bob Black – Wikipedia, the free encyclopedia

Series On Personal Empowerment at Psychology, Philosophy …

Posted: at 8:19 pm

The following articles are related to Series On Personal Empowerment at Psychology, Philosophy and Real Life.

Human beings have one amazing power, but one power only: the power of choice.

Our conceptualizations of the situations we find ourselves in can not only place us at a disadvantage, but can literally do us harm.

There's no need to red-flag action that you're willing to take if the disturbed character won't change. Don't threaten, just take action.

In the course of human relations, we frequently make agreements with one another. Because disturbed characters are not reliable, trustworthy, or prone to play fairly, making any kind of agreements with them can be a risky business indeed.

If you find yourself drained in a relationship, chances are you're doing way too much to make things work and not keeping the weight of responsibility where it belongs.

Ultimately, people have power only over one thing: the execution of their free will.

A person always loses power when they fail to set and enforce reasonable limits.

See original here:

Series On Personal Empowerment at Psychology, Philosophy ...

Posted in Personal Empowerment | Comments Off on Series On Personal Empowerment at Psychology, Philosophy …

Beyond Humanism: Reflections on Trans- and Posthumanism

Posted: at 8:19 pm

Abstract

I am focusing here on the main counterarguments that were raised against a thesis I put forward in my article "Nietzsche, the Overhuman, and Transhumanism" (2009), namely that significant similarities can be found on a fundamental level between the concept of the posthuman, as put forward by some transhumanists, and Nietzsche's concept of the overhuman. The articles with the counterarguments were published in the recent Nietzsche and European Posthumanisms issue of The Journal of Evolution and Technology (January-July 2010). As several commentators referred to identical issues, I decided that it would be appropriate not to respond to each of the articles individually, but to focus on the central arguments and to deal with the counterarguments mentioned in the various replies. I am concerned with each topic in a separate section. The sections are entitled as follows: 1. Technology and evolution; 2. Overcoming nihilism; 3. Politics and liberalism; 4. Utilitarianism or virtue ethics?; 5. The good life; 6. Creativity and the will to power; 7. Immortality and longevity; 8. Logocentrism; 9. The Third Reich. When dealing with the various topics, I am not merely responding to counterarguments; I also raise questions concerning transhumanism and put forward my own views concerning some of the questions I am dealing with.

I am very grateful for the provocative replies to my article "Nietzsche, the Overhuman, and Transhumanism" (2009) that were published in the recent Nietzsche and European Posthumanisms issue of The Journal of Evolution and Technology (January-July 2010). In the following nine sections, I will address the most relevant arguments that have been put forward against some of the points I was raising. As several commentators referred to identical issues, I decided that it would be appropriate not to respond to each of the articles individually, but to focus on the central arguments and to deal with the counterarguments mentioned in the various replies. I will be concerned with each topic in a separate section. The sections will be entitled as follows: 1. Technology and evolution; 2. Overcoming nihilism; 3. Politics and liberalism; 4. Utilitarianism or virtue ethics?; 5. The good life; 6. Creativity and the will to power; 7. Immortality and longevity; 8. Logocentrism; 9. The Third Reich.

1. Technology and evolution

One of the central issues that many commentators discussed was the appropriate understanding of who the overhuman is and how he can come about. In the final paragraphs of his article, Hauskeller attacks the idea that Nietzsche's overhuman is to be understood in an evolutionary sense (2010, 7). However, I can confidently claim that he is wrong in this respect. Let me list the most important reasons for this. First, Nietzsche saw human beings as the link between animals and overhumans (KSA, Za, 4, 16). How is this to be understood, if not in the evolutionary sense? Second, Nietzsche valued Darwin immensely. Nietzsche readers frequently point out that Nietzsche was very critical of Darwin, and conclude from this that he did not hold a theory of evolution. But the inference is false, as is their understanding of Nietzsche's evaluation of Darwin. It is true that Nietzsche's remarks concerning Darwin were critical. However, he criticized him for a specific reason: not for putting forward a theory of evolution, but for putting forward a theory of evolution based on the assumption that the fundamental goal of human beings is their struggle for survival (KSA, GD, 6, 120). According to Nietzsche, the world is will to power, and hence the fundamental goal of human beings is power, too (KSA, GD, 6, 120). Why, one might wonder, if Nietzsche was so close to Darwin, did he have to be so critical of him? Nietzsche stresses explicitly that he distances himself most vehemently from those to whom he feels closest. In order to give a clear shape to his philosophy, he deals most carefully and intensely with those who are closest to his way of thinking, which is the reason why he constantly argues against Socrates (KSA, NF, 8, 97). The same applies to all those thinkers, such as Darwin, with whom he shares many basic insights. Hence, Nietzsche is not arguing with Darwin over the plausibility of the theory of evolution, but over the appropriate understanding of the theory and the fundamental theory of action that underlies it. Third, a simple way of showing that Nietzsche did hold a theory of evolution is by referring not only to the writings he published himself, but also to those of his writings that were published by others later on. Here one finds several clear attempts at developing a theory of evolution (KSA, NF, 13, 316-317).

Fourth, many of the commentators are correct in stressing that Nietzsche regarded education as the primary means for realizing the overhuman and the evolutionary changes that would enable the overhuman to come into existence. However, Nietzsche also talks about breeding in some passages of his notebooks. In my recent monograph on the concept of human dignity (2010, 226-232), I described in detail how the evolutionary process towards the overhuman is supposed to occur from Nietzsche's perspective. In short, Nietzsche regards it as possible to achieve by means of education. In this process, the more active human beings become stronger and turn into higher human beings, such that the gap between active and passive human beings widens. Eventually, the group of the active and that of the passive human beings can come to stand for two types of human beings which represent the outer limits of what the human type can be, or of what can be understood as belonging to the human species. If such a state is reached, then an evolutionary step towards a new species can occur and the overhuman can come into existence. Many transhumanists, by contrast, focus on various means of enhancement, in particular genetic enhancement, for such an event to occur. In both cases, the goal is to move from natural selection towards a type of human selection, even though the expression "human selection" sounds strange, particularly, perhaps, to many contemporary Germans. Yet I do not think that human selection must be a morally dubious procedure. If the selection is a liberal one, i.e., a type of selection undertaken within a liberal and democratic society, many problematic aspects vanish.

Even though transhumanist thinkers and Nietzsche appear to differ over the primary means of bringing about an evolutionary change, I think the appearance is deceptive. Classical education and genetic enhancement strike me as structurally analogous procedures, and in the following section I will offer some reasons for holding this position.

1.1 Technology

Quite a few commentators have pointed out that Nietzsche regarded education as the main means of bringing about the overhuman, whereas transhumanists focus on technological means of altering human beings to realize the posthuman. Blackford explicitly stresses this in the editorial of the Nietzsche and European Posthumanisms issue: "It is unclear what Nietzsche would make of such a technologically-mediated form of evolution in human psychology, capacities, and (perhaps) morphology" (2010, ii). Certainly, this is a correct estimation. Max More is also right when he stresses the following: "From both the individual and the species perspective, the concept of self-overcoming resonates strongly with extropic transhumanist ideals and goals. Although Nietzsche had little to say about technology as a means of self-overcoming, neither did he rule it out" (2010, 2). Stambler, on the other hand, goes much further and declares confidently: "in addition [...] his denial of scientific knowledge and disregard of technology [...] are elements that make it difficult to accept him as an ideological forerunner of transhumanism" (2010, 19). Stambler supports his doubts about Nietzsche's status as a forerunner of transhumanism by stressing the point in a further passage: "Nietzsche too placed a much greater stock in literary theory than in science and technology" (2010, 22).

I can understand Blackford and More, who doubt whether Nietzsche would have been affirmative of technological means of enhancing human beings. However, Stambler's remarks concerning Nietzsche are rather dubious given the current state of the art in Nietzsche scholarship. Stambler writes that Nietzsche denies scientific knowledge. However, it needs to be stressed that Nietzsche rejected the possibility of gaining knowledge of the world, as that is understood within a correspondence theory of truth, by any method, whether the sciences, the arts, philosophy or any other means of enquiry, since he held that each perspective is already an interpretation. It is false to infer from this that Nietzsche disrespected science. On the contrary, he was well aware that the future would be governed by the scientific spirit (Sorgner 2007, 140-158). As he found it implausible to hold that there is an absolute criterion of truth, what was important for a worldview to be regarded as superior and plausible was that it corresponded to the spirit of the times. Nietzsche himself put forward theories that he regarded as appealing to scientifically minded people so that his worldview might become plausible.

Indeed, Nietzsche's respect for the various sciences is immense. He upholds a theory of evolution which is based upon a naturalistic worldview that can be summarized by the term "will to power" (Sorgner 2007, 40-65). In addition, he puts forward the eternal recurrence of everything, which he tries to prove intellectually by reference to the scientific insights of his day. Unfortunately, he fails to put forward a valid argument, even though it would have been possible for him to have had one. Elsewhere, I have reconstructed a possible argument and shown that the premises which must be true for the eternal recurrence to occur are consistent with contemporary scientific insights (Sorgner 2009b, vol. 2, 919-922). In addition to all this, Nietzsche wanted to move to Paris to study the natural sciences in order to be able to prove the validity of the eternal recurrence (Andreas-Salome 1994, 172). Thus his high estimation of the sciences becomes clear. This does not mean, of course, that he disrespects the literary arts. However, it shows that he does not regard scientific enquiry and literary theory as two antagonistic approaches to philosophy, as Stambler claims. Nietzsche accepts the value of both approaches and stresses the great importance of scientific approaches for the future, and he is right in doing so. In this regard, his approach is very similar to that put forward by Kuhn in The Structure of Scientific Revolutions (1962).

Is it possible to infer from Nietzsches high estimation of the sciences that he would have been in favor of enhancement procedures by means of technology? Not necessarily. However, there are good reasons for holding that the procedures of classical education and genetic enhancement are structurally analogous. Given that Nietzsche was in favor of education to bring about the overhuman, and assuming that classical education and genetic enhancement are structurally analogous procedures, there are good reasons for concluding that Nietzsche would have been affirmative of technological means for bringing about the overhuman. I am currently working on a monograph on the relationship between genetic enhancement and classical education, and in the following sections I will summarize some of its important points.

1.1.1 Education and enhancement as structurally analogous procedures

Habermas (2001, 91) has criticized the position that educational and genetic enhancements are parallel events, a position held by Robertson (1994, 167). I, on the other hand, wish to show that there is a structural analogy between educational and genetic enhancement such that their moral evaluation ought also to be analogous (Habermas 2001, 87). Both procedures have in common that decisions are made by parents concerning the development of their child, at a stage where the child cannot yet decide for itself what it should do. In the case of genetic enhancement, we are faced with the choice between genetic roulette and genetic enhancement. In the case of educational enhancement, we face the options of a Kaspar Hauser lifestyle versus parental guidance. First, I will address two fundamental, but related, claims that Habermas puts forward against the parallel between genetic and educative enhancement: that genetic enhancement is irreversible, and that educative enhancement is reversible. Afterwards, I will add a further insight concerning the potential of education and enhancement for evolution, given the latest findings of epigenetic research.

1.1.1.1 Irreversibility of genetic enhancement

According to Habermas, one claim against the parallel between genetic and educative enhancement is that genetic enhancement is irreversible. However, as recent research has shown, this claim is implausible, if not plain false.

Let us consider the lesbian couple discussed by Agar (2004, 12-14), who were both deaf and who chose a deaf sperm donor in order to have a deaf child. Actually, the child can hear a bit in one ear, but this is unimportant for my current purpose. According to the couple, deafness is not a defect, but merely represents being different. The couple was able to realize their wish and in this way managed to have a mostly deaf child. If germ-line gene therapy worked, then they could have had a non-deaf donor, changed the appropriate genes, and still brought about a deaf child. However, given that the deafness in question is one of the inner ear, it would then be possible for the person in question to go to a doctor later on and ask for surgery in which he receives an implant that enables him to hear. It is already possible to perform such an operation with such an implant.

Of course, it can be argued in such a case that the genotype was not reversed, but merely the phenotype. This is correct. However, the example also shows that qualities which come about due to a genetic setting are not necessarily irreversible. They can be changed by such means as surgery. Deaf people can sometimes undergo a surgical procedure so they can hear again, depending on the type of deafness they have and when the surgery takes place.

One could object that the consequences of educational enhancement can be reversed autonomously, whereas in the case of genetic alterations one needs a surgeon, or other external help, to bring about a reversal. This objection is also mistaken, as I will show later: it is not true that all consequences of educational enhancement can be reversed. In addition, one can reply that by means of somatic gene therapy it is even possible to change the genetic setup of a person. One of the most striking examples in this context is siRNA therapy, by means of which genes can be silenced. In the following paragraph, I give a brief summary of what siRNA therapy has achieved so far.

In 2002, the journal Science referred to RNAi as the "Technology of the Year," and McCaffrey et al. published a paper in Nature in which they showed that siRNA functions in mice and rats (2002, 38-9). That siRNAs can be used therapeutically in animals was demonstrated by Song et al. in 2003. By means of this type of therapy (RNA interference targeting Fas), mice can be protected from fulminant hepatitis (Song et al. 2003, 347-51). A year later, it was shown that genes can be silenced at the transcriptional level by means of siRNA (Morris 2004, 1289-1292). Due to the enormous potential of siRNA, Andrew Fire and Craig Mello were awarded the 2006 Nobel Prize in Physiology or Medicine for discovering the RNAi mechanism.

Given the empirical data concerning siRNA, it is plausible to claim that the following process is theoretically possible, and hence that genetic states do not have to be fixed: 1. An embryo with brown eyes is selected by means of preimplantation genetic diagnosis (PGD); 2. The adult does not like his eye color; 3. Accordingly, he asks physicians to provide him with siRNA therapy to change the gene related to his eye color; 4. The altered genes bring it about that the eye color changes. Another option would be available if germ-line gene therapy became effective. In that case, we could change a gene using germ-line gene therapy to bring about a quality x. Imagine that the quality x is disapproved of by the later adult. Hence, he decides to undergo siRNA therapy to silence the altered gene again. Such a procedure is theoretically possible.

However, we do not have to use fictional examples to show that alterations brought about by genetic enhancement are reversible; we may simply look at the latest developments in gene therapy. A 23-year-old British male, Robert Johnson, suffered from Leber's congenital amaurosis, an inherited blinding disease. Early in 2007, he underwent surgery at Moorfields Eye Hospital and University College London's Institute of Ophthalmology. This represented the world's first gene therapy trial for an inherited retinal disease. In April 2008, The New England Journal of Medicine published the results of this operation, which revealed its success, as the patient had obtained a modest increase in vision with no apparent side-effects (Maguire et al. 2008, 2240-2248).

In this case, it was a therapeutic use of genetic modification. As genes can be altered for therapeutic purposes, they can also be altered for non-therapeutic ends (assuming one wishes to uphold the problematic distinction between therapeutic and non-therapeutic ends). The examples mentioned here clearly show that qualities brought about by means of genetic enhancement do not have to be irreversible. However, the parallels between genetic and educative enhancement go even further.

1.1.1.2 Reversibility of educative enhancement

According to Habermas, character traits brought about by educative means are reversible. Because of this crucial assumption, he rejects the proposition that educative and genetic enhancement are parallel processes. Aristotle disagrees, and he is right in doing so. According to Aristotle, a hexis, a basic stable attitude, gets established by means of repetition (EN 1103a). You become brave if you continually act in a brave manner. By playing a guitar, you turn into a guitar player. By acting with moderation, you become moderate. Aristotle makes clear that by repeating a certain type of action, you establish the type in your character; you form a basic stable attitude, a hexis. In the Categories, he makes clear that the hexis is extremely stable (Cat. 8, 8b27-35). In the Nicomachean Ethics, he goes even further and claims that once one has established a basic stable attitude, it is impossible to get rid of it again (EN III 7, 1114a19-21). Buddensiek (2002, 190) has correctly interpreted this passage as claiming that once a hexis, a basic stable attitude, has been formed or established, it is an irreversible part of the person's character.

Aristotle's position gets support from Freud, who made the following claim: "It follows from what I have said that the neuroses can be completely prevented but are completely incurable" (cited in Malcolm 1984, 24). According to Freud, anxiety neuroses (Angstneurosen) were a particularly striking example (Rabelhofer 2006, 38). Much time has passed since Freud, and much research has taken place. However, recent publications concerning psychiatric and psychotherapeutic findings still make clear that psychological diseases can be incurable (Beese 2004, 20). Psychological disorders are not intentionally brought about by educative means, but much empirical research has been done on illnesses and their origin in early childhood. Since irreversible psychological disorders can come about from events or actions in childhood, it is clear that other irreversible effects can come about through deliberate educative measures.

Medical research has shown, and most physicians agree, that post-traumatic stress disorder can not only become chronic but also lead to a permanent personality disturbance (Rentrop et al. 2009, 373). Such disorders come about because of exceptional events that represent an enormous burden and change within someone's life. Obsessional neuroses are another such case. According to the latest numbers, only 10 to 15 percent of patients get cured, and in most cases the neurosis turns into a chronic disease (Rentrop et al. 2009, 368). Another disturbance one could refer to is the borderline syndrome, a type of personality disorder. It can be related to events or actions in early childhood, such as violence or child abuse. In most cases, this is a chronic disease (Rentrop et al. 2009, 459).

Given the examples mentioned, it is clear that actions and events during ones lifetime can produce permanent and irreversible states. In the above cases, it is disadvantageous to the person in question. In the case of an Aristotelian hexis, however, it is an advantage for the person in question if he or she establishes a virtue in this manner.

To provide further intuitive support for the position that qualities established by educational enhancement can be irreversible, one can simply think about learning to ride a bike, tie one's shoelaces, play the piano or speak one's mother tongue. Children get educated for years and years to master these tasks. Even when one moves to a different country, or does not ride a bike for many years, it is difficult, if not impossible, to completely eliminate the acquired skill. Hence, it is very plausible that educative enhancement can have irreversible consequences, and that Habermas is doubly wrong: genetic enhancement can have consequences that are reversible, and educative enhancement can have consequences that are irreversible. Given these insights, the parallel between genetic and educative enhancement gains additional support.

1.1.1.3 Education, enhancement and evolution

Can education bring about changes that have an influence on the potential offspring of the person who gets educated? As inheritance depends upon genes, and genes do not get altered by means of education, it has seemed that education cannot be relevant for the process of evolution. Hence, Lamarckism, the heritability of acquired characteristics, has not been very fashionable for some time. However, in recent decades doubts have been raised concerning this position, based on research on epigenetics. Together with Jablonka and Lamb (2005, 248), I can stress that the study of epigenetics and epigenetic inheritance systems (EISs) is young and hard evidence is sparse, but there are some very telling indications that it may be very important.

Besides the genetic code, the epigenetic code, too, is relevant for creating phenotypes, and it can be altered by environmental influences. Epigenetic inheritance systems are one of three supragenetic inheritance systems that Jablonka and Lamb distinguish. These authors also stress that through the supragenetic inheritance systems, complex organisms can pass on some acquired characteristics, so Lamarckian evolution is certainly possible for them (Jablonka and Lamb 2005, 107).

Given recent work in this field, it is likely that stress, education, drugs, medicine or diet can bring about epigenetic alterations that, in turn, can be responsible for an alteration of cell structures (Jablonka and Lamb 2005, 121) and the activation or silencing of genes (2005, 117). In some cases, the possibility cannot be excluded that such alterations might lead to evolutionary change. Jablonka and Lamb stress the following:

The point is that epigenetic variants exist, and are known to show typical Mendelian patterns of inheritance. They therefore need to be studied. If there is heredity in the epigenetic dimension, then there is evolution, too. (2005, 359)

They also point out that the transfer of epigenetic information from one generation to the next has been found, and that in theory it can lead to evolutionary change (2005, 153). Their reason for holding this position is partly that new epigenetic marks might be induced in both somatic and germ-line cells (2005, 145).

According to Jablonka and Lamb (2005, 144), a mother's diet can also bring about such alterations; hence diet has the same potential as the factors stated before. The same applies to the means of bringing about a posthuman: it is possible that the posthuman can come about by means of educational as well as genetic enhancement procedures.

1.1.1.4 Nietzsche and Technology

Given the above analysis, I conclude that Habermas is wrong concerning fundamental issues when he denies that educational and genetic enhancements are parallel events. Even if the parallel between educational and genetic enhancement is accepted, however, it does not solve the elementary challenges connected to it, such as questions concerning the appropriate good that motivates efforts at enhancement.

Even though I am unable to discuss that issue further here, this analysis provides me with a reason to think that Nietzsche would have been in favor of technological means for bringing about the overhuman. Nietzsche held that the overhuman comes into existence primarily by means of educational procedures. I have shown that the procedures of education and genetic enhancement are structurally analogous. Hence, it seems plausible to hold that Nietzsche would also have been positive about technological means for realizing the overhuman.

2. Overcoming nihilism

The next topic I wish to address is that of nihilism. More mentioned it, and I think that some further remarks should be added to what he said. I think that More is right in pointing out that Nietzsche stresses the necessity of overcoming nihilism: Nietzsche is in favor of "a move towards a positive (but continually evolving) value-perspective" (2010, 2). More agrees with Nietzsche in this respect, and holds that nihilism has to be overcome. However, before talking critically about nihilism one has to distinguish its various forms. It is important not to mix up aletheic and ethical nihilism, because different dangers are related to each of the concepts. Aletheic nihilism stands for the view that it is currently impossible to obtain knowledge of the world, as that is understood in a correspondence theory of truth. Ethical nihilism, on the other hand, represents the judgment that universal ethical guidelines that apply to a certain culture are currently absent. To move beyond ethical nihilism does not imply that one reestablishes ethical principles with an ultimate foundation; it merely means that ethical guidelines which apply universally within a community get reestablished (Sorgner 2010, 134-135). Nietzsche's perspectivism, according to which every perspective is an interpretation, implies his affirmation of aletheic nihilism (Sorgner 2010, 113-117). I think Nietzsche's position is correct in this respect. Ethical nihilism, on the other hand, can imply that the basis of human acts is a hedonistic calculation, and Nietzsche is very critical of hedonism (KSA, JGB, 5, 160). He definitely favored going beyond ethical nihilism, but I doubt that his vision concerning the beyond is an appealing one. In general, I find it highly problematic to go beyond ethical nihilism, because of the potentially paternalistic structures that must accompany such a move. I will make some further remarks concerning this point in the next section. From my remarks here, it becomes clear that there are good reasons for affirming both types of nihilism, in contrast to Nietzsche, who hopes that it will be possible to go beyond the currently dominant ethical nihilism which he sees embodied in the "last man" whom he characterizes so clearly in Thus Spoke Zarathustra.

Coming back to aletheic nihilism, I wish to stress that, like Nietzsche, I regard this type of nihilism as a valuable achievement, and I regard it as the only epistemic position that I can truthfully affirm. Why is it valuable? Aletheic nihilism helps to prevent the emergence of violent and paternalistic structures. Religious fundamentalists claim that homosexual marriages ought to be forbidden because they are "unnatural." What the concept "unnatural" implies is that the correspondence theory represents a correct insight into the true nature of the world. Political defenders of a concept of nature act like a good father who wishes to institutionalize his insight to stop others from committing evil acts. The concept "natural" implies the epistemic superiority of the judgment to which it applies. Aletheic nihilism, on the other hand, implies that any judgment and all concepts of the natural are based on personal prejudices and that each represents a specific perspective, not necessarily anything more. Religious fundamentalists commit an act of violence by claiming that x is an unnatural act, which then implies that those (a, b, and c) who commit act x do some evil, and thereby these fundamentalists look down upon a, b, and c, who suffer from being humiliated. If we realize that all judgments are interpretations based upon personal prejudices, it is easier to refrain from universalizing one's own values and norms and to accept that other human beings uphold different values and norms. Hence, which norms get established in a political system becomes a matter of negotiation and of struggle between various interest groups. If we affirm aletheic nihilism, no norm is a priori false or true, and the argument that a value is evil or false cannot get further support by means of reference to God or nature. Instead, one needs to appeal to more pragmatic and this-worldly considerations, such as the consequences of a rule or the attitude of someone who commits the corresponding acts. I regard these lines of argument as valuable and appropriate for our times, and I am not claiming that there is just one pragmatic way of arriving at an appropriate decision.

3. Politics and liberalism

Given the argument of the previous section, it is not surprising that I was slightly worried when I read that Roden affirms the move away from bio-political organizations such as liberalism or capitalism (2010, 34). I wonder what the alternative is, because I think we have done pretty well recently in Western industrial countries with liberal and social versions of democracy. This is not to say that nothing can be improved or criticized, but generally speaking I am very happy living in a Western liberal democracy with a well-developed health system and constant technological innovations that help us improve our lives, as long as we do not let ourselves get dominated by these developments. Most other types of political organization so far have led to paternalistic systems in which the leaders exploited the citizens in the name of the common good. Any system that does not sufficiently stress the norm of negative freedom brings about structures which are strongly paternalistic. I do not think that social liberal democracies are the final answer to all questions or that they are metaphysically superior to other types of political organization, but pragmatically they seem to work pretty well. In addition, I am afraid of the violence and cruelties related to political structures that are based upon stronger notions of the public good.

An apparent difference between transhumanism and Nietzsche's philosophy in this respect is pointed out by Hauskeller, who stresses that transhumanists aim at making the world a better place, whereas Nietzsche does not, because he supposedly holds that there is no truly better or worse, and so does not aim at bettering humanity (2010, 5). There is some truth in what Hauskeller says. However, Nietzsche did have a political vision, even though he also claimed to be a non-political thinker. I think that his political vision, which I described in detail in my recent monograph (2010, 218-32), is not very appealing, because it leads to a two-class society in which a small class of people can dedicate themselves to the creation of culture, while the rest of humanity has to care for the pragmatic background so that this small group of artists can sustain such a lifestyle. This is Nietzsche's suggestion of how ethical nihilism ought to be transcended.

Given this vision, it seems that there is a clear difference between Nietzsches view and that of transhumanists. However, I do not think that this is necessarily the case. The danger of a two-class society also applies to many visions of transhumanists, especially if an overly libertarian version gets adopted. Transhumanism can lead to a genetic divide and a two-class society, as has been shown convincingly in the Gattaca argument. In particular, a solely libertarian type of transhumanism implies the danger of a genetic divide that would not be too different from Nietzsches vision.

Again, I agree with More's judgment that the goals of transhumanists and Nietzsche do not have to imply any kind of illiberal social or political system (2010, 4). However, in the case of Nietzsche it is more plausible to interpret his political vision as not a very appealing one, because it leads towards a two-class society. This danger can also arise from an overly libertarian type of transhumanism.

James Hughes (2004) has put forward some plausible arguments for why a social democratic version of transhumanism might be more appropriate. I have some reservations about both social-democratic and libertarian positions, even though I share many basic premises of both. I share Hughes's fear that a libertarian type of transhumanism leads to a genetic divide. However, I also fear that a social democratic version of transhumanism might not sufficiently consider the wonderful norm of negative freedom, for which several interest groups have been fighting since the Enlightenment, and from the results of whose struggles we benefit today. I regard a dialectic solution as more plausible; this implies that there is no ideal political system which can serve as the final goal towards which all systems ought to strive. Any system brings about challenges that cannot be solved within the system, but they can be resolved by altering the system. As this insight applies both to libertarian and social democratic systems, a pragmatic pendulum between those extremes might be the best we can achieve, which also implies that we constantly have to adapt ourselves dynamically to the new demands of social institutions and scientific developments. Dynamic adaptation works best in the process of evolution and might be the best we can achieve on a cultural level, which includes our political systems, too. Hence, not sticking dogmatically to one's former evaluations might not be a sign of weakness, but of dynamic integrity (Birx 2006).

See the rest here:

Beyond Humanism: Reflections on Trans- and Posthumanism

Posted in Posthumanism | Comments Off on Beyond Humanism: Reflections on Trans- and Posthumanism

Nietzsche, Nihilism, Nihilists, & Nihilistic Philosophy

Posted: at 8:18 pm

By Austin Cline

There is a common misconception that the German philosopher Friedrich Nietzsche was a nihilist. You can find this assertion in both popular and academic literature, yet as widespread as it is, it isn't really an accurate portrayal of his work. Nietzsche wrote a great deal about nihilism, it is true, but that was because he was concerned about the effects of nihilism on society and culture, not because he advocated nihilism.

Even that, though, is perhaps a bit too simplistic. The question of whether Nietzsche really advocated nihilism or not is largely dependent upon the context: Nietzsche's philosophy is a moving target because he had so many different things to say on so many different subjects, and not all of what he wrote is perfectly consistent with everything else.

Nietzsche could be categorized as a nihilist in the descriptive sense that he believed that there was no longer any real substance to traditional social, political, moral, and religious values.

He denied that those values had any objective validity or that they imposed any binding obligations upon us. Indeed, he even argued that they could at times have negative consequences for us.

We could also categorize Nietzsche as a nihilist in the descriptive sense that he saw that many people in society around him were effectively nihilists themselves. Many, if not most, probably wouldn't admit to it, but Nietzsche saw that the old values and old morality simply didn't have the same power that they once did. It is here that he announced the "death of God," arguing that the traditional source of ultimate and transcendental value, God, no longer mattered in modern culture and was effectively dead to us.

Describing nihilism isn't the same as advocating nihilism, so is there any sense in which Nietzsche did the latter? As a matter of fact, he could be described as a nihilist in a normative sense because he regarded the "death of God" as being ultimately a good thing for society. As mentioned above, Nietzsche believed that traditional moral values, and in particular those stemming from traditional Christianity, were ultimately harmful to humanity. Thus, the removal of their primary support should lead to their downfall and that could only be a good thing.

It is here, however, that Nietzsche parts company from nihilism. Nihilists look at the death of God and conclude that, without any perfect source of absolute, universal, and transcendent values, then there can be no real values at all. Nietzsche, however, argues that the lack of such absolute values does not imply the absence of any values at all.

On the contrary, by freeing himself from the chains tying him to a single perspective normally attributed to God, Nietzsche is able to give a fair hearing to the values of many different and even mutually exclusive perspectives. In so doing, he can conclude that these values are "true" and appropriate to those perspectives, even if they may be inappropriate and invalid to other perspectives. Indeed, the great "sin" of both Christian values and Enlightenment values is, at least for Nietzsche, the attempt to pretend that they are universal and absolute rather than situated in some particular set of historical and philosophical circumstances.

Nietzsche can actually be quite critical of nihilism, although that is not always recognized. In The Will to Power we can find the following comment: "Nihilism is not only the belief that everything deserves to perish; but one actually puts one's shoulder to the plough; one destroys." It is true that Nietzsche put his shoulder to the plough of his philosophy, tearing through many cherished assumptions and beliefs.

Once again, though, he parts company with nihilists in that he did not argue that everything deserves to be destroyed. He was not simply interested in tearing down traditional beliefs based upon traditional values; instead, he also wanted to help build new values. He pointed in the direction of a "superman" who might be able to construct his own set of values independent of what anyone else thought.

Nietzsche was certainly the first philosopher to study nihilism extensively and to try to take its implications seriously, yet that doesn't mean that he was a nihilist in the sense that most people mean by the label. He may have taken nihilism seriously, but only as part of an effort to provide an alternative to the Void that it offered.

Original post:

Nietzsche, Nihilism, Nihilists, & Nihilistic Philosophy

Posted in Nihilism | Comments Off on Nietzsche, Nihilism, Nihilists, & Nihilistic Philosophy

THE WAR ON DRUGS, EXPLAINED – Vox

Posted: at 12:44 am

Card 1 of 17

In the 1970s, President Richard Nixon formally launched the war on drugs to eradicate illicit drug use in the US. "If we cannot destroy the drug menace in America, then it will surely in time destroy us," Nixon told Congress in 1971. "I am not prepared to accept this alternative."

Over the next couple of decades, particularly under the Reagan administration, what followed was the escalation of global military and police efforts against drugs. But in that process, the drug war produced unintended consequences that have spread violence around the world and contributed to mass incarceration in the US, even if it has made drugs less accessible and reduced potential levels of drug abuse.

Nixon inaugurated the war on drugs at a time when America was in hysterics over widespread drug use. Drug use had become more public and prevalent during the 1960s due in part to the counterculture movement, and many Americans felt that drug use had become a serious threat to the country and its moral standing.

Over the past four decades, the US has committed more than $1 trillion to the war on drugs. But the crackdown has in some ways failed to produce the desired results: Drug use remains a very serious problem in the US, even though the drug war has made these substances less accessible. The drug war also led to several unintended negative consequences, including a big strain on America's criminal justice system and the proliferation of drug-related violence around the world.

While Nixon began the modern war on drugs, America has a long history of trying to control the use of certain drugs. Laws passed in the early 20th century attempted to restrict drug production and sales. Some of this history is racially tinged, and, perhaps as a result, the war on drugs has long hit minority communities the hardest.

In response to the failures and unintended consequences, many drug policy experts and historians have called for reforms: a larger focus on rehabilitation, the decriminalization of currently illicit substances, and even the legalization of all drugs.

The question with these policies, as with the drug war more broadly, is whether the risks and costs are worth the benefits. Drug policy is often described as choosing between a bunch of bad or mediocre options, rather than finding the perfect solution. In the case of the war on drugs, the question is whether the very real drawbacks of prohibition (more racially skewed arrests, drug-related violence around the world, and financial costs) are worth the potential gains from outlawing and hopefully depressing drug abuse in the US.

Card 2 of 17

The goal of the war on drugs is to reduce drug use. The specific aim is to destroy and inhibit the international drug trade, making drugs scarcer and costlier, and therefore making drug habits in the US unaffordable. And although some of the data shows drugs getting cheaper, drug policy experts generally believe that the drug war is nonetheless preventing some drug abuse by making the substances less accessible.

The prices of most drugs, as tracked by the Office of National Drug Control Policy, have plummeted. Between 1981 and 2007, the median bulk price of heroin fell by roughly 93 percent, and the median bulk price of powder cocaine fell by about 87 percent. Between 1986 and 2007, the median bulk price of crack cocaine fell by around 54 percent. The prices of meth and marijuana, meanwhile, have remained largely stable since the 1980s.
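To put those percentages in perspective, a decline can be read as a price ratio: a 93 percent drop means the later price is only 7 percent of the original, so the drug became roughly 14 times cheaper. Here is a minimal sketch of that arithmetic in Python; the percentages are the ONDCP figures cited above, and the function name is just illustrative:

def remaining_price_fraction(percent_decline: float) -> float:
    """Fraction of the original price left after a given percent decline."""
    return 1.0 - percent_decline / 100.0

# Median bulk price declines cited above (ONDCP figures).
declines = [
    ("heroin, 1981-2007", 93.0),
    ("powder cocaine, 1981-2007", 87.0),
    ("crack cocaine, 1986-2007", 54.0),
]

for drug, decline in declines:
    frac = remaining_price_fraction(decline)
    print(f"{drug}: {frac:.2f} of the original price, "
          f"roughly {1 / frac:.0f}x cheaper")

Run as written, this prints that heroin fell to 0.07 of its original price (about 14x cheaper), powder cocaine to 0.13 (about 8x), and crack cocaine to 0.46 (about 2x).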

Much of this is explained by what's known as the balloon effect: Cracking down on drugs in one area doesn't necessarily reduce the overall supply of drugs. Instead, drug production and trafficking shift elsewhere, because the drug trade is so lucrative that someone will always want to take it up, particularly in countries where the drug trade might be one of the only economic opportunities and governments won't be strong enough to suppress it.

The balloon effect has been documented in multiple instances, including Peru and Bolivia to Colombia in the 1990s, the Netherlands Antilles to West Africa in the early 2000s, and Colombia and Mexico to El Salvador, Honduras, and Guatemala in the 2000s and 2010s.

Sometimes the drug war has failed to push down production altogether, as in Afghanistan. The US spent $7.6 billion between 2002 and 2014 to crack down on opium in Afghanistan, where the bulk of the world's heroin supply comes from. Despite the efforts, Afghanistan's opium poppy cultivation reached record levels in 2013.

On the demand side, illicit drug use has fluctuated dramatically since the drug war began. The Monitoring the Future survey, which tracks illicit drug use among high school students, offers a useful proxy: In 1975, four years after President Richard Nixon launched the war on drugs, 30.7 percent of high school seniors reportedly used drugs in the previous month. In 1992, the rate was 14.4 percent. In 2013, it was back up to 25.5 percent.

Still, prohibition does likely make drugs less accessible than they would be if they were legal. A 2014 study by Jon Caulkins, a drug policy expert at Carnegie Mellon University, suggested that prohibition multiplies the price of hard drugs like cocaine by as much as 10 times. And illicit drugs obviously aren't available through easy means: one can't just walk into a CVS and buy heroin. So the drug war is likely stopping some drug use: Caulkins estimates that legalization could lead hard drug abuse to triple, although he told me it could go much higher.

But there's also evidence that the drug war is too punitive: A 2014 study from Peter Reuter at the University of Maryland and Harold Pollack at the University of Chicago found there's no good evidence that tougher punishments or harsher supply-elimination efforts do a better job of pushing down access to drugs and substance abuse than lighter penalties. So increasing the severity of the punishment doesn't do much, if anything, to slow the flow of drugs.

Instead, most of the reduction in accessibility from the drug war appears to be a result of the simple fact that drugs are illegal, which by itself makes drugs more expensive and less accessible by eliminating avenues toward mass production and distribution.

The question is whether the possible reduction of potential drug use is worth the drawbacks that come in other areas, including a strained criminal justice system and the global proliferation of violence fueled by illegal drug markets. If the drug war has failed to significantly reduce drug use, production, and trafficking, then perhaps it's not worth these costs, and a new approach is preferable.

Card 3 of 17

The US uses what's called the drug scheduling system. Under the Controlled Substances Act, there are five categories of controlled substances known as schedules, which weigh a drug's medical value and abuse potential.

Medical value is typically evaluated through scientific research, particularly large-scale clinical trials similar to those used by the Food and Drug Administration for pharmaceuticals. Potential for abuse isn't clearly defined by the Controlled Substances Act, but for the federal government, abuse is when individuals take a substance on their own initiative, leading to personal health hazards or dangers to society as a whole.

Under this system, Schedule 1 drugs are considered to have no medical value and a high potential for abuse. Schedule 2 drugs have high potential for abuse but some medical value. As the rank goes down to Schedule 5, a drug's potential for abuse generally decreases.

It may be helpful to think of the scheduling system as made up of two distinct groups: nonmedical and medical. The nonmedical group is the Schedule 1 drugs, which are considered to have no medical value and high potential for abuse. The medical group is the Schedule 2 to 5 drugs, which have some medical value and are numerically ranked based on abuse potential (from high to low).

Marijuana and heroin are Schedule 1 drugs, so the federal government says they have no medical value and a high potential for abuse. Cocaine, meth, and opioid painkillers are Schedule 2 drugs, so they're considered to have some medical value and high potential for abuse. Steroids and testosterone products are Schedule 3, Xanax and Valium are Schedule 4, and cough preparations with limited amounts of codeine are Schedule 5. Congress specifically exempted alcohol and tobacco from the schedules in 1970.
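The two-group model described above maps naturally onto a small lookup table. Here is a minimal sketch in Python, using only the drug-to-schedule examples named in this card; the data structures and function names are my own illustration, not any official API:

from dataclasses import dataclass

@dataclass
class Schedule:
    number: int           # 1 (most restricted) to 5 (least restricted)
    medical_value: bool   # Schedule 1 drugs: no accepted medical value
    abuse_potential: str  # roughly high -> low as the number rises

SCHEDULES = {
    1: Schedule(1, False, "high"),
    2: Schedule(2, True, "high"),
    3: Schedule(3, True, "moderate"),
    4: Schedule(4, True, "low"),
    5: Schedule(5, True, "lowest"),
}

# Examples named in this card; alcohol and tobacco were exempted by
# Congress in 1970 and so appear in no schedule at all.
DRUG_SCHEDULE = {
    "marijuana": 1, "heroin": 1,
    "cocaine": 2, "meth": 2, "opioid painkillers": 2,
    "steroids": 3, "testosterone products": 3,
    "Xanax": 4, "Valium": 4,
    "codeine cough preparations": 5,
}

def group(drug: str) -> str:
    """Return 'nonmedical' for Schedule 1 drugs, 'medical' for Schedules 2-5."""
    schedule = SCHEDULES[DRUG_SCHEDULE[drug]]
    return "medical" if schedule.medical_value else "nonmedical"

print(group("heroin"))  # -> nonmedical
print(group("Valium"))  # -> medical

As the next paragraph notes, though, the schedule is not the final word on criminal penalties: Congress and state governments can and do set penalties that depart from this ranking.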

Although these schedules help shape criminal penalties for illicit drug possession and sales, they're not always the final word. Congress, for instance, massively increased penalties against crack cocaine in 1986 in response to concerns about a crack epidemic and its potential link to crime. And state governments can set up their own criminal penalties and schedules for drugs as well.

Other countries, like the UK and Australia, use similar systems to the US, although their specific rankings for some drugs differ.

Card 4 of 17

The US fights the war on drugs both domestically and overseas.

On the domestic front, the federal government supplies local and state police departments with funds, legal flexibility, and special equipment to crack down on illicit drugs. Local and state police then use this funding to go after drug dealing organizations.

"[Federal] assistance helped us take out major drug organizations, and we took out a number of them in Baltimore," said Neill Franklin, a retired police major and executive director of Law Enforcement Against Prohibition, which opposes the war on drugs. "But to do that, we took out the low-hanging fruit to work up the chain to find who was at the top of the pyramid. It started with low-level drug dealers, working our way up to midlevel management, all the way up to the kingpins."

Some of the funding, particularly from the Byrne Justice Assistance Grant program, encourages local and state police to participate in anti-drug operations. If police don't use the money to go after illicit substances, they risk losing it, providing a financial incentive for cops to continue the war on drugs.

Although the focus is on criminal groups, casual users still get caught in the criminal justice system. Between 1999 and 2007, Human Rights Watch found at least 80 percent of drug-related arrests were for possession, not sales.

It seems, however, that arrests for possession don't typically turn into convictions and prison time. According to federal statistics, only 5.3 percent of drug offenders in federal prisons and 27.9 percent of drug offenders in state prisons in 2004 were in for drug possession. The overwhelming majority were in for trafficking, and a small few were in for an unspecified "other" category.

(Photo: Mexican officials incinerate 130 tons of seized marijuana.)

Internationally, the US regularly aids other countries in their efforts to crack down on drugs. For example, the US in the 2000s provided military aid and training to Colombia in what's known as Plan Colombia to help the Latin American country go after criminal organizations and paramilitaries funded through drug trafficking.

Federal officials argue that helping countries like Colombia attacks the source of illicit drugs, since such substances are often produced in Latin America and shipped north to the US. But the international efforts have consistently displaced drug trafficking, and the violence that comes with it, to other countries rather than eliminating it.

Given the struggles of the war on drugs to meet its goals, federal and state officials have begun moving away from harsh enforcement tactics and tough-on-crime stances. The White House Office of National Drug Control Policy now advocates for a bigger focus on rehabilitation and less on law enforcement. Even some conservatives, like former Texas Governor Rick Perry, have embraced drug courts, which place drug offenders into rehabilitation programs instead of jail or prison.

The idea behind these reforms is to strike a better balance between locking up people for drug trafficking and moving genuinely problematic drug users into rehabilitation and treatment services that could help them. "We can't arrest our way out of the problem," Michael Botticelli, US drug czar, said, "and we really need to focus our attention on proven public health strategies to make a significant difference as it relates to drug use and consequences to that in the United States."

Card 5 of 17

The escalation of the criminal justice system's reach over the past few decades, ranging from more incarceration to seizures of private property and militarization, can be traced back to the war on drugs.

After the US stepped up the drug war throughout the 1970s and '80s, harsher sentences for drug offenses played a role in turning the country into the world's leader in incarceration. (But drug offenders still make up a small part of the prison population: About 54 percent of people in state prisons, which house more than 86 percent of the US prison population, were violent offenders in 2012, and 16 percent were drug offenders, according to the Bureau of Justice Statistics.)

Still, mass incarceration has massively strained the criminal justice system and led to a lot of overcrowding in US prisons, to the point that some states, such as California, have rolled back penalties for nonviolent drug users and sellers with the explicit goal of reducing their incarcerated population.

In terms of police powers, civil asset forfeitures have been justified as a way to go after drug dealing organizations. These forfeitures allow law enforcement agencies to take the organizations' assets, cash in particular, and then use the gains to fund more anti-drug operations. The idea is to turn drug dealers' ill-gotten gains against them.

But there have been many documented cases in which police abused civil asset forfeiture, including instances in which police took people's cars and cash simply because they suspected, but couldn't prove, that some sort of illegal activity was going on. In these cases, it's actually up to the people whose property was taken to prove they weren't doing anything illegal, a reversal of traditional legal standards in which police must establish wrongdoing, or reasonable suspicion of it, before they act.

Similarly, the federal government helped militarize local and state police departments in an attempt to better equip them in the fight against drugs. The Pentagon's 1033 program, which gives surplus military-grade equipment to police, was created in the 1990s as part of President George H.W. Bush's escalation of the war on drugs. The deployment of SWAT teams, as reported by the ACLU, also increased during the past few decades, and 62 percent of SWAT raids in 2011 and 2012 were for drug searches.

Various groups have complained that these increases in police power are often abused and misused. The ACLU, for instance, argues that civil asset forfeitures threaten Americans' civil liberties and property rights, because police can often seize assets without even filing charges. Such seizures also might encourage police to focus on drug crimes, since a raid can result in actual cash that goes back to the police department, while a violent crime conviction likely would not. The libertarian Cato Institute has also criticized the war on drugs for decades, because anti-drug efforts gave cover to a huge expansion of law enforcement's surveillance capabilities, including wiretaps and US mail searches.

The militarization of police became a particular sticking point during the 2014 protests in Ferguson, Missouri, over the police shooting of Michael Brown. After heavily armed police responded to largely peaceful protesters with armored vehicles that resemble tanks, tear gas, and sound cannons, law enforcement experts and journalists criticized the tactics.

Since the beginning of the war on drugs, the general trend has been to massively grow police powers and expand the criminal justice system as a means of combating drug use. But as the drug war struggles to halt drug use and trafficking, the heavy-handed policies, which many describe as draconian, have been called into question. If the war on drugs isn't meeting its goals, critics say these expansions of the criminal justice system aren't worth the financial strain and costs to liberty in the US.

Card 6 of 17

The war on drugs has created a black market for illicit drugs that criminal organizations around the world can rely on for revenue that bankrolls other, more violent activities. This market supplies so much revenue that drug trafficking organizations can actually rival developing countries' weak government institutions.

In Mexico, for example, drug cartels have leveraged their profits from the drug trade to violently maintain their stranglehold over the market despite the government's war on drugs. As a result, public decapitations have become a particularly prominent tactic of ruthless drug cartels. As many as 80,000 people have died in the war. Tens of thousands of people have gone missing since 2007, including 43 students who vanished in 2014 in a widely publicized case.

But even if Mexico were to actually defeat the drug cartels, this wouldn't necessarily reduce drug war violence on a global scale. Instead, drug production and trafficking, and the violence that comes with both, would likely shift elsewhere, because the drug trade is so lucrative that someone will always want to take it up, particularly in countries where the drug trade might be one of the only economic opportunities and governments aren't strong enough to suppress it.

In 2014, for instance, the drug war significantly contributed to the child migrant crisis. After some drug trafficking was pushed out of Mexico, gangs and drug cartels stepped up their operations in Central America's Northern Triangle of El Salvador, Honduras, and Guatemala. These countries, with their weak criminal justice and law enforcement systems, didn't seem to have the capacity to deal with the influx of violence and crime.

The war on drugs "drove a lot of the activities to Central America, a region that has extremely weakened systems," Adriana Beltran of the Washington Office on Latin Americaexplained. "Unfortunately, there hasn't been a strong commitment to building the criminal justice system and the police."

As a result, children fled their countries by the thousands in a major humanitarian crisis. Many of these children ended up in the US, where the refugee system simply doesn't have the capacity to handle the rush of child migrants.

Although the child migrant crisis is fairly unique in its specific circumstances and effects, the series of events (a government cracks down on drugs, trafficking moves to another country, and the drug trade brings violence and crime) is pretty typical in the history of the war on drugs. In the past couple of decades, it happened in Colombia, Mexico, Venezuela, and Ecuador after successful anti-drug crackdowns in other Latin American countries.

The Wall Street Journal explained:

Ironically, the shift is partly a by-product of a drug-war success story, Plan Colombia. In a little over a decade, the U.S. spent nearly $8 billion to back Colombia's efforts to eradicate coca fields, arrest traffickers and battle drug-funded guerrilla armies such as the Revolutionary Armed Forces of Colombia, or FARC. Colombian cocaine production declined, the murder rate plunged and the FARC is on the run.

But traffickers adjusted. Cartels moved south across the Ecuadorean border to set up new storage facilities and pioneer new smuggling routes from Ecuador's Pacific coast. Colombia's neighbor to the east, Venezuela, is now the departure point for half of the cocaine going to Europe by sea.

As a 2012 report from the UN Office on Drugs and Crime explained, "one country's success became the problem of others."

This global proliferation of violence is one of the most prominent costs of the drug war. When evaluating whether the war on drugs has been successful, experts and historians weigh this cost, along with the rise of incarceration in the US, against the benefits, such as potentially depressed drug use, to gauge whether anti-drug efforts have been worth it.

Card 7 of 17

Enforcing the war on drugs costs the US more than $51 billion each year, according to the Drug Policy Alliance. As of 2012, the US had spent $1 trillion on anti-drug efforts.

The spending estimates don't account for the loss of potential taxes on currently illegal substances. According to a 2010 paper from the libertarian Cato Institute, taxing and regulating illicit drugs similarly to tobacco and alcohol could raise $46.7 billion in tax revenue each year.

These annual costs, the spending and the lost potential taxes, add up to nearly 2 percent of state and federal budgets, which totaled an estimated $6.1 trillion in 2013 ($51 billion plus $46.7 billion comes to roughly $98 billion a year). That's not a huge amount of money, but it may not be worth the cost if the war on drugs is leading to drug-related violence around the world and isn't significantly reducing drug abuse.

Card 8 of 17

In the US, the war on drugs mostly impacts minority, particularly black, communities. This disproportionate effect is why critics often call the war on drugs racist.

Although black communities aren't more likely to use or sell drugs, they are much more likely to be arrested and incarcerated for drug offenses.

When black defendants are convicted for drug crimes, they face longer prison sentences as well. Drug sentences for black men were 13.1 percent longer than drug sentences for white men between 2007 and 2009, according to a 2012 report from the US Sentencing Commission.

The Sentencing Project explained the differences in a February 2015 report: "Myriad criminal justice policies that appear to be race-neutral collide with broader socioeconomic patterns to create a disparate racial impact. ... Socioeconomic inequality does lead people of color to disproportionately use and sell drugs outdoors, where they are more readily apprehended by police."

One example: Trafficking crack cocaine, one of the few illicit drugs that's more popular among black Americans, carries the harshest punishment. The threshold for a five-year mandatory minimum sentence for crack is 28 grams. In comparison, the threshold for powder cocaine, which is more popular among white than black Americans but pharmacologically similar to crack, is 500 grams.

As for the broader racial disparities, federal programs that encourage local and state police departments to crack down on drugs may create perverse incentives to go after minority communities. Some federal grants, for instance, previously required police to make more drug arrests in order to obtain more funding for anti-drug efforts. Neill Franklin, a retired police major from Maryland and executive director of Law Enforcement Against Prohibition, said minority communities are "the low-hanging fruit" for police departments because drug sales there tend to happen in open-air markets, such as public street corners, and the communities have less political and financial power than white Americans.

In Chicago, for instance, an analysis by Project Know, a drug addiction resource center, found enforcement of anti-drug laws is concentrated in poor neighborhoods, which tend to have more crime but are predominantly black.

"Doing these evening and afternoon sweeps meant 20 to 30 arrests, and now you have some great numbers for your grant application," Franklin said. "In that process, we also ended up seizing a lot of money and a lot of property. That's another cash cow."

The disproportionate arrest and incarceration rates have clearly detrimental effects on minority communities. A 2014 study published in the journal Sociological Science found boys with imprisoned fathers are much less likely to possess the behavioral skills needed to succeed in school by the age of 5, starting them on a vicious path known as the school-to-prison pipeline.

As the drug war continues, these racial disparities have become one of the major points of criticism against it. It's not just whether the war on drugs has led to the widespread, costly incarceration of millions of Americans, but whether incarceration has created "the new Jim Crow," a reference to policies, such as segregation and voting restrictions, that subjugated black communities in America.

Card 9 of 17

Beyond the goal of curtailing drug use, the motivations behind the US war on drugs have been rooted in historical fears of immigrants and minority groups.

The US began regulating and restricting drugs during the first half of the 20th century, particularly through the Pure Food and Drug Act of 1906, the Harrison Narcotics Tax Act of 1914, and the Marijuana Tax Act of 1937. During this period, racial and ethnic tensions were particularly high across the country, directed not just toward African Americans but toward Mexican and Chinese immigrants as well.

As the New York Times explained, the federal prohibition of marijuana came during a period of national hysteria about the effect of the drug on Mexican immigrants and black communities. Concerns about a new, exotic drug, coupled with feelings of xenophobia and racism that were all too common in the 1930s, drove law enforcement, the broader public, and eventually legislators to demand the drug's prohibition. "Police in Texas border towns demonized the plant in racial terms as the drug of 'immoral' populations who were promptly labeled 'fiends,'" wrote the Times's Brent Staples.

These beliefs extended to practically all forms of drug prohibition. According to historian Peter Knight, opium largely came over to America with Chinese immigrants on the West Coast. Americans, already skeptical of the drug, quickly latched on to xenophobic beliefs that opium somehow made Chinese immigrants dangerous. "Stories of Chinese immigrants who lured white females into prostitution, along with the media depictions of the Chinese as depraved and unclean, bolstered the enactment of anti-opium laws in eleven states between 1877 and 1900," Knight wrote.

Cocaine was similarly attached in fear to black communities, neuroscientist Carl Hart wrote for the Nation. The belief was so widespread that the New York Times even felt comfortable writing headlines in 1914 that claimed "Negro cocaine 'fiends' are a new southern menace." The author of the Times piece, a physician, wrote, "[The cocaine user] imagines that he hears people taunting and abusing him, and this often incites homicidal attacks upon innocent and unsuspecting victims." He later added, "Many of the wholesale killings in the South may be cited as indicating that accuracy in shooting is not interfered with - is, indeed, probably improved - by cocaine. I believe the record of the 'cocaine n----r' near Asheville who dropped five men dead in their tracks using only one cartridge for each, offers evidence that is sufficiently convincing."

Most recently, these fears of drugs and the connection to minorities came up during what law enforcement officials characterized as a crack cocaine epidemic in the 1980s and '90s. Lawmakers, judges, and police in particular linked crack to violence in minority communities. The connection was part of the rationale for making it 100 times easier to get a mandatory minimum sentence for crack cocaine over powder cocaine, even though the two drugs are pharmacologically identical. As a result, minority groups have received considerably harsher prison sentences for illegal drugs. (In 2010, the ratio between crack's sentence and cocaine's was reduced from 100-to-1 to 18-to-1.)

Hart explained, after noting the New York Times's coverage in particular: "Over the [late 1980s], a barrage of similar articles connected crack and its associated problems with black people. Entire specialty police units were deployed to 'troubled neighborhoods,' making excessive arrests and subjecting the targeted communities to dehumanizing treatment. Along the way, complex economic and social forces were reduced to criminal justice problems; resources were directed toward law enforcement rather than neighborhoods' real needs, such as job creation."

Follow this link:

THE WAR ON DRUGS EXPLAINED - Vox
