Ethereum Price Forecast: Chasm Widens Between Price and Reality

Ethereum News Analysis
Two seemingly contradictory things are happening in cryptocurrency markets: 1) Valuations are shrinking, and 2) Blockchain adoption is growing.

What exactly is going on?

Well, the first thing to remember is you’re not in a nightmare. Ethereum prices are truly trading below $400.00, scary as that might seem. Second, it’s possible for markets to be wrong in the short term. And third, it’s possible for markets to be wrong more than once in a row.

Let me explain…

Some people claim that cryptocurrencies went through a bubble in late 2017. “It’s TulipMania all over again! It’s the South Sea Bubble!”

Genetic Engineering Is the New Nuke – TV Tropes

"Biotechnology promises the greatest revolution in human history. By the end of this decade, it will have outdistanced atomic power and computers in its effect on our everyday lives."Once upon a time, superheroes inevitably gained their superpowers from radiation, the latest and most mysterious-yet-powerful fad of the 50s and 60s.Technology Marches On, however, and gene splicing has replaced atom smashing as the most glamorous sciencey stuff: nowadays, many modern remakes of classic superheroes go with Genetic Engineering. Be it a bite from a genetically engineered spider, or exposure to it in a freak accident, genetically engineered origins are the Phlebotinum for the 21st century. It is worth noting that in Real Life rarely are the effects of genetic engineering anything like those portrayed in speculative fiction yet.Genetic Engineering also lends itself to being weaponized to do exactly the same thing as those ultracool nukes that kill people but leave buildings standing. Now that a nuclear apocalypse is substantially less likely (or at least less likely to wipe us all out), and chemical weapons just aren't destructive enough in terms of human life, biological weapons make a nice scary (and vague) alternative.May lead to Bio-Augmentation and Mutants. Superpowerful Genetics may either come from this, or have a hand on the engineering overall.It's also interesting to note the other favourite sources of weirdness used by SF writers before the advent of nuclear physics.

Web Comics

Dr. Lambha: "God damn you idiots in the media! I'm doing research on spider genetics, and you infer that I'm going to cure fatness or turn people into spidermen! Do you understand nothing about science?"

Real Life

Genetic engineering will eliminate some of the most horrific things that can happen to anyone, ever, and make everyone better at everything as a mere side effect. Anyone campaigning against genetic engineering is saying, "I was lucky enough not to get cystic fibrosis, Tay-Sachs, or any one of a hundred other unthinkable horrors, and that's 100 percent of the humans I care about! Yay!"

MSNBC In Cover-Up Of Manifestly Provable Population …

Paul Joseph Watson
Prison Planet.com
Wednesday, June 16, 2010

As part of his obsessive drive to smear anti-big government activists as insanely paranoid and dangerous radicals, Chris Matthews and his guest, establishment neo-lib David Corn, previewed tonight's Rise of the New Right hit piece by claiming that the elite's agenda to enact dictatorial population control measures was a conspiracy theory.

As we have documented on numerous occasions, while Matthews points fingers at his political adversaries for preparing to engage in violence, the only real violence we're witnessing out on the streets is being committed by Obama supporters, MSNBC thugs and other leftists who refuse to tolerate free speech that counters their propaganda.

However, MSNBC's goal is not just to demonize the Tea Party and anti-big government activists as dangerous radicals, as an avenue through which to sic the police state on them and crush their free speech; they're also desperate to prevent Americans from lending any credence to what people like Alex Jones have to say, by acting as gatekeepers to prevent such information from becoming mainstream.

A perfect example of an issue that Matthews and his ilk want to sideline is the manifestly provable fact that elitists have for decades publicly stated their desire to reduce global population by around 80 per cent and as much as 99 per cent.

During MSNBC's Hardball show on Tuesday, Corn characterized the notion that there is "a planetary elite that literally has a secret plan to kill 80 to 99 percent of the population" as a "conspiracy theory."

Corn's role in covering up the depopulation agenda is unsurprising given his habitual tactic of trying to discredit anyone who exposes government criminality and corruption. One critic labeled Corn as someone who serves "as a Neo-Con-lite version of someone who dismisses those who have investigated the crimes of the U.S. government," in reference to how he tried to undermine the work of the late Gary Webb, an award-winning investigative journalist who exposed the CIA's involvement in the drug trade.

Despite Corn's claims to the contrary, the global elite have been forthright, public, and unashamedly enthusiastic about their open intention to cull at least 80 per cent of humanity in the name of saving the planet.

There are still large numbers of people, amongst the general public, in academia, and especially among those who work for the corporate media, who are in denial about the on-the-record stated agenda for global population reduction, as well as the consequences of this program that we already see unfolding.

We have compiled a compendium of evidence to prove that the elite have been obsessed with eugenics and its modern-day incarnation, population control, for well over 100 years, and that the goal of global population reduction is still in full force to this day.

The World's Elite Are Discussing Population Reduction

During a recent conference held by TED, an organization which is sponsored by one of the largest toxic waste polluters on the planet, Gates told the audience that vaccines need to be used to reduce world population figures in order to solve global warming and lower CO2 emissions to almost zero.

Stating that the global population was heading towards 9 billion, Gates said, "If we do a really great job on new vaccines, health care, reproductive health services (abortion), we could lower that by perhaps 10 or 15 per cent."

Quite how an improvement in health care and vaccines that supposedly save lives would lead to a lowering of the global population is a mystery, unless Gates is referring to vaccines that sterilize people, which is precisely the same method advocated in White House science advisor John P. Holdren's 1977 textbook Ecoscience, which calls for a dictatorial "planetary regime" to enforce draconian measures of population reduction via all manner of oppressive techniques, including sterilization.

Gates' eugenicist zeal is shared by his fellow Bilderberg elitists, many of whom have advocated draconian policies of population control in their own public speeches and writings. Indeed, the Rockefeller family funded eugenics research in Germany through the Kaiser Wilhelm Institutes in Berlin and Munich. The Rockefeller Foundation praised Hitler's sterilization program in Nazi Germany. David Rockefeller attended the first Bilderberg meeting in 1954 and is now the head of Bilderberg's steering committee.

A joint World Health Organization-Rockefeller inoculation program against tetanus in Nicaragua, Mexico and the Philippines in the early 1990s was in fact a covert trial on using vaccines to medically abort women's babies.

"Comité Pro Vida de México, a Roman Catholic lay organization, became suspicious of the motives behind the WHO program and decided to test numerous vials of the vaccine and found them to contain human Chorionic Gonadotrophin, or hCG," writes historian F. William Engdahl in his article "Bill Gates And Neo-Eugenics: Vaccines To Reduce Population." "That was a curious component for a vaccine designed to protect people against lock-jaw arising from infection with rusty nail wounds or other contact with certain bacteria found in soil. The tetanus disease was, indeed, also rather rare. It was also curious because hCG was a natural hormone needed to maintain a pregnancy. However, when combined with a tetanus toxoid carrier, it stimulated formation of antibodies against hCG, rendering a woman incapable of maintaining a pregnancy, a form of concealed abortion. Similar reports of vaccines laced with hCG hormones came from the Philippines and Nicaragua."

Gates recently announced that he would be funding a sterilization program that would use sharp blasts of ultrasound directed against a man's scrotum to render him infertile for six months. The foundation has funded a new "sweat-triggered vaccine delivery" program based on nanoparticles penetrating human skin. The technology is described as a way to develop "nanoparticles that penetrate the skin through hair follicles and burst upon contact with human sweat to release vaccines," writes health researcher Mike Adams.

As was reported last year by the London Times, a secret billionaire club meeting in early May 2009, which took place in New York and was attended by David Rockefeller, Ted Turner, Bill Gates and others, was focused around how their wealth could be used to slow the growth of the world's population.

We questioned establishment media spin which portrayed the attendees as kind-hearted and concerned philanthropists by pointing out that Ted Turner has publicly advocated shocking population reduction programs that would cull the human population by a staggering 95%. He has also called for a Communist-style one child policy to be mandated by governments in the west. In China, the one child policy is enforced by means of taxes on each subsequent child, allied to an intimidation program which includes secret police and family planning authorities kidnapping pregnant women from their homes and performing forced abortions.

Of course, Turner completely fails to follow his own rules on how everyone else should live their lives, having five children and owning no less than 2 million acres of land.

In the third world, Turner has contributed literally billions to population reduction, namely through United Nations programs, leading the way for the likes of Bill & Melinda Gates and Warren Buffett (Gates' father has long been a leading board member of Planned Parenthood and a top eugenicist).

The notion that these elitists merely want to slow population growth in order to improve health is completely misleading. Slowing the growth of the world's population while also improving its health are two irreconcilable concepts to the elite. Stabilizing world population is a natural byproduct of higher living standards, as has been proven by the stabilization of the white population in the west. Elitists like David Rockefeller have no interest in slowing the growth of world population by natural methods; their agenda is firmly rooted in the pseudo-science of eugenics, which is all about culling the surplus population via draconian methods.

David Rockefeller's legacy is not derived from a well-meaning philanthropic urge to improve health in third world countries; it is born out of a Malthusian drive to eliminate the poor and those deemed racially inferior, using the justification of social Darwinism.

As is documented in Alex Jones' seminal film Endgame, Rockefeller's father, John D. Rockefeller, exported eugenics to Germany from its origins in Britain by bankrolling the Kaiser Wilhelm Institute, which later would form a central pillar in the Third Reich's ideology of the Nazi super race. After the fall of the Nazis, top German eugenicists were protected by the allies as the victorious parties fought over who would enjoy their expertise in the post-war world.

As Dr. Len Horowitz writes, In the 1950s, the Rockefellers reorganized the U.S. eugenics movement in their own family offices, with spinoff population-control and abortion groups. The Eugenics Society changed its name to the Society for the Study of Social Biology, its current name.

The Rockefeller Foundation had long financed the eugenics movement in England, apparently repaying Britain for the fact that British capital and an Englishman-partner had started old John D. Rockefeller out in his Oil Trust. In the 1960s, the Eugenics Society of England adopted what they called Crypto-eugenics, stating in their official reports that they would do eugenics through means and instruments not labeled as eugenics.

With support from the Rockefellers, the Eugenics Society (England) set up a sub-committee called the International Planned Parenthood Federation, which for 12 years had no other address than the Eugenics Society. This, then, is the private, international apparatus which has set the world up for a global holocaust, under the UN flag.

In the latter half of the 20th century, eugenics merely changed its face to become known as population control. This was crystallized in National Security Study Memorandum 200, a 1974 geopolitical strategy document prepared by Rockefeller's intimate friend and fellow Bilderberg member Henry Kissinger, which targeted thirteen countries for massive population reduction by means of creating food scarcity, sterilization and war.

Henry Kissinger: In the now declassified 1974 document, National Security Study Memorandum 200, Kissinger outlines the plan to use food scarcity as a weapon in order to achieve population reduction in lesser-developed countries.

The document, declassified in 1989, identified 13 countries that were of special interest to U.S. geopolitical objectives and outlined why population growth, and particularly that of young people who were seen as a revolutionary threat to U.S. corporations, was a potential roadblock to achieving these objectives. The countries named were India, Bangladesh, Pakistan, Nigeria, Mexico, Indonesia, Brazil, the Philippines, Thailand, Egypt, Turkey, Ethiopia and Colombia.

The study outlined how civil disturbances affecting the smooth flow of needed materials would be less likely to occur under conditions of slow or zero population growth.

"Development of a worldwide political and popular commitment to population stabilization is fundamental to any effective strategy. This requires the support and commitment of key LDC leaders. This will only take place if they clearly see the negative impact of unrestricted population growth and believe it is possible to deal with this question through governmental action," states the document.

The document called for integrating family planning (otherwise known as abortion) with routine health services for the purposes of curbing the numbers of people in LDCs (lesser-developed countries).

The report shockingly outlines how withholding food could be used as a means of punishment for lesser-developed countries who do not act to reduce their population, essentially using food as a weapon for a political agenda by creating mass starvation in under-developed countries.

"The allocation of scarce PL480 (food) resources should take account of what steps a country is taking in population control as well as food production," states the document.

Later in the document, the idea of enforcing mandatory programs by using food as an instrument of national power is presented.

The document states that the program will be administered through the United Nations Fund for Population Activities (UNFPA), thereby avoiding the danger that some LDC leaders will see developed-country pressures for family planning as a form of economic or racial imperialism; this could well create a serious backlash.

As Jean Guilfoyle writes, NSSM 200 was a statement composed after the fact. During the late 1960s and early 1970s, the U.S. had worked diligently behind the scenes to advance the population-control agenda at the United Nations, contributing the initial funding of $1 million.

A Department of State telegram, dated July 1969, reported the support of John D. Rockefeller III, among others, for the appointment of Rafael Salas of the Philippines as senior officer to co-ordinate and administer the UN population program. The administrator of the UN Development Program reported confidentially that he preferred someone such as Salas who had the advantage of color, religion (Catholic) and conviction.

A comprehensive outline of what is contained in the National Security Memorandum document can be read at http://www.theinterim.com/july98/20nssm.html

Evidence of the actual consequences of this program can be found in the link between vaccines and sterilization, as well as other diseases such as cancer, in both the west and the third world.

In the following video clips, women of the Akha tribe, who live predominantly in Thailand, describe how they miscarried shortly after receiving vaccines when they were eight months pregnant. The videos below highlight the efforts of supporters of the Akha tribe to get answers from the University of Oregon and the United Nations, who provided funding for the vaccination and sterilization programs.

Further evidence of the link between vaccinations, birth control, cancer and other diseases can be researched here.

In the 21st century, the eugenics movement has changed its stripes once again, manifesting itself through the global carbon tax agenda and the notion that having too many children or enjoying a reasonably high standard of living is destroying the planet through global warming, creating the pretext for further regulation and control over every facet of our lives.

As we have tirelessly documented, the elite's drive for population control is not based around a benign philanthropic urge to improve living standards; it is firmly rooted in eugenics, racial hygiene and fascist thinking.

According to The London Times report, the secret billionaire cabal, with its interest in population reduction, has been dubbed "The Good Club" by insiders. This couldn't be further from the truth. Anyone who takes the time to properly research the origins of the population control movement will come to understand that the Rockefeller-Turner-Gates agenda for drastic population reduction, which is now clearly manifesting itself through real environmental crises like chemtrails, genetically modified food, tainted vaccines and other skyrocketing diseases such as cancer, has its origins in the age-old malevolent elitist agenda to cull the human chattel as one would do to rodents or any other species deemed a nuisance by the central planning authorities.

Sterilization And Eugenics Returns In Popular Culture

We are now seeing the return of last century's eugenicist movement through the popular promotion of sterilization as a method of birth control.

A popular women's magazine in the UK recently featured an article entitled "Young, Single and Sterilized," in which women in their 20s discussed why they had undergone an operation to prevent them from ever having children. The article is little more than PR for a women's charity called Marie Stopes International, an organization that carries out abortions and sterilizations and was founded by a Nazi eugenicist who advocated compulsory sterilization of non-whites and those of "bad character."

In the article, sterilization is lauded as an excellent method of birth control by Dr. Patricia Lohr of the British Pregnancy Advisory Service.

The article includes an advertisement that encourages women to seek more information about sterilization by contacting Marie Stopes International. We read that, "Over the past year, a quarter of the women who booked a sterilization consultation with women's charity Marie Stopes were aged 30 or under."

Marie Stopes was a feminist who opened the first birth control clinic in Britain in 1921, as well as being a Nazi sympathizer and a eugenicist who advocated that non-whites and the poor be sterilized.

Stopes, a racist and an anti-Semite, campaigned for selective breeding to achieve racial purity, a passion she shared with Adolf Hitler in adoring letters and poems that she sent the leader of the Third Reich.

Stopes also attended the Nazi congress on population science in Berlin in 1935, while calling for the compulsory sterilization of the diseased, drunkards, or simply those of bad character. Stopes acted on her appalling theories by concentrating her abortion clinics in poor areas so as to reduce the birth rate of the lower classes.

Stopes left most of her estate to the Eugenics Society, an organization that shared her passion for racial purity and still exists today under the new name The Galton Institute. The society has included members such as Charles Galton Darwin (grandson of the evolutionist), Julian Huxley and Margaret Sanger.

Marie Stopes, the Nazi and pioneering eugenicist who sent love letters to Hitler, honored recently by the Royal Mail.

Ominously, The Galton Institute website promotes its support and funding initiative for the practical delivery of family planning facilities, especially in developing countries. In other words, the same organization that once advocated sterilizing black people to achieve racial purity in the same vein as the Nazis is now bankrolling abortions of black babies in the third world.

While the issue of abortion is an entirely different argument, most would agree that no matter how extreme it sounds, a woman has the right to sterilize herself if she so chooses, just as a man has the right to a vasectomy.

But when a magazine aimed primarily at young women all but encourages girls as young as 20 to have their fallopian tubes tied in order to prevent the irritation of children entering their lives and then advertises an organization founded by a Nazi eugenicist that can perform the operation, something has to be amiss.

Even more shocking than this is the fact that the majority of people in the UK routinely express their support for society's undesirables to be forcibly sterilized by the state, harking back to a time when such a thing was commonplace right up to the 1970s in some areas of America and Europe.

As we highlighted at the time, respondents to a Daily Mail article about Royal Mail honoring Marie Stopes by using her image on a commemorative stamp were not disgusted at Royal Mail for paying homage to a racist Nazi eugenicist, but were merely keen to express their full agreement that those deemed not to be of pure genetic stock or of the approved character should be forcibly sterilized and prevented from having children.

"A lot of people should be sterilized, IMO. It's still true today," wrote one.

"Just imagine what a stable, well-ordered society we'd have if compulsory sterilisation had been adopted years ago for the socially undesirable," states another respondent, calling for a "satellite-carried sterilisation ray" to be installed in space to zap the undesirables.

Shockingly, another compares sterilization and genocide of those deemed inferior to the breeding and culling of farmyard animals, and says that such a move is necessary to fight overpopulation and global warming. Here is the comment in full from Karen in Wales:

We breed farm animals to produce the best possible stock and kill them when they have fulfilled their purpose. We inter-breed pedigree animals to produce extremes that leave them open to ill-health and early death. It is only religion that says humans are not animals. The reality is that we are simply intelligent, mammalian primates.

The world population of humans has increased from 2 billion to 6.5 billion in the last 50 years. This planet can support 2 billion humans comfortably. 6.5 billion humans use too many resources and leads to global warming, climate change and a very uncertain future for all of us humans and all other life sharing this planet with us.

Marie Stopes believed in population control and in breeding the best possible humans. So did Hitler. Neither of the aims are bad in themselves. It is how they are achieved that is the problem. The fact that we still remember Marie Stopes is an achievement in itself.

The nature of these comments is so fundamentally sick and twisted that one is tempted to dismiss them as a joke, but these people are deadly serious. Presumably they would also agree with China's one child policy, which is routinely enforced by intimidation as young pregnant women are grabbed off the streets by state goons and taken to hospitals where forced abortions are carried out.

Now, with popular women's magazines advising women in their 20s where they can go to be sterilized and ensure a lifetime of partying and carefree sex, it's no surprise that experts predict that by 2010 one in four western women will be child-free for life.

The yearning to have children is the most beautiful, natural and innate emotion either a man or a woman can possibly experience. That is not to say that it's always wrong for some people not to have children; extreme circumstances can justify such a decision. But to have yourself sterilized because you find children to be an irritant and want to live a life free of responsibility or consequences is an awful message to send to young women, especially in the sex-saturated entertainment culture that we are now forced to endure.

Furthermore, the outright promotion of Marie Stopes International as the place to go to get sterilized if you're under 30 is stomach-churning, considering the fact that the origins of this organization can be found in Nazi ideology, racist and backward early 20th century eugenics, and a long-standing agenda to cull the population of undesirables, an abhorrent belief still held by elites across the planet today.

Genocidal Population Reduction Programs Embraced By Academia

One such individual who embraces the notion that humans are a virus that should be wiped out en masse for the good of mother earth is Dr. Eric R. Pianka, an American biologist based at the University of Texas in Austin.

Dr. Eric Pianka, the American biologist who advocated the mass genocide of 90% of the human race and was applauded by his peers.

During a speech to the Texas Academy of Science in March 2006, Pianka advocated the need to exterminate 90% of the world's population through the airborne ebola virus. The reaction from scores of top scientists and professors in attendance was not one of shock or revulsion; they stood and applauded Pianka's call for mass genocide.

Pianka's speech was ordered to be kept off the record before it began, as cameras were turned away and hundreds of students, scientists and professors sat in attendance.

Saying the public was not ready to hear the information presented, Pianka began by exclaiming, "We're no better than bacteria!" as he jumped into a doomsday Malthusian rant about overpopulation destroying the earth.

Standing in front of a slide of human skulls, Pianka gleefully advocated airborne ebola as his preferred method of exterminating the necessary 90% of humans, choosing it over AIDS because of its faster kill period. Ebola victims suffer the most torturous deaths imaginable, as the virus kills by liquefying the internal organs. The body literally dissolves as the victim writhes in pain, bleeding from every orifice.

Pianka then cited the Peak Oil fraud as another reason to initiate global genocide. "And the fossil fuels are running out," he said, "so I think we may have to cut back to two billion, which would be about one-third as many people."

Later, the scientist welcomed the potential devastation of the avian flu virus and spoke glowingly of China's enforced one child policy, before zestfully commenting, "We need to sterilize everybody on the Earth."

At the end of Pianka's speech the audience erupted, not into a chorus of boos and hisses, but into a raucous reception of applause and cheers, as audience members clambered to get close to the scientist to ask him follow-up questions. Pianka was later presented with a distinguished scientist award by the Academy. Pianka is no crackpot; he has given lectures at prestigious universities worldwide.

Indeed, the notion that the earths population needs to be drastically reduced is a belief shared almost unanimously by academics across the western hemisphere.

In 2002, The Melbourne Age reported on newly uncovered documents detailing Nobel Prize-winning microbiologist Sir Macfarlane Burnet's plan to help the Australian government develop biological weapons for use against Indonesia and other overpopulated countries of South-East Asia.

From the article:

Sir Macfarlane recommended in a secret report in 1947 that biological and chemical weapons should be developed to target food crops and spread infectious diseases. His key advisory role on biological warfare was uncovered by Canberra historian Philip Dorling in the National Archives in 1998.

Specifically to the Australian situation, the most effective counter-offensive to threatened invasion by overpopulated Asiatic countries would be directed towards the destruction by biological or chemical means of tropical food crops and the dissemination of infectious disease capable of spreading in tropical but not under Australian conditions, Sir Macfarlane said.

The Victorian-born immunologist, who headed the Walter and Eliza Hall Institute of Medical Research, won the Nobel prize for medicine in 1960. He died in 1985 but his theories on immunity and clonal selection provided the basis for modern biotechnology and genetic engineering.

Controversy surrounding the comments of another darling of scientific academia, geneticist James Watson, who told a Sunday Times newspaper interviewer that black people are inherently less intelligent than whites, should come as no surprise to those who are aware of Watson's role in pushing the dark pseudo-science of eugenics.

Watson told the interviewer that he was "inherently gloomy about the prospect of Africa" because "all our social policies are based on the fact that their intelligence is the same as ours, whereas all the testing says not really."

Watson was the head of the Human Genome Project until 1992 and is best known for his contribution to the discovery of the structure of DNA, an achievement that won him the Nobel Prize in Physiology or Medicine in 1962.

But what most people are unaware of is the fact that Watson has played an integral role in advancing the legitimacy of the eugenics/population reduction movement for decades.

Watson is a strong proponent of genetic screening, a test to determine whether a couple is at increased risk of having a baby with a hereditary genetic disorder.

Since such screening obviously increases the rate of abortions of babies considered imperfect, many have slammed its introduction as nothing more than a camouflage for eugenics, or "voluntary eugenics," as British philosophy professor Philip Kitcher labeled it.

Jitsi Download – softpedia.com

Jitsi is an application designed to offer you a simple and fun way in which you can keep in touch with the people in your life.

It offers you chat, video and audio communication, all of which are possible through a comprehensive and good-looking graphical interface. It supports protocols such as XMPP (Jabber), SIP, AIM/ICQ, Yahoo, Windows Live and others.

As is characteristic of nearly all IM applications, Jitsi offers you a main window that contains your contact list, from which you can perform various tasks. You can change your status, call a friend or send a file. Everything about the application is straightforward and user-friendly.

Contacts can be placed into custom groups, renamed and relocated at any time. You can edit their info and start a secure chat with them. With Jitsi it's possible to make audio and video calls, perform desktop streaming, make audio conference calls and record them, as well as encrypt all your calls.

It proves itself to be a reliable means of communication for all kinds of environments: home, school and even business.

The level of security that Jitsi offers is one you should not overlook. It provides encrypted password storage, call authentication, call encryption and DNSSEC support.

As far as instant messaging goes, Jitsi offers you a lot of functions from the chat window. You can invite more people to join in, call a certain contact, initiate a video call, send a file, start secure chatting and of course insert various types of emoticons.

In case you are busy or away from the computer, Jitsi provides auto answer and call forwarding to any other accounts that are added to the application.

In closing, if you're looking for an environment that brings together all the major chatting platforms, then Jitsi is worth a try.

Maximum Life Foundation | Reverse Aging by 2033 Biotech …

Researchers Discovered How to Cure Aging in Our Lifetime

Maximum Life Foundation will show you how to add up to 20 healthy years to your life now... will help control aging and aging-related diseases for most individuals, and may position you for an indefinitely youthful lifespan by 2033. Senescence, the destructive process that is responsible for human aging, is a primary cause behind heart disease, cancer, stroke, type II diabetes, Parkinson's, Alzheimer's disease and more. The Foundation has created a network of scientists, physicians, and biotechnology industry professionals to use their talents and resources to develop a strategic plan to understand and neutralize the causes of these disease processes.

Amazing tips & tricks to improve your health & increase longevity starting today.

Maximum Life Foundation is not only extremely informative, but it all makes so much common sense. Just about everything in...

I highly recommend David's book Life Extension Express if you're interested in getting started in extending your life. It's very easy...

Immortality | Internet Encyclopedia of Philosophy

Immortality is the indefinite continuation of a person's existence, even after death. In common parlance, immortality is virtually indistinguishable from afterlife, but philosophically speaking, they are not identical. Afterlife is the continuation of existence after death, regardless of whether or not that continuation is indefinite. Immortality implies a never-ending existence, regardless of whether or not the body dies (as a matter of fact, some hypothetical medical technologies offer the prospect of a bodily immortality, but not an afterlife).

Immortality has been one of mankind's major concerns, and even though it has traditionally been confined mainly to religious traditions, it is also important to philosophy. Although a wide variety of cultures have believed in some sort of immortality, such beliefs may be reduced to basically three non-exclusive models: (1) the survival of the astral body resembling the physical body; (2) the immortality of the immaterial soul (that is, an incorporeal existence); (3) resurrection of the body (or re-embodiment, in case the resurrected person does not keep the same body as at the moment of death). This article examines philosophical arguments for and against the prospect of immortality.

A substantial part of the discussion on immortality touches upon the fundamental question in the philosophy of mind: do souls exist? Dualists believe souls do exist and survive the death of the body; materialists believe mental activity is nothing but cerebral activity and thus death brings the total end of a person's existence. However, some immortalists believe that, even if immortal souls do not exist, immortality may still be achieved through resurrection.

Discussions on immortality are also intimately related to discussions of personal identity because any account of immortality must address how the dead person could be identical to the original person that once lived. Traditionally, philosophers have considered three main criteria for personal identity: the soul criterion, the body criterion and the psychological criterion.

Although empirical science has little to offer here, the field of parapsychology has attempted to offer empirical evidence in favor of an afterlife. More recently, secular futurists envision technologies that may suspend death indefinitely (such as Strategies for Engineered Negligible Senescence, and mind uploading), thus offering a prospect for a sort of bodily immortality.

Discourse on immortality bears a semantic difficulty concerning the word 'death'. We usually define it in physiological terms as the cessation of the biological functions that make life possible. But, if immortality is the continuation of life even after death, a contradiction appears to arise (Rosemberg, 1998). For apparently it makes no sense to say that someone has died and yet survived death. To be immortal is, precisely, not to suffer death. Thus, whoever dies, stops existing; nobody may exist after death, precisely because death means the end of existence.

For convenience, however, we may agree that death simply means the decomposition of the body, but not necessarily the end of a person's existence, as assumed in most dictionary definitions. In such a manner, a person may die in as much as their body no longer exists (or, to be more precise, no longer holds vital signs: pulse, brain activity, and so forth), but may continue to exist, either in an incorporeal state, with an ethereal body, or with some other physical body.

Some people may think of immortality in vague and general terms, such as the continuity of a person's deeds and memories among their friends and relatives. Thus, baseball player Babe Ruth is immortal in a very vague sense: he is well remembered among his fans. But, philosophically speaking, immortality implies the continuation of personal identity. Babe Ruth may be immortal in the sense that he is well remembered, but unless there is someone who may legitimately claim "I am Babe Ruth," we shall presume Babe Ruth no longer exists and, hence, is not immortal.

Despite the immense variety of beliefs on immortality, they may be reduced to three basic models: the survival of the astral body, the immaterial soul and resurrection (Flew, 2000). These models are not necessarily mutually exclusive; in fact, most religions have adhered to a combination of them.

Much primitive religious thought conceives that human beings are made up of two body substances: a physical body, susceptible of being touched, smelt, heard and seen; and an astral body made of some sort of mysterious ethereal substance. Unlike the physical body, the astral body has no solidity (it can go through walls, for example) and hence, it cannot be touched, but it can be seen. Its appearance is similar to the physical body, except perhaps its color tonalities are lighter and its figure is fuzzier.

Upon death, the astral body detaches itself from the physical body, and mourns in some region within time and space. Thus, even if the physical body decomposes, the astral body survives. This is the type of immortality most commonly presented in films and literature (for example, Hamlet's ghost). Traditionally, philosophers and theologians have not privileged this model of immortality, as there appear to be two insurmountable difficulties: 1) if the astral body does exist, it should be seen to depart from the physical body at the moment of death, yet there is no evidence that accounts for it; 2) ghosts usually appear with clothes; this would imply that, not only are there astral bodies, but also astral clothes, a claim simply too extravagant to be taken seriously (Edwards, 1997: 21).

The model of the immortality of the soul is similar to the astral body model, in as much as it considers that human beings are made up of two substances. But, unlike the astral body model, this model conceives that the substance that survives the death of the body is not a body of some other sort, but rather, an immaterial soul. In as much as the soul is immaterial, it has no extension, and thus, it cannot be perceived through the senses. A few philosophers, such as Henry James, have come to believe that for something to exist, it must occupy space (although not necessarily physical space), and hence, souls are located somewhere in space (Henry, 2007). Up until the twentieth century, the majority of philosophers believed that persons are souls, and that human beings are made up of two substances (soul and body). A good portion of philosophers believed that the body is mortal and the soul is immortal. Ever since Descartes in the seventeenth century, most philosophers have considered that the soul is identical to the mind, and, whenever a person dies, their mental contents survive in an incorporeal state.

Eastern religions (for example, Hinduism and Buddhism) and some ancient philosophers (for example, Pythagoras and Plato) believed that immortal souls abandon the body upon death, may exist temporarily in an incorporeal state, and may eventually adhere to a new body at the time of birth (in some traditions, at the time of fertilization). This is the doctrine of reincarnation.

Whereas most Greek philosophers believed that immortality implies solely the survival of the soul, the three great monotheistic religions (Judaism, Christianity and Islam) consider that immortality is achieved through the resurrection of the body at the time of the Final Judgment. The very same bodies that once constituted persons shall rise again, in order to be judged by God. None of these great faiths has a definite position on the existence of an immortal soul. Therefore, traditionally, Jews, Christians and Muslims have believed that, at the time of death, the soul detaches from the body and continues on to exist in an intermediate incorporeal state until the moment of resurrection. Some others, however, believe that there is no intermediate state: with death, the person ceases to exist, and in a sense, resumes existence at the time of resurrection.

As we shall see, some philosophers and theologians have postulated the possibility that, upon resurrection, persons do not rise with the very same bodies with which they once lived (rather, resurrected persons would be constituted by a replica). This version of the doctrine of the resurrection would be better referred to as re-embodiment: the person dies, but, as it were, is re-embodied.

Most religions adhere to the belief in immortality on the basis of faith. In other words, they provide no proof of the survival of the person after the death of the body; actually, their belief in immortality appeals to some sort of divine revelation that, allegedly, does not require rationalization.

Natural theology, however, attempts to provide rational proofs of God's existence. Some philosophers have argued that, if we can rationally prove that God exists, then we may infer that we are immortal. For God, being omnibenevolent, cares about us, and thus would not allow the annihilation of our existence; and being just, would bring about a Final Judgment (Swinburne, 1997). Thus, the traditional arguments in favor of the existence of God (ontological, cosmological, teleological) would indirectly prove our immortality. However, these traditional arguments have been notoriously criticized, and some arguments against the existence of God have also been raised (such as the problem of evil) (Martin, 1992; Smith, 1999).

Nevertheless, some philosophers have indeed tried to rationalize the doctrine of immortality, and have come up with a few pragmatic arguments in its favor.

Blaise Pascal proposed a famous argument in favor of the belief in the existence of God, but it may well be extended to the belief in immortality (Pascal, 2005). The so-called Pascal's Wager argument goes roughly as follows: if we are to decide whether or not to believe that God exists, it is wiser to believe that God does exist. If we rightly believe that God exists, we gain eternal bliss; if God does not exist, we lose nothing, in as much as there is no Final Judgment to account for our error. On the other hand, if we rightly believe God does not exist, we gain nothing, in as much as there is no Final Judgment to reward our belief. But, if we wrongly believe that God does not exist, we lose eternal bliss, and are therefore damned to everlasting Hell. By a calculation of risks and benefits, we should conclude that it is better to believe in God's existence. This argument is easily extensible to the belief in immortality: it is better to believe that there is a life after death, because if in fact there is a life after death, we shall be rewarded for our faith, and yet lose nothing if we are wrong; on the other hand, if we do not believe in a life after death, and we are wrong, we will be punished by God, and if we are right, there will not be a Final Judgment to reward our belief.
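The "calculation of risks and benefits" can be made explicit as an expected-utility comparison. A minimal sketch in decision-matrix form, where p > 0 is any probability assigned to God's existence (the matrix and the symbols are an illustrative reconstruction of the paragraph above, not Pascal's own notation):

\[
\begin{array}{l|cc}
 & \text{God exists } (p > 0) & \text{God does not exist } (1 - p) \\
\hline
\text{Believe} & +\infty \ \text{(eternal bliss)} & 0 \ \text{(we lose nothing)} \\
\text{Do not believe} & -\infty \ \text{(everlasting Hell)} & 0 \ \text{(we gain nothing)} \\
\end{array}
\]

\[
EU(\text{Believe}) = p \cdot (+\infty) + (1 - p) \cdot 0 = +\infty
\;>\;
EU(\text{Do not believe}) = p \cdot (-\infty) + (1 - p) \cdot 0 = -\infty .
\]

On this reading, belief dominates for any nonzero p; the objections in the next paragraph target the assumptions (a single candidate God, a single model of the afterlife, and the ability to choose what we believe) rather than the arithmetic.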

Although this argument has remained popular among some believers, philosophers have identified too many problems in it (Martin, 1992). Pascal's Wager does not take into account the risk of believing in a false god (What if Baal were the real God, instead of the Christian God?), or the risk of believing in the wrong model of immortality (what if God rewarded belief in reincarnation, and punished belief in resurrection?). The argument also assumes that we are able to choose our beliefs, something most philosophers think very doubtful.

Other philosophers have appealed to other pragmatic benefits of the belief in immortality. Immanuel Kant famously rejected in his Critique of Pure Reason the traditional arguments in favor of the existence of God; but in his Critique of Practical Reason he put forth a so-called moral argument. The argument goes roughly as follows: belief in God and immortality is a prerequisite for moral action; if people do not believe there is a Final Judgment administered by God to account for deeds, there will be no motivation to be good. In Kant's opinion, human beings seek happiness. But in order for happiness to coincide with moral action, the belief in an afterlife is necessary, because moral action does not guarantee happiness. Thus, the only way that a person may be moral and yet preserve happiness is by believing that there will be an afterlife justice that will square morality with happiness. Perhaps Kant's argument is more eloquently expressed in Ivan Karamazov's (a character from Dostoevsky's The Brothers Karamazov) famous phrase: "If there is no God, then everything is permitted... if there is no immortality, there is no virtue."

The so-called moral argument has been subject to some criticism. Many philosophers have argued that it is indeed possible to construct a secular ethics, where appeal to God is unnecessary to justify morality. The question "why be moral?" may be answered by appealing to morality itself, to the need for cooperation, or simply to one's own pleasure (Singer, 1995; Martin, 1992). A vigilant God does not seem to be a prime need in order for man to be good. If these philosophers are right, the lack of belief in immortality would not bring about the collapse of morality. Some contemporary philosophers, however, align with Kant and believe that secular morality is shallow, as it does not satisfactorily account for acts of sacrifice that go against self-interest; in their view, the only way to account for such acts is by appealing to a Divine Judge (Mavrodes, 1995).

Yet another pragmatic argument in favor of the belief in immortality appeals to the need to find meaning in life. Perhaps Miguel de Unamuno's Del sentimiento trágico de la vida is the most emblematic philosophical treatise advocating this argument: in Unamuno's opinion, belief in immortality is irrational, but nevertheless necessary to avoid desperation in the face of life's absurdity. Only by believing that our lives will have an everlasting effect do we find motivation to continue to live. If, on the contrary, we believe that everything will ultimately come to an end and nothing will survive, it becomes pointless to carry on any activity.

Of course, not all philosophers would agree. Some philosophers would argue that, on the contrary, the awareness that life is temporal and finite makes living more meaningful, in as much as we better appreciate opportunities (Heidegger, 1978). Bernard Williams has argued that, should life continue indefinitely, it would be terribly boring, and therefore, pointless (Williams, 1976). Some philosophers, however, counter that some activities may be endlessly repeated without ever becoming boring; furthermore, a good God would ensure that we never become bored in Heaven (Fischer, 2009).

Death strikes fear and anguish in many of us, and some philosophers argue that the belief in immortality is a much needed resource to cope with that fear. But, Epicurus famously argued that it is not rational to fear death, for two main reasons: 1) in as much as death is the extinction of consciousness, we are not aware of our condition (if death is, I am not; if I am, death is not); 2) in the same manner that we do not worry about the time that has passed before we were born, we should not worry about the time that will pass after we die (Rist, 1972).

At any rate, pragmatic arguments in favor of the belief in immortality are also critiqued on the grounds that the pragmatic benefits of a belief bear no implications on its truth. In other words, the fact that a belief is beneficial does not make it true. In the analytic tradition, philosophers have long argued for and against the pragmatic theory of truth, and depending on how this theory is valued, it will offer a greater or lesser plausibility to the arguments presented above.

Plato was the first philosopher to argue, not merely in favor of the convenience of accepting the belief in immortality, but for the truth of the belief itself. His Phaedo is a dramatic representation of Socrates' final discussion with his disciples, just before drinking the hemlock. Socrates shows no sign of fear or concern, for he is certain that he will survive the death of his body. He presents three main arguments to support his position, and some of these arguments are still in use today.

First, Socrates appeals to cycles and opposites. He believes that everything has an opposite that is implied by it. And, as in cycles, things not only come from opposites, but also go towards opposites. Thus, when something is hot, it was previously cold; or when we are awake, we were previously asleep; but when we are asleep, we shall be awake once again. In the same manner, life and death are opposites in a cycle. Being alive is opposite to being dead. And, in as much as death comes from life, life must come from death. We come from death, and we go towards death. But, again, in as much as death comes from life, it will also go towards life. Thus, we had a life before being born, and we shall have a life after we die.

Most philosophers have not been persuaded by this argument. It is very doubtful that everything has an opposite (What is the opposite of a computer?) And, even if everything had an opposite, it is doubtful that everything comes from its opposite, or even that everything goes towards its opposite.

Socrates also appeals to the theory of reminiscence, the view that learning is really a process of remembering knowledge from past lives. The soul must already exist before the birth of the body, because we seem to know things that were not available to us. Consider the knowledge of equality. If we compare two sticks and we realize they are not equal, we form a judgment on the basis of a previous knowledge of equality as a form. That knowledge must come from previous lives. Therefore, this is an argument in favor of the transmigration of souls (that is, reincarnation or metempsychosis).

Some philosophers would dispute the existence of the Platonic forms, upon which this argument rests. And, the existence of innate ideas does not require the appeal to previous lives. Perhaps we are hard-wired by our brains to believe certain things; thus, we may know things that were not available to us previously.

Yet another of Socrates' arguments appeals to the affinity between the soul and the forms. In Plato's understanding, forms are perfect, immaterial and eternal. And, in as much as the forms are intelligible, but not sensible, only the soul can apprehend them. In order to apprehend something, the thing apprehending must have the same nature as the thing apprehended. The soul, then, shares the attributes of the forms: it is immaterial and eternal, and hence, immortal.

Again, the existence of the Platonic forms should not be taken for granted, and for this reason, this is not a compelling argument. Furthermore, it is doubtful that the thing apprehending must have the same nature as the thing apprehended: a criminologist need not be a criminal in order to apprehend the nature of crime.

Plato's arguments take for granted that souls exist; he only attempts to prove that they are immortal. But, a major area of discussion in the philosophy of mind is the existence of the soul. One of the doctrines that hold that the soul does exist is called dualism; its name comes from the fact that it postulates that human beings are made up of two substances: body and soul. Arguments in favor of dualism are indirectly arguments in favor of immortality, or at least in favor of the possibility of survival of death. For, if the soul exists, it is an immaterial substance. And, in as much as it is an immaterial substance, it is not subject to the decomposition of material things; hence, it is immortal.

Most dualists agree that the soul is identical to the mind, yet different from the brain or its functions. Some dualists believe the mind may be some sort of emergent property of the brain: it depends on the brain, but it is not identical to the brain or its processes. This position is often labeled property dualism, but here we are concerned with substance dualism, that is, the doctrine that holds that the mind is a separate substance (and not merely a separate property) from the body, and therefore, may survive the death of the body (Swinburne, 1997).

René Descartes is usually considered the father of dualism, as he presents some very ingenious arguments in favor of the existence of the soul as a separate substance (Descartes, 1980). In perhaps his most celebrated argument, Descartes invites a thought experiment: imagine you exist, but not your body. You wake up in the morning, but as you approach the mirror, you do not see yourself there. You try to reach your face with your hand, but it is thin air. You try to scream, but no sound comes out. And so on.

Now, Descartes believes that it is indeed possible to imagine such a scenario. But, if one can imagine the existence of a person without the existence of the body, then persons are not constituted by their bodies, and hence, mind and body are two different substances. If the mind were identical to the body, it would be impossible to imagine the existence of the mind without imagining at the same time the existence of the body.

This argument has been subject to much scrutiny. Dualists certainly believe it is a valid one, but it is not without its critics. Descartes seems to assume that everything that is imaginable is possible. Indeed, many philosophers have long agreed that imagination is a good guide as to what is possible (Hume, 2010). But this criterion is disputed. Imagination seems to be a psychological process, and thus not strictly a logical process. Therefore, perhaps we can imagine scenarios that are not really possible. Consider the Barber Paradox. At first, it seems possible that, in a town, a barber shaves all and only those persons who do not shave themselves. We may perhaps imagine such a situation, but logically there cannot be such a situation, as Bertrand Russell showed. The lesson to be learned is that imagination might not be a good guide to possibility. And, although Descartes appears to have no trouble imagining an incorporeal mind, such a scenario might not be possible. However, dualists may argue that there is no neat difference between a psychological and a logical process, as logic seems to be itself a psychological process.
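Russell's point can be made precise in a line of first-order logic (a standard textbook formalization offered here for illustration, not a quotation from Russell): suppose there were a barber b who shaves all and only those who do not shave themselves. Then

\[
\forall x \, \big( S(b, x) \leftrightarrow \neg S(x, x) \big)
\;\Longrightarrow\;
S(b, b) \leftrightarrow \neg S(b, b),
\]

where S(y, x) reads "y shaves x." Instantiating x with b yields the contradiction on the right, so no such barber can exist, however readily the town seems to be imaginable.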

Descartes presents another argument. As Leibniz would later formalize in the Principle of the Identity of Indiscernibles, two entities can be considered identical if, and only if, they exhaustively share the same attributes. Descartes exploits this principle and attempts to find a property of the mind not shared by the body (or vice versa), in order to argue that they are not identical, and hence, are separate substances.
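In modern notation, the principle Descartes is exploiting is usually written as a biconditional (a standard gloss, not Leibniz's or Descartes' own symbolism):

\[
x = y \;\leftrightarrow\; \forall F \, \big( Fx \leftrightarrow Fy \big).
\]

Descartes' strategy uses the left-to-right direction in contrapositive form: exhibit a property F that holds of the mind but not of the body (indivisibility, lack of extension, indubitability in the arguments below), and conclude that mind and body are not identical. The Masked Man discussion later in this section turns on whether attitudes such as "being doubted by me" count as genuine properties F.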

Descartes states: "There is a great difference between a mind and a body, because the body, by its very nature, is something divisible, whereas the mind is plainly indivisible. . . insofar as I am only a thing that thinks, I cannot distinguish any parts in me. . . . Although the whole mind seems to be united to the whole body, nevertheless, were a foot or an arm or any other bodily part amputated, I know that nothing would be taken away from the mind" (Descartes, 1980: 97).

Descartes believed, then, that mind and body cannot be the same substance. Descartes put forth another similar argument: the body has extension in space, and as such, it can be attributed physical properties. We may ask, for instance, what the weight of a hand is, or what the length of a leg is. But the mind has no extension, and therefore, it has no physical properties. It makes no sense to ask what the color of the desire to eat strawberries is, or what the weight of Communist ideology is. If the body has extension, and the mind has no extension, then the mind can be considered a separate substance.

Yet another of Descartes' arguments appeals to some difference between mind and body. Descartes famously contemplated the possibility that an evil demon might be deceiving him about the world. Perhaps the world is not real. In as much as that possibility exists, Descartes believed that one may doubt the existence of one's own body. But, Descartes argued that one cannot doubt the existence of one's own mind. For, if one doubts, one is thinking; and if one thinks, then it can be taken for certain that one's mind exists. Hence Descartes' famous phrase: cogito ergo sum, "I think, therefore, I exist." Now, if one may doubt the existence of one's body, but cannot doubt the existence of one's mind, then mind and body are different substances. For, again, they do not share exhaustively the same attributes.

These arguments are not without critics. Indeed, Leibniz's Principle of Indiscernibles would lead us to think that, in as much as mind and body do not exhaustively share the same properties, they cannot be the same substance. But, in some contexts, it seems possible for A and B to be identical even though not everything predicated of A can be predicated of B.

Consider, for example, a masked man that robs a bank. If we were to ask a witness whether or not the masked man robbed the bank, the witness would answer "yes!". But, if we were to ask the witness whether his father robbed the bank, he may answer "no". That, however, does not imply that the witness's father is not the bank robber: perhaps the masked man was the witness's father, and the witness was not aware of it. This is the so-called Masked Man Fallacy.

This case forces us to reconsider Leibniz's Law: A is identical to B, not if everything predicated of A is predicated of B, but rather, when A and B share exhaustively the same properties. And, what people believe about a substance is not a property of that substance. To be an object of doubt is not, strictly speaking, a property, but rather, an intentional relation. And, in our case, to be able to doubt the body's existence, but not the mind's existence, does not imply that mind and body are not the same substance.
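
Schematically (a sketch using hypothetical labels: w for the witness, m for the masked man, f for the father), the fallacy is the inference

\[ \mathrm{Bel}_w\big(\mathrm{Robbed}(m)\big) \;\wedge\; \neg \mathrm{Bel}_w\big(\mathrm{Robbed}(f)\big) \;\;\Rightarrow\;\; m \neq f \]

which is invalid, because "being believed by w to have robbed the bank" is not a genuine property of m or f. Leibniz's Law licenses substitution only for properties, not for positions inside intentional (referentially opaque) contexts such as belief or doubt; the same caution applies to Descartes' appeal to what can and cannot be doubted.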

In more recent times, Descartes' strategy has been used by other dualist philosophers to account for the difference between mind and body. Some philosophers argue that the mind is private, whereas the body is not. Any person may know the state of my body, but no person, including even possibly myself, can truly know the state of my mind.

Some philosophers point to intentionality as another difference between mind and body. The mind has intentionality, whereas the body does not. Thoughts are about something, whereas body parts are not. In as much as thoughts have intentionality, they may also have truth values. Not all thoughts, of course, are true or false, but at least those thoughts that purport to represent the world may be. On the other hand, physical states do not have truth values: neurons activating in the brain are neither true nor false.

Again, these arguments exploit the differences between mind and body. But, very much as with Descartes' arguments, it is not absolutely clear that they avoid the Masked Man Fallacy.

Opponents of dualism not only reject its arguments; they also highlight conceptual and empirical problems with this doctrine. Most opponents of dualism are materialists: they believe that mental stuff is really identical to the brain, or, at most, an epiphenomenon of the brain. Materialism limits the prospects for immortality: if the mind is not a separate substance from the brain, then at the time of the brain's death, the mind also becomes extinct, and hence, the person does not survive death. Materialism need not undermine all expectations of immortality (see resurrection below), but it does undermine the immortality of the soul.

The main difficulty with dualism is the so-called interaction problem. If the mind is an immaterial substance, how can it interact with material substances? The desire to move my hand allegedly moves my hand, but how exactly does that occur? There seems to be an inconsistency in the mind's immateriality: some of the time the mind is immaterial and is not affected by material states; at other times it manages to be in contact with the body and to cause its movement. Daniel Dennett has ridiculed this inconsistency by appealing to the comic-strip character Casper. This friendly ghost is immaterial because he is able to go through walls. But, all of a sudden, he is also able to catch a ball. The same inconsistency appears with dualism: sometimes the mind does not interact with the body, and sometimes it does (Dennett, 1992).

Dualists have offered some solutions to this problem. Occasionalists hold that God directly causes material events. Thus, mind and body never interact. Likewise, parallelists hold that mental and physical events are coordinated by God so that they appear to cause each other, but in fact, they do not. These alternatives are in fact rejected by most contemporary philosophers.

Some dualists, however, may reply that the fact that we cannot fully explain how body and soul interact does not imply that interaction does not take place. We know many things happen in the universe, although we do not know how they happen. Richard Swinburne, for instance, argues as follows: "That bodily events cause brain events and that these cause pains, images, and beliefs (where their subjects have privileged access to the latter and not the former), is one of the most obvious phenomena of human experience. If we cannot explain how that occurs, we should not try to pretend that it does not occur. We should just acknowledge that human beings are not omniscient, and cannot understand everything" (Swinburne, 1997: xii).

On the other hand, dualism postulates the existence of an incorporeal mind, but it is not clear that this is a coherent concept. In the opinion of most dualists, the incorporeal mind does perceive. But, it is not clear how the mind can perceive without sensory organs. Descartes seemed to have no problem imagining an incorporeal existence in his thought experiment. However, John Hospers, for instance, believes that such a scenario is simply not imaginable:

You see with eyes? No, you have no eyes, since you have no body. But let that pass for a moment; you have experiences similar to what you would have if you had eyes to see with. But how can you look toward the foot of the bed or toward the mirror? Isn't looking an activity that requires having a body? How can you look in one direction or another if you have no head to turn? And this isn't all; we said that you can't touch your body because there is no body there; how did you discover this?... Your body seems to be involved in every activity we try to describe even though we have tried to imagine existing without it. (Hospers, 1997: 280)

Furthermore, even if an incorporeal existence were in fact possible, it could be terribly lonely. For, without a body, would it even be possible to communicate with other minds? In Paul Edwards' words: "so far from living on in paradise, a person deprived of his body and thus of all sense organs would, quite aside from many other gruesome deprivations, be in a state of desolate loneliness and eventually come to prefer annihilation" (Edwards, 1997: 48). However, consider that, even in the absence of a body, great pleasures may be attained. We may live in a situation in which the material world is an illusion (in fact, idealists inspired by Berkeley lean towards such a position), and yet enjoy existence. For, even without a body, we may enjoy sensual pleasures that, although not real, certainly feel real.

However, the problems with dualism do not end there. If souls are immaterial and have no spatial extension, how can they be separate from other souls? Separation implies extension. Yet, if the soul has no extension, it is not at all clear how one soul can be distinguished from another. Perhaps souls can be distinguished based on their contents, but then again, how could we distinguish two souls with exactly the same contents? Some contemporary dualists have responded thus: in as much as souls interact with bodies, they have a spatial relationship to bodies, and in a sense, can be individuated.

Perhaps the most serious objection to dualism, and a substantial argument in favor of materialism, is the minds correlation with the brain. Recent developments in neuroscience increasingly confirm that mental states depend upon brain states. Neurologists have been able to identify certain regions of the brain associated with specific mental dispositions. And, in as much as there appears to be a strong correlation between mind and brain, it seems that the mind may be reducible to the brain, and would therefore not be a separate substance.

In recent decades, neuroscience has accumulated data that confirm that cerebral damage has a great influence on the mental constitution of persons. Phineas Gage's case is well-known in this respect: Gage had been a responsible and kind railroad worker, but had an accident that resulted in damage to the frontal lobes of his brain. After the accident, Gage turned into an aggressive, irresponsible person, unrecognizable to his peers (Damasio, 2006).

On the basis of Gage's case, scientists have inferred that frontal regions of the brain strongly determine personality. And, if mental contents can be severely damaged by brain injuries, it does not seem right to postulate that the mind is an immaterial substance. If, as dualism postulates, Gage had an immortal immaterial soul, why didn't his soul remain intact after his brain injury?

A similar difficulty arises when we consider degenerative neurological diseases, such as Alzheimer's disease. As is widely known, this disease progressively eradicates the mental contents of patients, until patients lose memory almost completely. If most memories eventually disappear, what remains of the soul? When a patient afflicted with Alzheimer's dies, what is it that survives, if most of his memories have already been lost? Of course, correlation is not identity, and the fact that the brain is empirically correlated with the mind does not imply that the mind is the brain. But, many contemporary philosophers of mind adhere to the so-called identity theory: mental states are the exact same thing as the firing of specific neurons.

Dualists may respond by claiming that the brain is solely an instrument of the soul. If the brain does not work properly, the soul will not work properly, but brain damage does not imply a degeneration of the soul. Consider, for example, a violinist. If the violin is damaged or out of tune, the violinist will not perform well. But, that does not imply that the violinist has lost their talent. In the same manner, a person may have a deficient brain, and yet, retain her soul intact. However, Occam's Razor favors the more parsimonious alternative: unless there is compelling evidence in its favor, there is no need to assume the existence of a soul that uses the brain as its instrument.

Dualists may also suggest that the mind is not identical to the soul. In fact, whereas many philosophers tend to consider the soul and mind identical, various religions consider that a person is actually made up of three substances: body, mind and soul. On such a view, even if the mind degenerates, the soul remains. However, it is far from clear what exactly the soul could be, if it is not identical to the mind.

Any philosophical discussion on immortality touches upon a fundamental issue concerning persons: personal identity. If we hope to survive death, we would want to be sure that the person that continues to exist after death is the same person that existed before death. And, for religions that postulate a Final Judgment, this is a crucial matter: if God wants to apply justice, the person rewarded or punished in the afterlife must be the very same person whose deeds determine the outcome.

The question of personal identity refers to the criterion upon which a person remains the same (that is, numerical identity) throughout time. Traditionally, philosophers have discussed three main criteria: soul, body and psychological continuity.

According to the soul criterion for personal identity, persons remain the same throughout time, if and only if, they retain their soul (Swinburne, 2004). Philosophers who adhere to this criterion usually do not think the soul is identical to the mind. The soul criterion is favored by very few philosophers, as it faces a huge difficulty: if the soul is an immaterial non-apprehensible substance (precisely, in as much as it is not identical to the mind), how can we be sure that a person continues to be the same? We simply do not know if, in the middle of the night, our neighbor's soul has transferred into another body. Even if our neighbor's body and mental contents remain the same, we can never know if his soul is the same. Under this criterion, it appears that there is simply no way to make sure someone is always the same person.

However, there is a considerable argument in favor of the soul criterion. To pursue such an argument, Richard Swinburne proposes the following thought experiment: suppose John's brain is successfully split in two, and as a result, we get two persons; one with the left hemisphere of John's brain, the other with the right hemisphere. Now, which one is John? Both have a part of John's brain, and both conserve part of John's mental contents. So, one of them must presumably be John, but which one? Unlike the body and the mind, the soul is neither divisible nor duplicable. Thus, although we do not know which would be John, we do know that only one of the two persons is John. And it would be the person that preserves John's soul, even if we have no way of identifying it. In such a manner, although we know about John's body and mind, we are not able to discern who is John; therefore, John's identity is not his mind or his body, but rather, his soul (Swinburne, 2010: 68).

Common sense informs us that persons are their bodies (in fact, that is how we recognize people); although many philosophers would dispute this, ordinary people seem generally to adhere to such a view. Thus, under this criterion, a person continues to be the same, if, and only if, they conserve the same body. Of course, the body alters, and eventually, all of its cells are replaced. This evokes the ancient philosophical riddle known as the Ship of Theseus: the planks of Theseus' ship were gradually replaced, until none of the originals remained. Is it still the same ship? There has been much discussion on this, but most philosophers agree that, in the case of the human body, the total replacement of atoms and the slight alteration of form do not alter the numerical identity of the human body.

However, the body criterion soon runs into difficulties. Imagine two patients, Brown and Robinson, who undergo surgery simultaneously. Accidentally, their brains are swapped and placed in the wrong bodies. Thus, Brown's brain is placed in Robinson's body. Let us call this person Brownson. Naturally, in as much as he has Brown's brain, he will have Brown's memories, mental contents, and so forth. Now, who is Brownson? Is he Robinson with Brown's brain; or is he Brown with Robinson's body? Most people would think the latter (Shoemaker, 2003). After all, the brain is the seat of consciousness.

Thus, it would appear that the body criterion must give way to the brain criterion: a person continues to be the same, if and only if, she conserves the same brain. But, again, we run into difficulties. What if the brain undergoes fission, and each half is placed in a new body? (Parfit, 1984). As a result, we would have two persons claiming to be the original person, but, because of the principle of transitivity, we know that both of them cannot be the original person. And, it seems arbitrary that one of them should be the original person, and not the other (although, as we have seen, Swinburne bites the bullet, and considers that, indeed, only one would be the original person). This difficulty invites the consideration of other criteria for personal identity.

John Locke famously asked what we would think if a prince one day woke up in a cobbler's body, and the cobbler in a prince's body (Locke, 2009). Although the cobbler's peers would recognize him as the cobbler, he would have the memories of the prince. Now, if before that event, the prince committed a crime, who should be punished? Should it be the man in the palace, who remembers being a cobbler; or should it be the man in the workshop, who remembers being a prince, including his memory of the crime?

It seems that the man in the workshop should be punished for the prince's crime, because, even if that is not the prince's original body, that person is the prince, in as much as he conserves his memories. Locke, therefore, believed that a person continues to be the same, if and only if, she conserves psychological continuity.

Although it appears to be an improvement with regard to the previous two criteria, the psychological criterion also faces some problems. Suppose someone claims today to be Guy Fawkes, and conserves, vividly and accurately, the memories of the seventeenth century conspirator (Williams, 1976). By the psychological criterion, such a person would indeed be Guy Fawkes. But, what if, simultaneously, another person also claims to be Guy Fawkes, even with the same degree of accuracy? Obviously, both persons cannot be Guy Fawkes. Again, it would seem arbitrary to conclude that one person is Guy Fawkes, yet the other person isn't. It seems more plausible that neither person is Guy Fawkes, and therefore, that psychological continuity is not a good criterion for personal identity.

In virtue of the difficulties with the above criteria, some philosophers have argued that, in a sense, persons do not exist. Or, to be more precise, the self does not endure through change. In David Hume's words, a person is nothing but "a bundle or collection of different perceptions, which succeed each other with an inconceivable rapidity, and are in a perpetual flux and movement" (Hume, 2010: 178). This is the so-called bundle theory of the self.

As a corollary, Derek Parfit argues that, when considering survival, personal identity is not what truly matters (Parfit, 1984). What does matter is psychological continuity. Parfit asks us to consider this example.

Suppose that you enter a cubicle in which, when you press a button, a scanner records the states of all the cells in your brain and body, destroying both while doing so. This information is then transmitted at the speed of light to some other planet, where a replicator produces a perfect organic copy of you. Since the brain of your replica is exactly like yours, it will seem to remember living your life up to the moment when you pressed the button, its character will be just like yours, and it will be in every other way psychologically continuous with you. (Parfit, 1997: 311)

Now, under the psychological criterion, such a replica will in fact be you. But, what if the machine does not destroy the original body, or makes more than one replica? In such a case, there will be two persons claiming to be you. As we have seen, this is a major problem for the psychological criterion. But, Parfit argues that, even if the person replicated is not the same person that entered the cubicle, it is psychologically continuous. And, that is what is indeed relevant.

Parfit's position has an important implication for discussions of immortality. According to this view, a person in the afterlife is not the same person that lived before. But, that should not concern us. We should be concerned about the prospect that, in the afterlife, there will be at least one person that is psychologically continuous with us.

As we have seen, the doctrine of resurrection postulates that on Judgment Day the bodies of every person who ever lived shall rise again, in order to be judged by God. Unlike the doctrine of the immortality of the soul, the doctrine of resurrection has not been traditionally defended with philosophical arguments. Most of its adherents accept it on the basis of faith. Some Christians, however, consider that the resurrection of Jesus can be historically demonstrated (Habermas, 2002; Craig, 2008). And, so the argument goes, if it can be proven that God resurrected Jesus from the dead, then we can expect that God will do the same with every human being who has ever lived.

Nevertheless, the doctrine of resurrection runs into some philosophical problems derived from considerations on personal identity; that is, how is the person resurrected identical to the person that once lived? If we were to accept dualism and the soul criterion for personal identity, then there is not much of a problem: upon the moment of death, soul and body split, the soul remains incorporeal until the moment of resurrection, and the soul becomes attached to the new resurrected body. In as much as a person is the same, if and only if, she conserves the same soul, then we may legitimately claim that the resurrected person is identical to the person that once lived.

But, if we reject dualism, or the soul criterion for personal identity, then we must face some difficulties. According to the most popular conception of resurrection, we shall be raised with the same bodies with which we once lived. Suppose that the resurrected body is in fact made of the very same cells that made up the original body, and also, the resurrected body has the same form as the original body. Are they identical?

Peter Van Inwagen thinks not (Van Inwagen, 1997). If, for example, an original manuscript written by Augustine is destroyed, and then, God miraculously recreates a manuscript with the same atoms that made up Augustine's original manuscript, we should not consider it the very same manuscript. It seems that, between Augustine's original manuscript, and the manuscript recreated by God, there is no spatio-temporal continuity. And, if such continuity is lacking, then we cannot legitimately claim that the recreated object is the same original object. For the same reason, it appears that the resurrected body cannot be identical to the original body. At most, the resurrected body would be a replica.

However, our intuitions are not absolutely clear. Consider, for example, the following case: a bicycle is exhibited in a store, and a customer buys it. In order to take it home, the customer dismantles the bicycle, puts its pieces in a box, takes it home, and once there, reassembles the pieces. Is it the same bicycle? It certainly seems so, even if there is no spatio-temporal continuity.

Nevertheless, there is room to doubt that the resurrected body would be made up of the same atoms as the original body. We know that matter recycles itself, and that due to metabolism, the atoms that once constituted the human body of a person may later constitute the body of another person. How could God resurrect bodies that shared the same atoms? Consider the case of cannibalism, as ridiculed by Voltaire:

A soldier from Brittany goes into Canada; there, by a very common chance, he finds himself short of food, and is forced to eat an Iroquois whom he killed the day before. The Iroquois had fed on Jesuits for two or three months; a great part of his body had become Jesuit. Here, then, the body of a soldier is composed of Iroquois, of Jesuits, and of all that he had eaten before. How is each to take again precisely what belongs to him? And which part belongs to each? (Voltaire, 1997: 147)

However, perhaps, in the resurrection, God needn't resurrect the body. If we accept the body criterion for personal identity, then, indeed, the resurrected body must be the same original body. But, if we accept the psychological criterion, perhaps God only needs to recreate a person psychologically continuous with the original person, regardless of whether or not that person has the same body. John Hick believes this is how God could indeed proceed (Hick, 1994).

Hick invites a thought experiment. Suppose a man disappears in London, and suddenly someone with the same looks and personality appears in New York. It seems reasonable to consider that the person that disappeared in London is the same person that appeared in New York. Now, suppose that a man dies in London, and suddenly appears in New York with the same looks and personality. Hick believes that, even if the cadaver is in London, we would be justified in claiming that the person that appears in New York is the same person that died in London. Hick's implication is that bodily continuity is not needed for personal identity; only psychological continuity is necessary.

And, Hick considers that, in the same manner, if a person dies, and someone in the resurrection world appears with the same character traits, memories, and so forth, then we should conclude that such a person in the resurrected world is identical to the person who previously died. Hick admits the resurrected body would be a replica, but as long as the resurrected person is psychologically continuous with the original person, it is identical to the original person.

Yet, in as much as Hick's model depends upon a psychological criterion for personal identity, it runs into the same problems that we have reviewed when considering the psychological criterion. It seems doubtful that a replica would be identical to the original person, because more than one replica could be recreated. And, if there is more than one replica, then they would all claim to be the original person, but obviously, they cannot all be the original person. Hick postulates that we can trust that God would only recreate exactly one replica, but it is not clear how that would solve the problem. For, the mere possibility that God could make more than one replica is enough to conclude that a replica would not be the original person.

Peter Van Inwagen has offered a somewhat extravagant solution to these problems: "Perhaps at the moment of each man's death, God removes his corpse and replaces it with a simulacrum which is what is burned or rots. Or perhaps God is not quite so wholesale as this: perhaps He removes for safekeeping only the 'core person' (the brain and central nervous system) or even some special part of it" (Van Inwagen, 1997: 246). This would seem to solve the problem of spatio-temporal continuity. The body would never cease to exist; it would only be stored somewhere else until the moment of resurrection, and therefore, it would conserve spatio-temporal continuity. However, such an alternative seems to presuppose a deceitful God (He would make us believe the corpse that rots is the original one, when in fact, it is not), and would thus contradict the divine attribute of benevolence (a good God would not lie), a major tenet of monotheistic religions that defend the doctrine of resurrection.

Some Christian philosophers are aware of all these difficulties, and have sought a more radical solution: there is no criterion for personal identity over time. Such a view is not far from the bundle theory, in the sense that it is difficult to specify how a person remains the same over time. This position is known as anti-criterialism, that is, the view that there is no intelligible criterion for personal identity; Trenton Merricks (1998) is its foremost proponent. By doing away with criteria for personal identity, anti-criterialists purport to show that objections to resurrection based on difficulties of personal identity have little weight, precisely because we should not be concerned about criteria for personal identity.

The discipline of parapsychology purports to prove that there is scientific evidence for the afterlife; or at least, that there is scientific evidence for the existence of paranormal abilities that would imply that the mind is not a material substance. Originally founded by J.B. Rhine in the 1950s, parapsychology has fallen out of favor among contemporary neuroscientists, although some universities still support parapsychology departments.

Parapsychologists usually claim there is a good deal of evidence in favor of the doctrine of reincarnation. Two pieces of alleged evidence are especially meaningful: (1) past-life regressions; (2) cases of children who apparently remember past lives.

Under hypnosis, some patients frequently have regressions and remember events from their childhood. But, some patients have gone even further and, allegedly, have vivid memories of past lives. A few parapsychologists take these so-called past-life regressions as evidence for reincarnation (Sclotterbeck, 2003).

However, past-life regressions may be cases of cryptomnesia, that is, hidden memories. A person may have a memory, and yet not recognize it as such. A well-known case is illustrative: an American woman in the 1950s was hypnotized, and claimed to be Bridey Murphy, an Irishwoman of the 19th century. Under hypnosis, the woman offered a fairly good description of 19th century Ireland, although she had never been to Ireland. However, it was later discovered that, as a child, she had an Irish neighbor. Most likely, she had hidden memories of that neighbor, and under hypnosis, assumed the personality of a 19th century Irishwoman.

It must also be kept in mind that hypnosis is a state of high suggestibility. The person that conducts the hypnosis may easily induce false memories in the person hypnotized; hence, alleged memories that come up in hypnosis are not trustworthy at all.

Some children have claimed to remember past lives. Parapsychologist Ian Stevenson collected more than a thousand of such cases (Stevenson, 2001). And, in a good portion of those cases, children know things about the deceased person that, allegedly, they could not have known otherwise.

However, Stevenson's work has been severely criticized for its methodological flaws. In most cases, the child's family had already made contact with the deceased's family before Stevenson's arrival; thus, the child could pick up information and give the impression that he knew more than he could otherwise have known. Paul Edwards has also accused Stevenson of asking leading questions that favor his own preconceptions (Edwards, 1997: 14).

Moreover, reincarnation runs into conceptual problems of its own. If you do not remember past lives, then it seems that you cannot legitimately claim that you are the same person whose life you do not remember. However, a few philosophers claim this is not a good objection at all, as you do not remember being a very young child, and yet can still surely claim to be the same person as that child (Ducasse, 1997: 199).

Population growth also seems to be a problem for reincarnation: according to defenders of reincarnation, souls migrate from one body to another. This, in a sense, presupposes that the number of souls remains stable, as no new souls are created; they only migrate from body to body. Yet, the number of bodies has consistently increased ever since the dawn of mankind. Where, one may ask, were all the souls before new bodies came to exist? (Edwards, 1997: 14). Actually, this objection is not so formidable: perhaps souls exist in a disembodied form as they wait for new bodies to come up (D'Souza, 2009: 57).

During the heyday of Spiritualism (the religious movement that sought to make contact with the dead), some mediums gained prominence for their reputed abilities to contact the dead. These mediums were of two kinds: physical mediums invoked spirits that, allegedly, produced physical phenomena (for example, lifting tables); and mental mediums whose bodies, allegedly, were temporarily possessed by the spirits.

Most physical mediums were exposed as frauds by trained magicians. Mental mediums, however, presented more of a challenge for skeptics. During their alleged possession by a deceased person's spirit, mediums would provide information about the deceased person that, apparently, they could not possibly have known. William James was impressed by one such medium, Leonora Piper, and although he remained somewhat skeptical, he finally endorsed the view that Piper in fact made contact with the dead.

Some parapsychologists credit the legitimacy of mental mediumship (Almeder, 1992). However, most scholars believe that mental mediums work through the technique of cold reading: they ask friends and relatives of a deceased person questions at a fast pace, and infer from their body language and other indicators, information about the deceased person (Gardner, 2003).

Read the original post:

Immortality | Internet Encyclopedia of Philosophy

Freedom Health – Tampa, FL – Inc.com


Freedom Health administers Medicare and Medicaid benefits in numerous counties in Florida. A health insurance company owned and operated by physicians, Freedom Health focuses on providing cost-effective health insurance that both improves quality of care and reduces total out-of-pocket costs for its members.

Read the rest here:

Freedom Health - Tampa, FL - Inc.com

In ‘Full-on War on Drugs Scare-Fest,’ Trump Proposes Death …

In a speech officially unveiling his administration's plan to combat the nation's ongoing opioid epidemic, President Donald Trump on Monday said he would fight the crisis with "toughness", the creation of "very...very...bad commercials" aimed at children; and, as expected, proposed that the death penalty be applied to drug dealers.

However, as drug policy reform advocates feared, he showed little understanding of the origins of the crisis and neglected to mention numerous measures public health experts have advocated for to stop the deadly epidemic.

A key tenet of Trump's plan to combat the crisis, which killed nearly 64,000 Americans in 2016, is to launch an advertising campaign showing the effects of opioid use.

"The best way to beat the drug crisis is to keep people from getting hooked on drugs to begin with," he told a crowd in Manchester, N.H. "As part of that effortso important, this is something I've been strongly in favor ofspending a lot of money on great commercials showing how bad it is."

The ads, Trump added, would be "very...very...bad commercials...And when they see those commercials, hopefully they're not going to be going to drugs of any kind."

Trump expresses support for anti-drug commercials aimed at kids to stop them from getting addicted to opioids: "That's the least expensive thing we can we do, where you scare them from ending up like the people in the commercials" pic.twitter.com/rIgUmRBMHL

BuzzFeed News (@BuzzFeedNews) March 19, 2018

The proposal struck critics as similar to First Lady Nancy Reagan's "Just Say No" campaign of the 1980s, which has been denounced as "simplistic and vague" and which studies have shown did not make young Americans any less likely to use drugs.

Trump today called for "great commercials" that show kids "how bad" drugs are. As we explained recently, that strategy has been tried and hasn't worked. https://t.co/JSu2WDOjEo

The Upshot (@UpshotNYT) March 19, 2018

Scare tactics & Just Say No programs are not effective. It's better to equip our young people and parents with real information. https://t.co/GvZKxYyooI

Drug Policy Alliance (@DrugPolicyOrg) March 19, 2018

While Trump spent a large portion of his speech talking about keeping kids away from drugs, statistics show that Americans in their 50s and 60s are most at risk for overdosing on prescription opioids, a major driver of the overall crisis.

The prevalence of heroin abuse is of greater concern among younger Americans, but recent studies have shown that three-quarters of people who began using heroin in the 2000s abused prescription opioid painkillers first. Doctors have suggested that lax prescribing practices within their own profession continue to contribute to the opioid crisis, calling into question the notion that commercials would successfully steer Americans away from the drugs.

The president also linked the epidemic to immigration, urging Democrats to back his plan to "build the wall to keep the damn drugs out" and leading audience members in the chant, "Build the wall!"

But drug policy experts say that tougher border security would have little to no effect on the prevalence of drugs in the U.S.

"A wall alone cannot stop the flow of drugs into the United States," Christopher Wilson of the Mexico Institute at the Wilson Center, told Vox last year. "...history shows us that border enforcement has been much more effective at changing the when and where of drugs being brought into the United States rather than the overall amount of drugs being brought into the United States."

Critics also expressed shock at the president's proposal to seek the death penalty for drug dealers, a plan that was hinted at last week. Trump has expressed admiration for Filipino President Rodrigo Duterte's drug war, which has resulted in the deaths of thousands, many in poor communities, and the stringent drug policies applied by Singapore's government.

"We can have all the blue ribbon committees we want but if we don't get tough on the drug dealers we're wasting our time...and that toughness includes the death penalty," said the president.

The remark was condemned on social media by many, including drug policy experts, who have long said drug addiction should be treated as a public health issue instead of a criminal matter.

Anyone casually invoking taking the life of someone needs to be put in time out, including the POTUS. If you are not willing to clearly articulate a precise definition for what counts as taking someone's life for an action, you have zero credibility and should be treated as such.

Bryan William Jones (@BWJones) March 19, 2018

Text from govt health official:

Potus remarks are a full-on War on Drugs scare fest. Health policy staff are extremely disappointed by this divisive rhetoric, and his focus on actions that we know dont work.

Dan Diamond (@ddiamond) March 19, 2018

Had heard from health officials who were crossing their fingers that Trump would lay off death penalty language and focus on the many public health proposals in the plan.

(He didn't.)

Dan Diamond (@ddiamond) March 19, 2018

Read the original:

In 'Full-on War on Drugs Scare-Fest,' Trump Proposes Death ...

Orfox: Tor Browser for Android – Android Apps on Google Play

Orfox is built from the same source code as Tor Browser (which is built upon Firefox), but with a few minor modifications to the privacy enhancing features to make them compatible with Firefox for Android and the Android operating system.

Orfox REQUIRES Orbot app for Android to connect to the Tor network.

In as many ways as possible, we adhere to the design goals of Tor Browser (https://www.torproject.org/projects/torbrowser/design/), by supporting as much of their actual code as possible, and extending their work into the additional Android components of Firefox for Android.

** Also, includes NoScript and HTTPSEverywhere add-ons built in!

The Tor software protects you by bouncing your communications around a distributed network of relays run by volunteers all around the world: it prevents somebody watching your Internet connection from learning what sites you visit, it prevents the sites you visit from learning your physical location, and it lets you access sites which are blocked.

Learn more at: https://guardianproject.info/apps/orfox

* * How is Orfox different than Tor Browser for desktop?

* The Orfox code repository is at https://github.com/guardianproject/tor-browser and the Tor Browser repository is here: https://gitweb.torproject.org/tor-browser.git/. The Orfox repository is a fork of the Tor Browser repository with the necessary modifications and Android-specific code as patches on top of the Tor Browser work. We will keep our repository in sync with updates and releases of Tor Browser.

* Orfox is built from the Tor Browser repo based on ESR38 (https://dev.guardianproject.info/issues/5146, https://dev.guardianproject.info/news/221) and has only two modified patches that were not relevant or necessary for Android

* Orfox does not currently include the mobile versions of the Tor Browser Button, but this will be added shortly, now that we have discovered how to properly support automatic installation of extensions on Android (https://dev.guardianproject.info/issues/5360)

* Orfox currently allows for users to bookmark sites, and may have additional data written to disk beyond what the core gecko browser component does. We are still auditing all disk write code, and determining how to appropriately disable or harden it. (https://dev.guardianproject.info/issues/5437)

* * How is Orfox different than Orweb?

Orweb is our current default browser for Orbot/Tor mobile users (https://guardianproject.info/apps/orweb) that has been downloaded over 2 million times. It is VERY VERY SIMPLE, as it only has one tab, no bookmark capability, and an extremely minimal user experience.

Orweb is built upon the bundled WebView (Webkit) browser component inside of the Android operating system. This has proven to be problematic because we cannot control the version of that component, and cannot upgrade it directly when bugs are found. In addition, Google has made it very difficult to effectively control the network proxy settings of all aspects of this component, making it difficult to guarantee that traffic will not leak on all devices and OS versions.
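
By way of contrast, an ordinary HTTP client on Android can be pointed at Orbot's local SOCKS listener explicitly. The sketch below is illustrative only and is not part of Orfox or Orweb; it assumes Orbot is running with its commonly used default SOCKS address of 127.0.0.1:9050 (check Orbot's settings), and it illustrates the kind of explicit, per-connection proxy control that is hard to achieve for an embedded WebView:

import java.net.HttpURLConnection
import java.net.InetSocketAddress
import java.net.Proxy
import java.net.URL

// Fetch a URL through Orbot's local SOCKS proxy and return the HTTP status code.
fun fetchOverOrbot(url: String): Int {
    // Assumed Orbot default; not guaranteed on every device or Orbot version.
    val orbotSocks = Proxy(Proxy.Type.SOCKS, InetSocketAddress("127.0.0.1", 9050))

    // Route this single connection through the SOCKS proxy.
    // Caveat: java.net's SOCKS support may resolve hostnames locally, so DNS
    // lookups can still leak outside Tor - exactly the kind of subtle leak
    // described above.
    val connection = URL(url).openConnection(orbotSocks) as HttpURLConnection
    connection.connectTimeout = 15_000
    connection.readTimeout = 15_000
    return try {
        connection.responseCode
    } finally {
        connection.disconnect()
    }
}

A full browser, by contrast, has to guarantee that every subsystem (page loads, DNS, WebSockets, media prefetching) honours the proxy, which is one reason Orfox builds on Firefox code that can be audited and patched rather than on the system WebView.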

Orweb also only provides a very limited amount of the capability of Tor Browser, primarily related to reducing browser fingerprinting, minimizing disk writes, and cookie and history management. It tries to mimic some of the settings of Tor Browser, but doesn't actually use any of the actual code written for Tor Browser security hardening.

Original post:

Orfox: Tor Browser for Android - Android Apps on Google Play

Lau Islands – Wikipedia

Location of the Lau Islands in the Pacific Ocean

The Lau Islands (also called the Lau Group, the Eastern Group, the Eastern Archipelago) of Fiji are situated in the southern Pacific Ocean, just east of the Koro Sea. Of this chain of about sixty islands and islets, about thirty are inhabited. The Lau Group covers a land area of 188 square miles (487 square km), and had a population of 10,683 at the most recent census in 2007. While most of the northern Lau Group are high islands of volcanic origin, those of the south are mostly carbonate low islands.

Administratively the islands belong to Lau Province.

The British explorer James Cook reached Vatoa in 1774. By the time of the discovery of the Ono Group in 1820, the Lau archipelago was the most mapped area of Fiji.

Political unity came late to the Lau Islands. Historically, they comprised three territories: the Northern Lau Islands, the Southern Lau Islands, and the Moala Islands. Around 1855, the renegade Tongan prince Enele Ma'afu conquered the region and established a unified administration. Calling himself the Tui Lau, or King of Lau, he promulgated a constitution and encouraged the establishment of Christian missions. The first missionaries had arrived at Lakeba in 1830, but had been expelled. The Tui Nayau, who had been the nominal overlord of the Lau Islands, became subject to Ma'afu.

The Tui Nayau and Tui Lau titles came into personal union in 1969, when Ratu Sir Kamisese Mara, who had already been installed as Tui Lau in 1963 by the Yavusa Tonga, was also installed as Tui Nayau following the death of his father Ratu Tevita Uluilakeba III in 1966. The title of Tui Lau had been left vacant since the death of his uncle, Ratu Sir Lala Sukuna, in 1958, as referenced in Mara's The Pacific Way.

The Northern Lau Islands, which extended as far south as Tuvuca, were under the overlordship of Taveuni and paid tribute to the Tui Cakau (Paramount Chief of Cakaudrove). In 1855, however, Ma'afu gained sovereignty over Northern Lau, establishing Lomaloma, on Vanua Balavu, as his capital.

The Southern Lau Islands extended from Ono-i-Lau, in the far south, to as far north as Cicia. They were the traditional chiefdom of the Tui Nayau, but with Ma'afu's conquest in the 1850s, he became subject to Tongan supremacy.

The Moala Islands had closer affiliation with Bau Island and Lomaiviti than with Lau, but Ma'afu's conquest united them with the Lau Islands. They have remained administratively a part of the Lau Province ever since.

Since they lie between Melanesian Fiji and Polynesian Tonga, the Lau Islands are a meeting point of the two cultural spheres. Lauan villages remain very traditional, and the islands' inhabitants are renowned for their wood carving and masi paintings. Lakeba especially was a traditional meeting place between Tongans and Fijians. The south-east trade winds allowed sailors to travel easily from Tonga to Fiji, but made the return voyage much harder. The Lau Island culture became more Fijian than Polynesian beginning around 500 BC.[1] However, Tongan influence can still be found in names, language, food, and architecture. Unlike the square-shaped ends characterizing most houses elsewhere in Fiji, Lauan houses tend to be rounded, following the Tongan practice.

In early July 2014, Tonga's Lands Minister, Lord Ma'afu Tukui'aulahi, revealed a proposal for Tonga to give the disputed Minerva Reefs to Fiji in exchange for the Lau Group.[2] At the time that news of the proposal first broke, it had not yet been discussed with the Lau Provincial Council.[3] Many Lauans have Tongan ancestors and some Tongans have Lauan ancestors; Tonga's Lands Minister is named after Enele Ma'afu, the Tongan Prince who originally claimed parts of Lau for Tonga.[4] Historically, the Minerva Reefs have been part of the fishing grounds belonging to the people of Ono-i-Lau, an island in the Lau Group.[5]

Just off the island of Vanua Balavu at Lomaloma was the Yanuyanu Island Resort, built to encourage tourism in what has been a less accessible area of Fiji, but the small resort failed almost immediately and has been abandoned since the year 2000. An airstrip is located off Malaka village and a port is also located on Vanua Balavu, at Lomaloma. There are guest houses on Vanua Balavu and on Lakeba, the other principal island.

The Lau Islands are the centre of the game of Cricket in Fiji. Cricket is the most popular team sport in Lau, unlike the rest of the country where Rugby and Association Football are preferred. The national team is invariably dominated by Lauan players.

The Lau Islands' most famous son is the late Ratu Sir Kamisese Mara (1920-2004), the Tui Lau, Tui Nayau, Sau ni Vanua (hereditary Paramount Chief of the Lau Islands) and the founding father of modern Fiji who was Prime Minister for most of the period between 1967 and 1992, and President from 1993 to 2000. Other noted Lauans include Ratu Sir Lala Sukuna (1898-1958), who forged embryonic constitutional institutions for Fiji in the years that preceded independence. Other notable Lauans include:

Given its small population, the Lau Islands' contribution to the leadership of Fiji has been disproportionately large.[citation needed]

List of resources about traditional arts and culture of Oceania

Coordinates: 17°50′S 178°40′E / 17.833°S 178.667°E / -17.833; 178.667

See the original post here:

Lau Islands - Wikipedia

nootropics / smart drugs

Sceptics about the possibility of nootropics ("smart drugs") are victims of the so-called Panglossian paradigm of evolution. They believe that our cognitive architecture has been so fine-honed by natural selection that any tinkering with such a wonderfully all-adaptive suite of mechanisms is bound to do more harm than good. Certainly the notion that merely popping a pill could make you intellectually brighter sounds implausible - the sort of journalistic excess that sits more comfortably in the pages of Fortean Times than any scholarly journal of repute.

Yet as Dean, Morgenthaler and Fowkes' (hereafter "DMF") book attests, the debunkers are wrong. On the one hand, numerous agents with anticholinergic properties are essentially dumb drugs. Anticholinergics impair memory, alertness, focus, verbal facility and creative thought. Conversely, a variety of cholinergic drugs and nutrients, which form a large part of the smart chemist's arsenal, can subtly but significantly enhance cognitive performance on a whole range of tests. This holds true for victims of Alzheimer's Disease, who suffer in particular from a progressive and disproportionate loss of cholinergic neurons. Yet, potentially at least, cognitive enhancers can aid non-demented people too. Many members of the "normally" ageing population can benefit from an increased availability of acetylcholine, improved blood-flow to the brain, increased ATP production and enhanced oxygen and glucose uptake. Most recently, research with ampakines, modulators of neurotrophin-regulating AMPA-type glutamate receptors, suggests that designer nootropics will soon deliver sharper intellectual performance even to healthy young adults.

DMF provide updates from Smart Drugs (1) on piracetam, acetyl-l-carnitine, vasopressin, and several vitamin therapies. Smart Drugs II offers profiles of agents such as selegiline (l-deprenyl), melatonin, pregnenolone, DHEA and ondansetron (Zofran). There is also a provocative question-and-answer section; a discussion of product sources; and a guide to further reading.

So what's the catch? Unfortunately, there are many. Large, well-controlled, long-term trials of putative nootropics are scarce: the whole field of cognitive enhancement is rife with self-deception, snake-oil, hucksterism and (at best) publication bias. Another problem, to which not all authorities on nootropics give enough emphasis, is the complex interplay between cognition and mood. Thus great care should be taken before tampering with the noradrenaline/acetylcholine axis. Thought-frenzied hypercholinergic states, for instance, are characteristic of one "noradrenergic" sub-type of depression. A predominance of forebrain cholinergic activity, frequently triggered by chronic uncontrolled stress, can lead to a reduced sensitivity to reward, an inability to sustain effort, and behavioural suppression.

This mood-modulating effect does make some sort of cruel genetic sense. Extreme intensity of reflective thought may function as an evolutionarily adaptive response when things go wrong. When they're going right, as in optimal states of "flow experience", we don't need to bother. Hence boosting cholinergic function, alone and in the absence of further pharmacologic intervention, can subdue mood. Cholinergics can even induce depression in susceptible subjects. Likewise, beta-adrenergic antagonists (e.g. propranolol (Inderal)) can induce depression and fatigue. Conversely, "dumb-drug" anticholinergics may sometimes have mood-brightening - progressing to deliriant - effects. Indeed antimuscarinic agents acting in the nucleus accumbens may even induce a "mindless" euphoria.

Now it might seem axiomatic that helping everyone think more deeply is just what the doctor ordered. Yet our education system is already pervaded by an intellectual snobbery that exalts academic excellence over social cognition and emotional well-being. In the modern era, examination rituals bordering on institutionalised child-abuse take a heavy toll on young lives. Depression and anxiety-disorders among young teens are endemic - and still rising. It's worth recalling that research laboratories routinely subject non-human animals to a regimen of "chronic mild uncontrolled stress" to induce depression in their captive animal population; investigators then test putative new antidepressants on the depressed animals to see if their despair can be experimentally reversed by patentable drugs. The "chronic mild stressors" that we standardly inflict on adolescent humans can have no less harmful effects on the mental health of captive school-students; but in this case, no organised effort is made to reverse it. Instead its victims often go on to self-medicate with ethyl alcohol, tobacco and street drugs. So arguably at least, the deformed and emotionally pre-literate minds churned out by our schools stand in need of safe, high-octane mood-brighteners more urgently than cognitive tweakers. Memory-enhancers might be more worthwhile if we had more experiences worth remembering.

One possible solution to this dilemma involves taking a cholinergic agent such as piracetam (Nootropil) or aniracetam (Draganon, Ampamet) that also enhances dopamine function. In the late twentieth century, many researchers believed that the mesolimbic dopamine system acts as the final common pathway for pleasure in the brain. This hypothesis turned out to be simplistic at best. The mesolimbic dopamine system is most directly implicated in motivation and the capacity to anticipate future pleasures. It is the endogenous opioid system, and in particular activation of the mu opioid receptors, that mediates pure pleasure. Mesolimbic dopamine amplifies "incentive-motivation": "wanting" and "liking" may have different substrates, albeit intimately linked. Moreover mood-elevating memory-enhancers such as phosphodiesterase inhibitors (e.g. the selective PDE4 inhibitor rolipram) act on different neural pathways - speeding and strengthening memory-formation by prolonging the availability of CREB. In any event, several of the most popular smart drugs discussed by DMF do indeed act on both the cholinergic and dopaminergic systems. In addition, agents like aniracetam and its analogs increase hippocampal glutaminergic activity. Hippocampal function is critical to memory and mood. Thus newly developed ampakines, agents promoting long-term potentiation of AMPA-type glutamate receptors, are powerful memory-enhancers and future nootropics.

Another approach to enhancing mood and intellect alike involves swapping or combining a choline agonist with a different, primarily dopaminergic drug. Here admittedly there are methodological problems. The improved test score performances reported on so-called smart dopaminergics may have other explanations. Not all studies adequately exclude the confounding variables of increased alertness, sharper sensory acuity, greater motor activity or improved motivation - as distinct from any "pure" nootropic action. Yet the selective dopamine reuptake blocker amineptine (Survector) is both a mood-brightener and a possible smart-drug. Likewise selegiline, popularly known as l-deprenyl, has potentially life-enhancing properties. Selegiline is a selective, irreversible MAO-b inhibitor with antioxidant, immune-system-boosting and anti-neurodegenerative effects. It retards the metabolism not just of dopamine but also of phenylethylamine, a trace amine also found in chocolate and released when we're in love. Selegiline also stimulates the release of superoxide dismutase (SOD); SOD is a key enzyme which helps to quench damaging free-radicals. Taken consistently in low doses, selegiline extends the life-expectancy of rats by some 20%; enhances drive, libido and endurance; and independently improves cognitive performance in Alzheimer's patients and in some healthy normals. It is used successfully to treat canine cognitive dysfunction syndrome (CDS) in dogs. In 2006, higher dose (i.e. less MAO-b selective) selegiline was licensed as the antidepressant EMSAM, a transdermal patch. Selegiline also protects the brain's dopamine cells from oxidative stress. The brain has only about 400,000 - 600,000 dopaminergic neurons in all. We lose perhaps 13% a decade in adult life. An eventual 70%-80% loss leads to the dopamine-deficiency disorder Parkinson's disease and frequently depression. Clearly anything that spares so precious a resource might prove a valuable tool for life-enrichment.

In 2005, a second selective MAO-b inhibitor, rasagiline (Azilect), gained an EC product license; introduction in the USA followed a year later. Unlike selegiline, rasagiline doesn't have amphetamine trace metabolites - a distinct if modest therapeutic advantage.

Looking further ahead, the bifunctional cholinesterase inhibitor and MAO-b inhibitor ladostigil acts both as a cognitive enhancer and a mood brightener. Ladostigil has neuroprotective and potential antiaging properties too. Its product-license is several years away at best.

Consider, for instance, the plight of genetically engineered "smart mice" endowed with an extra copy of the NR2B subtype of NMDA receptor. It is now known that such brainy "Doogie" mice suffer from a chronically increased sensitivity to pain. Memory-enhancing drugs and potential gene-therapies targeting the same receptor subtype might cause equally disturbing side-effects in humans. Conversely, NMDA antagonists like the dissociative anaesthetic drug ketamine exert amnestic, antidepressant and analgesic effects in humans and non-humans alike.

Amplified memory can itself be a mixed blessing. Even among the drug-naïve and chronically forgetful, all kinds of embarrassing, intrusive and traumatic memories may haunt our lives. Such memories sometimes persist for months, years or even decades afterwards. Unpleasant memories can sour the well-being even of people who don't suffer from clinical PTSD. The effects of using all-round memory enhancers might do something worse than merely fill our heads with clutter. Such agents could etch traumatic experiences more indelibly into our memories. Or worse, such all-round enhancers might promote the involuntary recall of our nastiest memories with truly nightmarish intensity. Ironically, a popular smart drug such as modafinil can be used experimentally to prevent long-term memory consolidation in animal models - not quite the effect pill-popping students cramming for exams have in mind. Like most psychostimulants, modafinil may also have a subtle anti-empathetic effect.

By contrast, the design of chemical tools that empower us selectively to forget unpleasant memories may prove to be at least as life-enriching as agents that help us remember more effectively. Unlike the software of digital computers, human memories can't be specifically deleted to order. But this design-limitation may soon be overcome. The synthesis of enhanced versions of protein synthesis inhibitors such as anisomycin may enable us selectively to erase horrible memories. If such agents can be refined for our personal medicine cabinets, then we'll potentially be able to rid ourselves of nasty or unwanted memories at will - as distinct from drowning our sorrows with alcohol or indiscriminately dulling our wits with tranquillisers. In future, the twin availability of 1] technologies to amplify desirable memories, and 2] selective amnestics to extinguish undesirable memories, promises to improve our quality of life far more dramatically than use of today's lame smart drugs.

Such a utopian pharmaceutical toolkit is still some way off. Given our current primitive state of knowledge, it's hard to boost the function of one neurotransmitter signalling system or receptor sub-type without eliciting compensatory and often unwanted responses from others. Life's successful, dopamine-driven go-getters, for instance, whether naturally propelled or otherwise, may be highly productive individuals. Yet they are rarely warm, relaxed and socially empathetic. This is because, crudely, dopamine overdrive tends to impair "civilising serotonin" function. Likewise, testosterone functionally antagonises pro-social oxytocin in the CNS. Unfortunately, tests of putative smart drugs typically reflect an impoverished and culture-bound conception of intelligence. Indeed today's "high IQ" alpha males may strike posterity as more akin to idiot savants than imposing intellectual giants. IQ tests, and all conventional scholastic examinations, neglect creative and practical intelligence. IQ tests simply ignore social cognition. Social intelligence, and its cognate notion of "emotional IQ", isn't some second-rate substitute for people who can't do IQ tests. On the contrary, according to the Machiavellian ape hypothesis, the evolution of human intelligence has been driven by our superior "mind-reading" skills. Higher-order intentionality [e.g. "you believe that I hope that she thinks that I want...", etc] is central to the lives of advanced social beings. The unique development of human mind is an adaptation to social problem-solving and the selective advantages it brings. Yet pharmaceuticals that enhance our capacity for empathy, enrich our social skills, expand our "state-space" of experience, or deepen our introspective self-knowledge are not conventional candidates for smart drugs. For such faculties don't reflect our traditional [male] scientific value-judgements on what qualifies as "intelligence". Thus in academia, for instance, competitive dominance behaviour among "alpha" male human primates often masquerades as the pursuit of scholarship. Emotional literacy is certainly harder to quantify scientifically than mathematical puzzle-solving ability or performance in verbal memory-tests. But to misquote Robert McNamara, we need to stop making what is measurable important, and find ways to make the important measurable. By some criteria, contemporary IQ tests are better measures of high-grade autism than mature full-spectrum intelligence. So before chemically manipulating one's mind, it's worth critically examining which capacities one wants to enhance; and to what end?

In practice, the first and most boring advice is often the most important. Many potential users of smart pills would be better and more simply advised to stop taking tranquillisers, sleeping tablets or toxic recreational drugs; practise good sleep discipline; eat omega-3 rich foods, more vegetables and generally improve their diet; and try more mentally challenging tasks. One of the easiest ways of improving memory, for instance, is to increase the flow of oxygenated blood to the brain. Enhanced cerebrovascular function can be achieved by running, swimming, dancing, brisk walking, and more sex. Regular vigorous exercise also promotes nerve cell growth in the hippocampus. Hippocampal brain cell growth potentially enhances mood, memory and cognitive vitality alike. Intellectuals are prone to echo J.S. Mill: "Better to be an unhappy Socrates than a happy pig". But happiness is typically good for the hippocampus; by contrast, the reduced hippocampal volume anatomically characteristic of depressives correlates with the length of their depression.

In our current state of ignorance, homely remedies are still sometimes best. Thus moderate consumption of adenosine-inhibiting, common-or-garden caffeine improves concentration, mood and alertness; enhances acetylcholine release in the hippocampus; and statistically reduces the risk of suicide. Regular coffee drinking induces competitive and reversible inhibition of MAO enzymes type A and B owing to coffee's neuroactive beta-carbolines. Coffee is also rich in antioxidants. Non-coffee drinkers are around three times more likely to contract Parkinson's disease. A Michigan study found caffeine use was correlated with enhanced male virility in later life.

Before resorting to pills, aspiring intellectual heavyweights might do well to start the day with a low-fat/high-carbohydrate breakfast: muesli rather than tasty well-buttered croissants. This will enhance memory, energy and blood glucose levels. An omega-3 rich diet will enhance all-round emotional and intellectual health too. A large greasy fry-up, on the other hand, can easily leave one feeling muddle-headed, drowsy and lethargic. If one wants to stay sharp, and to blunt the normal mid-afternoon dip, then eating big fatty lunches isn't a good idea either. Fat releases cholecystokinin (CCK) from the duodenum. Modest intravenous infusions of CCK make one demonstrably dopey and subdued.

To urge such caveats is not to throw up one's hands in defeatist resignation. Creative psychopharmacology can often in principle circumvent such problems, even today. There may indeed be no safe drugs but just safe dosages. Yet some smart drugs, such as piracetam, are relatively innocuous. If the user doesn't like their effects, (s)he can simply stop taking them. Agents such as the alpha-1 adrenergic agonist adrafinil (Olmifron) typically do have both mood-brightening and intellectually invigorating effects. Adrafinil, like its chemical cousin modafinil (Provigil), promotes alertness, vigilance and mental focus; and its more-or-less pure CNS action ensures it doesn't cause unwanted peripheral sympathetic stimulation.

Unfortunately the lay public is currently ill-served, a few shining exceptions aside, by the professionals. A condition of ignorance and dependence is actively fostered where it isn't just connived at in the wider population. So there's often relatively little point in advising anyone contemplating acting on DMF's book to consult their physician first. For it's likely their physician won't want to know, or want them to know, in the first instance.

As traditional forms of censorship, news-management and governmental information-control break down, however, and the Net insinuates itself into ever more areas of daily life, more and more people are stumbling upon - initially - and then exploring the variety of drugs and combination therapies which leading-edge pharmaceutical research puts on offer. They are increasingly doing so as customers, and not as patronisingly labelled, role-bound "patients". Those outside the charmed circle have previously been cast in the obligatory role of humble supplicants. The more jaundiced or libertarian among the excluded may have felt themselves at the mercy of prescription-wielding, or -withholding, agents of one arm of the licensed drug cartels. So when the control of the cartels and their agents falters, there is an especially urgent need for incisive and high-quality information to be made readily accessible. Do DMF fulfil it?

Smart Drugs 2 lays itself wide open to criticism; but then it takes on an impossible task. In the perennial trade-off between accessibility and scholarly rigour, compromises are made on both sides. Ritual disclaimers aside, DMF's tone can at times seem too uncritically gung-ho. Their drug-profiles and cited studies don't always give due weight to the variations in sample size and the quality of controls. Nor do they highlight the uncertain calibre of the scholarly journals in which some of the most interesting results are published. DMF's inclusion of anecdote-studded personal testimonials is almost calculated to inflame medical orthodoxy. Moreover it should be stressed that large, placebo-controlled, double-blind, cross-over prospective trials - the scientific gold standard - are still quite rare in this field as a whole.

Looking ahead, this century's mood-boosting, intellect-sharpening, empathy-enhancing and personality-enriching drugs are themselves likely to prove only stopgaps. This is because invincible, life-long happiness and supergenius intellect may one day be genetically pre-programmed and possibly ubiquitous in our transhuman successors. Taking drugs to repair Nature's deficiencies may eventually become redundant. Memory- and intelligence-boosting gene therapies are already imminent. But in repairing the deficiencies of an educational system geared to producing dysthymic pharmacological illiterates, Smart Drugs 1 and 2 offer a welcome start.

DP (1998, 2017).

Continued here:

nootropics / smart drugs

Cyborg – Injustice:Gods Among Us Wiki

Cyborg

Victor "Vic" Stone

DC Comics Presents #26 (October 1980)

Let's get this party started.

Cyborg is a playable hero character in Injustice: Gods Among Us and Injustice 2. He is classified as a Power User.

Part man, part machine, Victor Stone is able to shift his cybernetic body parts into whatever tech he requires. A member of the Justice League, Cyborg is one of crime's most formidable enemies.

Cyborg's fellow Teen Titans did not survive Superman's rise to power. This trauma, coupled with the influence of other, more experienced heroes, led Cyborg to become one of the oppressive regime's enforcers.

Victor Stone lost more than his friends at the tragedy of Metropolis; he lost his hope. His anger tempered his loyalty to Superman, and he has remained eager to serve the Regime. With the world left unprepared for the looming threat, Cyborg may be the only one who can combat the technological might of Brainiac.

Cyborg first appears defending the Watchtower alongside Nightwing and Raven against Lex Luthor, Catwoman, Bane and Solomon Grundy. After Batman arrives to assist them, he and Cyborg receive a warning signal about the Joker setting up a nuclear bomb in the center of Metropolis. After Batman, the Joker, and several of the Justice League members are teleported to a parallel dimension, Cyborg, Superman and the Flash begin working tirelessly to locate them and bring them back.

In the parallel dimension, Cyborg is shown having joined Superman's One Earth regime and subsequently undergone enhancements to his body. Green Lantern encounters him and the Regime's Raven on the Ferris Aircraft facility torturing their dimension's Deathstroke, who refused the amnesty offered to him by the High Councilor Superman. Green Lantern confronts the two, causing them confusion at first due to his change of uniform color from the Yellow Lantern they know. After Raven is defeated by Green Lantern, Cyborg confronts him but is beaten.

Back in the Justice League's Watchtower, Cyborg and the Flash manage to locate the alternate dimension where their allies were sent and plan to use the Flash's Cosmic Treadmill to pull them back into their dimension. Upon making the necessary modifications, they put their plan into motion. However, the inter-dimensional gateway belonging to the Insurgents activates at the same time, pulling Cyborg into their dimension, where he's needed to repair the kryptonite weapon Batman built to use against Superman. After encountering Deathstroke and Lex Luthor in the Insurgency's hideout, Cyborg misinterprets their intentions and attacks them. He fights them both to a standstill until Batman's counterpart and the members of the Justice League arrive and explain the situation to him.

When Superman's counterpart announces that the displaced Batman will be executed publicly on Stryker's Island, the Insurgency forms a plan to rescue him using the Watchtower's teleporter. Disguising himself as his counterpart, Cyborg infiltrates the Hall of Justice in order to gain access to the Watchtower, grudgingly accompanied by Deathstroke, whom Cyborg doesn't trust despite their lack of history. Cyborg goes to activate the teleporter, when it suddenly activates, bringing Catwoman into the Hall. She greets Cyborg, believing him to be his counterpart, but grows suspicious when he uncharacteristically greets her back. She confirms this suspicion by implying that the two of them are involved with each other, causing Cyborg to play along, unaware that she's lying. His cover blown, Catwoman deactivates Cyborg's disguise and attacks him. He manages to defeat her, only to have his legs remotely locked up by his counterpart, who sends for backup from Wonder Woman, unaware that he's speaking to her displaced counterpart. The two Cyborgs start remotely hacking each other's systems simultaneously, ending in a stalemate. Deciding to settle this like men, the two fight one on one, ending in defeat for the Regime's Cyborg.

Their way clear, Cyborg and Deathstroke teleport to the Watchtower, where Cyborg easily takes control of the teleporter using his counterpart's stolen security protocols. However, Deathstroke overloads the Watchtower's reactor as an act of revenge against Superman, jeopardizing the mission and giving the Insurgents only 90 minutes to complete their mission. Once they've secured the displaced Batman, Cyborg teleports them to safety despite a brief malfunction in the teleporter's system. Their mission complete, Cyborg teleports himself and everyone else in the Watchtower to safety, just before the reactor explodes, destroying the Watchtower.

In the wake of the attack on Stryker's and the death of Luthor, the Regime's Superman announces to the council his intentions to destroy Metropolis and Gotham to set an example. He orders his Cyborg and Raven to take control of media broadcasts so the entire world can see it. When the Regime begins their attack on Gotham, the displaced Cyborg fights alongside the Insurgents in the defense of Gotham.

In the epilogue, Cyborg is shown visiting Lex Luthor's grave, where he places the chestpiece of Luthor's battle suit as homage to their fallen comrade. Meanwhile, his counterpart is taken into custody with the rest of Superman's accomplices.

In a flashback, Cyborg, as an accomplice of Superman's plan to remove the Arkham Asylum inmates, was stationed at Gotham to prevent anyone from interfering. Off-screen, he sets the approaching Batplane's systems to autopilot, and appears before them via Boom Tube to attack them. He fires off a warning shot, which doesn't affect Batman but causes Damian Wayne to veer off course. Cyborg confronts Batman on the ground by admitting his reluctance to attack the latter unless he has to. When Batman refuses to back down, Cyborg tells him that he lost all of his friends in Metropolis, and that a similar incident won't happen again if Superman executes his plan. Batman tells him of his right to be angry, but disapproves of Superman's plan, stating it's "not a blank check. And the Justice League isn't a death squad," and the two fight, ending in Cyborg's defeat.

In the present day, Cyborg is incarcerated alongside Damian and Superman at Stryker's Island for his role in the Regime, and is guarded by Firestorm and Blue Beetle. During Brainiac's invasion, he is set free by Kara in their attempt to free Superman, and is sent to disable the red sun generators at Superman's prison. While doing so, however, either Firestorm or Blue Beetle faces off against him, defeating him. He then joins his fellow Regime members in outnumbering the two heroes, causing Firestorm to prepare to go nuclear. Just as Wonder Woman is about to attack Firestorm however, Batman intervenes, disarming Firestorm. Cyborg then looks on as Batman agrees that he can't defeat Brainiac alone and frees Superman.

He is among the heroes at the Justice League table as Catwoman goes over the plan. Cyborg is tasked with heading to the Batcave to bring Brother Eye back online, much to his reluctance. He is also informed that he can't Boom Tube in, as Batman has reverse-engineered his Mother Box's technology; should Cyborg try to Boom Tube into the Batcave, he'll explode. He is also informed that Catwoman and Harley Quinn will be going with him.

On the mission, the trio Boom Tube to Arkham Asylum, before being ambushed by Poison Ivy, who uses pheromones to mind-control Harley into attacking them. Either Cyborg or Catwoman defeats Harley, and soon afterwards Poison Ivy as well. While on their walk through the sewers to the Batcave, Harley expresses hope that Black Canary and Green Arrow are still alive, though Cyborg doubts it, explaining that Brainiac only takes the best and the two don't qualify. Catwoman replies that she'd still take them over him. Once the trio reach the entrance of the Batcave, he and Catwoman progress further, while Harley Quinn is left to guard the entrance. The duo are then ambushed by Deadshot and Bane, though they both emerge victorious. The duo reach a corrupted Brother Eye, where Cyborg tries to reboot Brother Eye, before being intercepted by Brainiac speaking through the monitors. Brainiac claims that Cyborg is "the pinnacle of human evolution", though he claims that Cyborg's humanity inhibits him from reaching his full potential, before freeing Grid from Cyborg's memory subsystems. Grid is defeated by either Cyborg or Catwoman. Cyborg then returns to trying to regain access to Brother Eye's neural network. Brainiac states that not even he could regain access, before Cyborg responds that he isn't trying to regain access, but rather to teach it to ignore Brainiac. It is successful and Brother Eye comes back online.

He is later seen with the other heroes after Superman's apparent death, witnessing Brainiac offer a trade: surrender Supergirl and he'll spare the Earth. They refuse to take the deal, and Cyborg suggests that they short out Brainiac's shields, leaving his ship vulnerable. Black Adam agrees and offers to channel energy from the Rock of Eternity to do so, with the Trident of Atlantis as a medium to control it. Cyborg also proposes that he create a signal disruptor that could disconnect Brainiac from his ship, similar to how Brainiac was cut off from Brother Eye earlier.

He is later seen handing the disruptor to Batman, informing him that he'll have to be within arm's reach of Brainiac for it to work. Cyborg is not seen again for the rest of the story, though Superman mentions that with Cyborg's aid he can gain control over Brainiac's ship, which Cyborg presumably helps him do in Superman's ending.


What little remains of Victor Stone's body is protected by Promethium metal shaped into a mechanical exo-skeleton, armed with advanced weaponry and constantly synced to the internet, allowing Cyborg complete and total access to all information stored on the World Wide Web. Cyborg's mechanical body affords him superhuman strength and durability high enough to trade and survive blows from Solomon Grundy, though not to overpower the zombie. Cyborg's on-board weaponry includes his trademark arm cannon, which can fire either high-decibel blasts of sound or small spheres of energy, in a single burst or rapid fire. Cyborg also possesses a large number of missiles for long-range attacks, which he can fire from a launcher on his back or from his shoulders. Cyborg also contains a built-in Boom Tube that allows him instantaneous teleportation from one location to another.

Though his arsenal is impressive, Cyborg's real talent is his computer skills. Victor can navigate and coordinate massive strikes through his natural connection to the web, and hack through almost any security system and take complete control of it himself, so long as he remains conscious during the takeover. If he is interrupted or hacked himself during the attempt, the feedback can knock him unconscious.

Repair Circuit: Cyborg's character trait is the ability to regenerate health. The longer the button is held, the more health Cyborg regenerates.

Injustice: Gods Among Us - Cyborg's Ending

After Superman's defeat, Cyborg led the assault on the Fortress of Solitude to flush out remnants of the High Councilor's regime. The Fortress was well defended, the battle intense. Cyborg was forced to use unfamiliar Kryptonian tools to make repairs to his damaged cybernetics. Enhanced with the alien technology, Cyborg found he could communicate with Superman's androids and order them to apprehend the opposition. With his army of super androids, Cyborg will bring justice to the world.

Injustice 2 Cyborg's Ending

Brainiac thought he had me all figured out. Said my humanity made me weak. But fighting for humanity gave me the strength to body that punk-ass Coluan. And before he dropped, I took a few things... His twelfth-level intellect and his ship's data core. I thought the Internet was gigantic. But now? I've got the whole wide universe at my fingertips. First up, I put back every Earth city Brainiac stole, starting with my hometown, Dakota City! Then I keep going... Superman wants to secure one world, but I can reboot tens of thousands! Every last one in Brainiac's Collection. Gonna be a long trip. But another benefit of my new twelfth-level intellect is I can reunite with some old friends. Titans Together. Boo-yah.

Cyborg's costume consists of a metal exoskeleton, which is primarily gray and silver. Half of his face and the underside of his arms are left bare. He has a red glowing circle in the middle of his chest and he can transform his exoskeleton into different weapons at will.

Cyborg has a more advanced robotic exoskeleton which features more plated armor, with two red wires connecting his arms to his back. The only human part of his body is the right side of his face, which is bald.

Cyborg's metal exoskeleton combines elements from both costumes in Injustice, being sleeker in appearance but featuring more armored parts. The exoskeleton is colored white and dark gray, with his symbol being featured on his chest.

View original post here:

Cyborg - Injustice:Gods Among Us Wiki

Gene therapy – About – Mayo Clinic

Overview

Gene therapy involves altering the genes inside your body's cells in an effort to treat or stop disease.

Genes contain your DNA, the code that controls much of your body's form and function, from making you grow taller to regulating your body systems. Genes that don't work properly can cause disease.

Gene therapy replaces a faulty gene or adds a new gene in an attempt to cure disease or improve your body's ability to fight disease. Gene therapy holds promise for treating a wide range of diseases, such as cancer, cystic fibrosis, heart disease, diabetes, hemophilia and AIDS.

Researchers are still studying how and when to use gene therapy. Currently, in the United States, gene therapy is available only as part of a clinical trial.

Gene therapy is used to correct defective genes in order to cure a disease or help your body better fight disease.

Researchers are investigating several ways to do this, including:

Gene therapy has some potential risks. A gene can't easily be inserted directly into your cells. Rather, it usually has to be delivered using a carrier, called a vector.

The most common gene therapy vectors are viruses because they can recognize certain cells and carry genetic material into the cells' genes. Researchers remove the original disease-causing genes from the viruses, replacing them with the genes needed to stop disease.

This technique presents the following risks:

The gene therapy clinical trials underway in the U.S. are closely monitored by the Food and Drug Administration and the National Institutes of Health to ensure that patient safety issues are a top priority during research.

Currently, the only way for you to receive gene therapy is to participate in a clinical trial. Clinical trials are research studies that help doctors determine whether a gene therapy approach is safe for people. They also help doctors understand the effects of gene therapy on the body.

Your specific procedure will depend on the disease you have and the type of gene therapy being used.

For example, in one type of gene therapy:

Viruses aren't the only vectors that can be used to carry altered genes into your body's cells. Other vectors being studied in clinical trials include:

The possibilities of gene therapy hold much promise. Clinical trials of gene therapy in people have shown some success in treating certain diseases, such as:

But several significant barriers stand in the way of gene therapy becoming a reliable form of treatment, including:

Gene therapy continues to be a very important and active area of research aimed at developing new, effective treatments for a variety of diseases.

Explore Mayo Clinic studies testing new treatments, interventions and tests as a means to prevent, detect, treat or manage this disease.

Dec. 29, 2017

Read more here:

Gene therapy - About - Mayo Clinic

NEO Coin and Its Applications: Places Where You Can Use NEO Coin

What Is NEO Coin?
NEO coin is the native currency of the blockchain platform NEO. It was the first decentralized, open-source cryptocurrency and blockchain platform launched in China.

NEO uses blockchain technology to automate the management of digital assets using smart contracts, à la Ethereum. Not surprisingly, it is often referred to as "Chinese Ethereum."

The primary NEO coin applications are to facilitate smart contracts and to become a digital, decentralized, and distributed representative of non-digital assets. Simply put, NEO makes.

The post NEO Coin and Its Applications: Places Where You Can Use NEO Coin appeared first on Profit Confidential.

Follow this link:
NEO Coin and Its Applications: Places Where You Can Use NEO Coin

Yudkowsky – Staring into the Singularity 1.2.5

This document has been marked as wrong, obsolete, deprecated by an improved version, or just plain old.

The address of this document is http://sysopmind.com/singularity.html. If you found it elsewhere, please visit the foregoing link for the most recent version.

Computing speed doubles every two years.
Computing speed doubles every two years of work.
Computing speed doubles every two subjective years of work.

Two years after Artificial Intelligences reach human equivalence, their speed doubles. One year later, their speed doubles again.

Six months - three months - 1.5 months ... Singularity.

Plug in the numbers for current computing speeds, the current doubling time, and an estimate for the raw processing power of the human brain, and the numbers match in: 2021.

But personally, I'd like to do it sooner.

It began three and a half billion years ago in a pool of muck, when a molecule made a copy of itself and so became the ultimate ancestor of all earthly life.

It began four million years ago, when brain volumes began climbing rapidly in the hominid line.

Fifty thousand years ago with the rise of Homo sapiens sapiens.
Ten thousand years ago with the invention of civilization.
Five hundred years ago with the invention of the printing press.
Fifty years ago with the invention of the computer.

In less than thirty years, it will end.

At some point in the near future, someone will come up with a method of increasing the maximum intelligence on the planet - either coding a true Artificial Intelligence or enhancing human intelligence. An enhanced human would be better at thinking up ways of enhancing humans; would have an "increased capacity for invention". What would this increased ability be directed at? Creating the next generation of enhanced humans.

And what would those doubly enhanced minds do? Research methods on triply enhanced humans, or build AI minds operating at computer speeds. And an AI would be able to reprogram itself, directly, to run faster - or smarter. And then our crystal ball explodes, "life as we know it" is over, and everything we know goes out the window.

A civilization with high technology is unstable; it ends when the species destroys itself or improves on itself. If the current trends continue - if we don't run up against some unexpected theoretical cap on intelligence, or turn the Earth into a radioactive wasteland, or bury the planet under a tidal wave of voracious self-reproducing nanodevices - the Singularity is inevitable. The most-quoted estimate for the Singularity is 2035 - within your lifetime! - although many, myself included, think that the Singularity may occur substantially sooner.

Some terminology, due to Vernor Vinge's Hugo-winning A Fire Upon The Deep:

Power - An entity from beyond the Singularity.
Transcend, Transcended, Transcendence - The act of reprogramming oneself to be smarter, reprogramming (with one's new intelligence) to be smarter still, and so on ad Singularitum. The "Transcend" is the metaphorical area where the Powers live.
Beyond - The grey area between being human and being a Power; the domain inhabited by entities smarter than human, but not possessing the technology to reprogram themselves directly and Transcend.

"I imagine bugs and girls have a dim perception that Nature playeda cruel trick on them, but they lack the intelligence to really comprehendits magnitude."-- Calvin and Hobbes

But why should the Powers be so much more than we are now?Why not assume that we'll get a little smarter, and that's it?

Consider the sequence 1, 2, 4, 8, 16, 32. Consider the iteration of F(x) = (x + x). Every couple of years, computer performance doubles. (1) That is the demonstrated rate of improvement as overseen by constant, unenhanced minds - progress according to mortals.

Right now the amount of networked silicon computing power on the planet is slightly above the power of a human brain. The power of a human brain is 10^17 ops/sec, or one hundred million billion operations per second (2), versus a billion or so computers on the Internet with somewhere between 100 million ops/sec and 1 billion ops/sec apiece. The total amount of computing power on the planet is the amount of power in a human brain, 10^17 ops/sec, multiplied by the number of humans, presently six billion or 6x10^9. The amount of artificial computing power is so small as to be irrelevant, not because there are so many humans, but because of the sheer raw power of a single human brain.
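For readers who want to check the arithmetic, here is a minimal Python sketch of the comparison in the paragraph above. Every figure is one of the essay's own round estimates (or a midpoint of its stated range), not measured data.

brain_ops = 1e17                     # the essay's estimate for one human brain, ops/sec
humans = 6e9                         # world population as quoted above
computers = 1e9                      # "a billion or so computers on the Internet"
ops_per_computer = 5e8               # midpoint of the quoted 10^8 .. 10^9 range

total_biological = brain_ops * humans          # about 6e26 ops/sec
total_silicon = computers * ops_per_computer   # about 5e17 ops/sec

print(f"all human brains:    {total_biological:.1e} ops/sec")
print(f"all networked chips: {total_silicon:.1e} ops/sec")
print(f"silicon, in brains:  {total_silicon / brain_ops:.1f}")

The last line is the point of the paragraph: under these assumptions the planet's networked silicon amounts to a handful of brains, while the biological total is roughly nine orders of magnitude larger.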

At the old rate of progress, when the original Singularity calculations were performed in 1988 (3), computers were expected to reach human-equivalent levels - 10^17 floating-point operations per second, or one hundred petaflops - at around 2035. But at that rate of progress, one-teraflops machines were expected in 2000; as it turned out, one-teraflops machines were around in 1996, when this document was first written. In 1998 the top speed was 3.2 teraflops, and in 1999 IBM announced the Blue Gene project to build a petaflops machine by 2005. So the old estimates may be a little conservative.
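The extrapolation itself is a one-line calculation. The sketch below assumes, as illustrative inputs, the one-teraflops machine of 1996 mentioned above and a doubling time of either eighteen months or two years; the first case lands close to the 2021 figure quoted near the start of this piece.

import math

def human_equivalence_year(year, ops_per_sec, doubling_time_years,
                           brain_ops_per_sec=1e17):
    # How many doublings separate the given machine from the brain estimate,
    # and in what year do that many doublings complete?
    doublings_needed = math.log2(brain_ops_per_sec / ops_per_sec)
    return year + doublings_needed * doubling_time_years

print(human_equivalence_year(1996, 1e12, 1.5))   # roughly 2021
print(human_equivalence_year(1996, 1e12, 2.0))   # roughly 2029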

Once we have human-equivalent computers, the amount of computing power on the planet is equal to the number of humans plus the number of computers. The amount of intelligence available takes a huge jump. Ten years later, humans become a vanishing quantity in the equation.

That doubling sequence is actually a pessimistic projection, because it assumes that computing power continues to double at the same rate. But why? Computer speeds don't double due to some inexorable physical law, but because researchers and engineers find ways to make faster chips. If some of the researchers and engineers are themselves computers...

A group of human-equivalent computers spends 2 years to double computer speeds. Then they spend another 2 subjective years, or 1 year in human terms, to double it again. Then they spend another 2 subjective years, or six months, to double it again. After four years total, the computing power goes to infinity.

That is the "Transcended" version of the doubling sequence. Let'scall the "Transcend" of a sequence {a0, a1, a2...}the function where the interval between an and an+1is inversely proportional to an. (4).So a Transcended doubling function starts with 1, in which case it takes1 time-unit to go to 2. Then it takes 1/2 time-units to go to 4.Then it takes 1/4 time-units to go to 8. This function, if it werecontinuous, would be the hyperbolic function y = 2/(2 - x). Whenx= 2, then (2 - x) = 0 and y = infinity. Thebehavior at that point is known mathematically as a singularity.

And the Transcended doubling sequence is also a pessimistic projection, not a Singularity at all, because it assumes that only speed is enhanced. What if the quality of thought were enhanced? Right now, two years of work - well, these days, eighteen months of work. Eighteen subjective months of work suffices to double computing speeds. Shouldn't this improve a bit with thought-sharing and eidetic memories? Shouldn't this improve if, say, the total sum of human scientific knowledge is stored in predigested, cognitive, ready-to-think format? Shouldn't this improve with short-term memories capable of holding the whole of human knowledge? A human-equivalent AI isn't "equivalent" - if Kasparov had had even the smallest, meanest automatic chess-playing program integrated solidly with his intuitions, he would have beat Deep Blue into a pulp. That's The AI Advantage: Simple tasks carried out at blinding speeds and without error, conscious tasks carried out with perfect memory and total self-awareness.

I haven't even started on the subject of AIs redesigning their cognitive architectures, although they'll have a far easier time of it than we would - especially if they can make backups. Transcended doubling might run up against the laws of physics before reaching infinity... but even the laws of physics as now understood would allow one gram (more or less) to store and run the entire human race at a million subjective years per second. (5).

Let's take a deep breath and think about that for a moment. One gram. The entire human race. One million years per second. That means, using only this planetary mass for computing power, it would be possible to support more people than the entire Universe could support if biological humans colonized every single planet. It means that, in a single day, a civilization could live over 80 billion years, several times older than the age of the Universe to date.

The peculiar thing is that most people who talk about "the laws of physics" setting hard limits on Powers would never even dream of setting the same limits on a (merely) galaxy-spanning civilization of (normal) humans a (brief) billion years old. Part of that is simply a cultural convention of science fiction; interstellar civilizations can break any physical law they please, because the readers are used to it. But part of that is because scientists and science-fiction authors have been taught, so many times, that Ultimate Unbreakable Limits usually fall to human ingenuity and a few generations of time. Nobody dares say what might be possible a billion years from now because that is a simply unimaginable amount of time.

We know that change crept at a snail's pace a mere millennium ago, and that even a hundred years ago it would have been impossible to place correct limits on the ultimate power of technology. We know that the past could never have placed limits on the present, and so we don't try to place limits on the future. But with transhumans, the analogy is not to Lord Kelvin, nor Aristotle, nor to a hunter-gatherer - all of whom had human intelligence - but to a Neanderthal. With Powers, to a fish. And yet, because the power of higher intelligence is not as publicly recognized as the power of a few million years - because we have no history of naysayers being embarrassed by transhumans instead of mere time - some of us still sit, grunting around the fire, setting ultimate limits on the sharpness of spears; some of us still swim about, unblinking, unable to engage in abstract thought, but knowing that the entire Universe is, must be, wet.

To convey the rate of progress driven by smarter researchers, I needed to invent a function more complex than the doubling function used above. We'll call this new function T(n). You can think of T(n) as representing the largest number conceivable to someone with an n-neuron brain. More formally, T(n) is defined as the longest block of 1s produced by any halting n-state Turing Machine acting on an initially blank tape. If you're familiar with computers but not Turing Machines, consider T(n) to be the largest number that can be produced by a computer program with n instructions. Or, if you're an information theorist, think of T(n) as the inverse function of complexity; it produces the largest number with complexity n or less.
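To make the definition concrete, here is a minimal Turing-machine simulator in Python. The transition table is the classic two-state, two-symbol "busy beaver" champion, which writes four 1s and halts after six steps; note that the exact small values of such functions depend on counting conventions (whether the halt state counts as a state, whether one counts all 1s or only the longest block), so they need not match the essay's own small figures below.

def run_turing_machine(table, max_steps=10_000):
    # tape is a sparse dict of position -> symbol; unwritten cells read as 0
    tape, pos, state, steps = {}, 0, "A", 0
    while state != "HALT" and steps < max_steps:
        symbol = tape.get(pos, 0)
        write, move, state = table[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return sum(tape.values()), steps

busy_beaver_2 = {
    ("A", 0): (1, "R", "B"),
    ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"),
    ("B", 1): (1, "R", "HALT"),
}

ones, steps = run_turing_machine(busy_beaver_2)
print(f"{ones} ones written, halted after {steps} steps")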

The sequence produced by iterating T(n), S{n} = T(S{n - 1}), is constant for very low values of n. S{0} is defined to be 0; a program of length zero produces no output. This corresponds to a Universe empty of intelligence. T(1) = 1. This corresponds to an intelligence not capable of enhancing itself; this corresponds to where we are now. T(2) = 3. Here begins the leap into the Abyss. Once this function increases at all, it immediately tapdances off the brink of the knowable. T(3) = 6? T(6) = 64?

T(64) = vastly more than 10^80, the number of atoms in the Universe. T(10^80) is something that only a Transcendent entity will ever be able to calculate, and that only if Transcendent entities can create new Universes, maybe even new laws of physics, to supply the necessary computing power. Even T(64) will probably never be known to any strictly human being.

Now take the Transcended version of S{n}, starting at 2. Half a time-unit later, we have 3. A third of a time-unit after that, 6. A sixth later - one whole unit after this function started - we have 64. A sixty-fourth later, 10^80. An unimaginably tiny fraction of a second later... Singularity.

Is S{n} really a good model of the Singularity? Of course not. "Good model of the Singularity" is an oxymoron; that's the whole point; the Singularity will outrun any model a human could have formulated a hundred years ago, and the Singularity will outrun any model we formulate today. (6)

The main objection, though, would be that S{n} is an ungrounded metaphor. The Transcended doubling sequence models faster researchers. It's easy to say that S{n} models smarter researchers, but what does smarter actually mean in this context?

Smartness is the measure of what you see as obvious, what you can see as obvious in retrospect, what you can invent, and what you can comprehend. To be more precise about it, smartness is the measure of your semantic primitives (what is simple in retrospect), the way in which you manipulate the semantic primitives (what is obvious), the structures your semantic primitives can form (what you can comprehend), and the way you can manipulate those structures (what you can invent). If you speak complexity theory, the difference between obvious and obvious in retrospect, or inventable and comprehensible, is like the difference between NP and P.

All humans who have not suffered neural injuries have the same semantic primitives. What is obvious in retrospect to one is obvious in retrospect to all. (Four notes: First, by "neural injuries" I do not mean anything derogatory - it's just that a person missing the visual cortex will not have visual semantic primitives. If certain neural pathways are severed, people not only lose their ability to see colors; they lose their ability to remember or imagine colors. Second, theorems in math may be obvious in retrospect only to mathematicians - but anyone else who acquired the skill would have the ability to see it. Third, to some extent what we speak of as obvious involves not just the symbolic primitives but very short links between them. I am counting the primitive link types as being included under "semantic primitives". When we look at a thought-sequence and see it as being obvious in retrospect, it is not necessarily a single semantic primitive, but is composed of a very short chain of semantic primitives and link types. Fourth, I apologize for my tendency to dissect my own metaphors; I really can't help it.)

Similarly, the human cognitive architecture is universal. We all have the same sorts of underlying mindstuff. Though the nature of this mindstuff is not necessarily known, our ability to communicate with each other indicates that, whatever we are communicating, it is the same on both sides. If any two humans share a set of concepts, any structure composed of those concepts that is understood by one will be understood by the other.

Different humans may have different degrees of the ability to manipulate and structure concepts; different humans may see and invent different things. The great breakthroughs of physics and engineering did not occur because a group of people plodded and plodded and plodded for generations until they found an explanation so complex, a string of ideas so long, that only time could invent it. Relativity and quantum physics and buckyballs and object-oriented programming all happened because someone put together a short, simple, elegant semantic structure in a way that nobody had ever thought of before. Being a little bit smarter is where revolutions come from. Not time. Not hard work. Although hard work and time were usually necessary, others had worked far harder and longer without result. The essence of revolution is raw smartness.

Now think about the Singularity. Think about a chimpanzee trying to understand integral calculus. Think about the people with damaged visual neurology who cannot remember what it was like to see, who cannot imagine the color red or visualize two-dimensional structures. Think about a visual cortex with trillions of times as many neuron-equivalents. Think about twenty thousand distinct colors in the rainbow, none a shade of any other. Think about rotating fifty-dimensional objects. Think about attaching semantic primitives to the pixels, so that one could see a rainbow of ideas in the same way that we see a rainbow of colors.

Our semantic primitives even determine what we can know. Why does anything exist at all? Nobody knows. And yet the answer is obvious. The First Cause must be obvious. It has to be obvious to Nothing, present in the absence of anything else, a substance formed from -blank-, a conclusion derived without data or initial assumptions. What is it that evokes conscious experience, the stuff that minds are made of? We are made of conscious experiences. There is nothing we experience more directly. How does it work? We don't have a clue. Two and a half millennia of trying to solve it and nothing to show for it but "I think therefore I am." The solutions seem to be necessarily simple, yet are demonstrably imperceptible. Perhaps the solutions operate outside the representations that can be formed with the human brain.

If so, then our descendants, successors, future selves will figure out the semantic primitives necessary and alter themselves to perceive them. The Powers will dissect the Universe and the Reality until they understand why anything exists at all, analyze neurons until they understand qualia. And that will only be the beginning. It won't end there. Why should there be only two hard problems? After all, if not for humans, the Universe would apparently contain only one hard problem, for how could a non-conscious thinker formulate the hard problem of consciousness? Might there be states of existence beyond mere consciousness - transsentience? Might solving the nature of reality create the ability to create new Universes, manipulate the laws of physics, even alter the kind of things that can be real - "ontotechnology"? That's what the Singularity is all about.

So before you talk about life as a Power or the Utopia to come - a favorite pastime of transhumanists and Extropians is to discuss the problems of uploading, life after being uploaded, and so on - just remember that you probably have a much better chance of solving both hard problems than you do of making a valid statement about the future. This goes for me too. I'll stand by everything I said about humans, including our inability to understand certain things, but everything I said about the Powers is almost certainly wrong. "They'll figure out the semantic primitives necessary and alter themselves to perceive them." Wrong. "Figure out." "Semantic primitives." "Alter." "Perceive." I would bet on all of these terms becoming obsolete after the Singularity. There are better ways and I'm sure They - or It, or [sound of exploding brain] - will "find them".

I would like to introduce a unit of post-Singularity progress, the Perceptual Transcend or PT.

[Brief pause while audience collapses in helpless laughter.]

A Perceptual Transcend occurs when all things that were comprehensible become obvious in retrospect, and all things that were inventable become obvious. A Perceptual Transcend occurs when the semantic structures of one generation become the semantic primitives of the next. To put it another way, one PT from now, the whole of human knowledge becomes perceivable in a single flash of experience, in the same way that we now perceive an entire picture at once.

Computers are a PT above humans when it comes to arithmetic - sort of. While we need to manipulate an entire precarious pyramid of digits, rows and columns in order to multiply 62305 by 10358, a computer can spit out the answer - 645355190 - in a single obvious step. These computers aren't actually a PT above us at all, for two reasons. First of all, they just handle numbers up to two billion instead of 9; after that they need to manipulate pyramids too. Far more importantly, they don't notice anything about the numbers they manipulate, as humans do. If you multiply 23704 by 14223, using the wedding-cake method of multiplication, you won't multiply 23704 by 2 twice in a row; you'll just steal the results from last time. If one of the interim results is 12345 or 99999 or 314159, you'll notice that, too. The way computers manipulate numbers is actually less powerful than the way we manipulate numbers.

Would the Powers settle for less? A PT above us, multiplication is carried out automatically but with full attention to interim results, numbers that happen to be prime, and the like. If I were designing one of the first Powers - and, down at the Singularity Institute, this is what we're doing - I would create an entire subsystem for manipulating numbers, one that would pick up on primality, complexity, and all the numeric properties known to humanity. A Power would understand why 62305 times 10358 equals 645355190, with the same understanding that would be achieved by a top human mathematician who spent hours studying all the numbers involved. And at the same time, the Power will multiply the two numbers automatically.
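As a toy illustration of "multiplication with full attention to interim results", the sketch below does schoolbook digit-by-digit multiplication but keeps every partial product and checks each one for a property. Primality is used here purely as a stand-in for the richer patterns a Power might notice; for these particular inputs the flagged list happens to be empty, which is fine - the point is only that nothing gets thrown away.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def attentive_multiply(a, b):
    total, partials = 0, []
    for place, digit in enumerate(reversed(str(b))):
        partial = a * int(digit)          # one row of the "wedding cake"
        partials.append(partial)          # kept for inspection, not discarded
        total += partial * 10 ** place
    noticed = [p for p in partials if is_prime(p)]
    return total, partials, noticed

product, partials, noticed = attentive_multiply(62305, 10358)
print(product)    # 645355190, as in the text
print(partials)   # every interim result, available for inspection
print(noticed)    # the partial products that happen to be prime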

For such a Power, to whom numbers were true semantic primitives, Fermat's Last Theorem and the Goldbach Conjecture and the Riemann Hypothesis might be obvious. Somewhere in the back of its mind, the Power would test each statement with a million trials, subconsciously manipulating all the numbers involved to find why they were not the sum of two cubes or why they were the sum of two primes or why their real part was equal to one-half. From there, the Power could intuit the most basic, simple solution simply by generalizing. Perhaps human mathematicians, if they could perform the arithmetic for a thousand trials of the Riemann Hypothesis, examining every intermediate step, looking for common properties and interesting shortcuts, could intuit a formal solution. But they can't, and they certainly can't do it subconsciously, which is why the Riemann Hypothesis remains unobvious and unproven - it is a conceptual structure instead of a conceptual primitive.

Perhaps an even more thought-provoking example is provided by our visual cortex. On the surface, the visual cortex seems to be an image processor. In a modern computer graphics engine, an image is represented by a two-dimensional array of pixels (7). To rotate this image - to cite one operation - each pixel's rectangular coordinates {x, y} are converted to polar coordinates {theta, r}. All thetas, representing the angle, have a constant added. The polar coordinates are then converted back to rectangular. There are ways to optimize this process, and ways to account for intersecting and empty pixels on the new array, but the essence is clear: To perform an operation on an entire picture, perform the operation on each pixel in that picture.
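Here is a literal sketch of that rotation procedure, pixel by pixel: convert each pixel's rectangular coordinates to polar form, add a constant to theta, and convert back. (A real graphics engine would normally work backwards from destination pixels to avoid holes; the point here is only the per-pixel essence the text describes.)

import math

def rotate_pixels(pixels, width, height, angle):
    """pixels maps (x, y) -> value; rotation is about the image centre."""
    cx, cy = width / 2.0, height / 2.0
    rotated = {}
    for (x, y), value in pixels.items():
        r = math.hypot(x - cx, y - cy)                # rectangular -> polar
        theta = math.atan2(y - cy, x - cx) + angle    # add a constant to theta
        nx = int(round(cx + r * math.cos(theta)))     # polar -> rectangular
        ny = int(round(cy + r * math.sin(theta)))
        if 0 <= nx < width and 0 <= ny < height:
            rotated[(nx, ny)] = value
    return rotated

# Rotate a tiny three-pixel "image" by 90 degrees.
image = {(2, 1): 255, (3, 1): 128, (1, 1): 64}
print(rotate_pixels(image, 4, 4, math.pi / 2))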

At this point, one could say that a Perceptual Transcend depends on what level you're looking at the operation. If you view yourself as carrying out the operation pixel by pixel, it is an unimaginably tedious cognitive structure, but if you view the whole thing in a single lump, it is a cognitive primitive - a point made in Hofstadter's Ant Fugue when discussing ants and colonies. Not very exciting unless it's Hofstadter explaining it, but there's more to the visual cortex than that.

For one thing, we consciously experience redness. (If you're not sure what conscious experience a.k.a. "qualia" means, the short version is that you are not the one who speaks your thoughts, you are the one who hears your thoughts.) Qualia are the stuff making up the indescribable difference between red and green.

The term "semantic primitive" describes more than just the level atwhich symbols are discrete, compact objects. It describes the levelof conscious perception. Unlike the computer manipulating numbersformed of bits, and like the imagined Power manipulating theorems formedof numbers, we don't lose any resolution in passing from the pixel levelto the picture level. We don't suddenly perceive the idea "thereis a bear in front of me"; we see a picture of a bear, containing millionsof pixels, every one of which is consciously experienced simultaneously.A Perceptual Transcend isn't "just" the imposition of a new cognitive level;it turns the cognitive structures into consciously experienced primitives.

"To put it another way, one PT from now, the whole of human knowledgebecomes perceivable in a single flash of experience, in the same waythat we now perceive an entire picture at once."

Of course, the PT won't be used as a post-Singularity unit of progress. Even if it were initially, it won't be too long before "PT" itself is Transcended and the Powers jump out of the system yet again. After all, the Singularity is ultimately as far beyond me, the author, as it is beyond any other human, and so my PTs will be as worthless a description as the doubling sequence discarded so long ago. Even if we accept the PT as the basic unit of measure, it simply introduces a secondary Singularity. Maybe the Perceptual Transcends will occur every two consciously experienced years at first, but then will occur every conscious year, and then every conscious six months - get the picture?

It's like the "Birthday Cantatatata..." in Hofstadter'sbookGodel, Escher, Bach. Youcan start with the sequence {1, 2, 3, 4 ...} and jump out of it to w(omega), the symbol for infinity. But then one has {w, w+1, w + 2 ... }, and we jump out again to 2w. Then 3w,and 4w, and w2 andw3 and wwand w^(ww) and higher towers of w untilwe jump out to the ordinale0, which includes all exponentialtowers of ws.

The PTs may introduce a second Singularity, and a third Singularity, and a fourth, until Singularities are coming faster and faster and the first w-Singularity is imminent -

Or the Powers may simply jump beyond that system. The Birthday Cantatatata... was written by a human - admittedly Douglas Hofstadter, but still a human - and the concepts involved in it may be Transcended by the very first transhuman.

The Powers are beyond our ability to comprehend.

Get the picture?

It's hard to appreciate the Singularity properly without first appreciating really large numbers. I'm not talking about little tiny numbers, barely distinguishable from zero, like the number of atoms in the Universe or the number of years it would take a monkey to duplicate the works of Shakespeare. I invite you to consider what was, circa 1977, the largest number ever to be used in a serious mathematical proof. The proof, by Ronald L. Graham, is an upper bound to a certain question of Ramsey theory. In order to explain the proof, one must introduce a new notation, due to Donald E. Knuth in the article Coping With Finiteness. The notation is usually a small arrow, pointing upwards, here abbreviated as ^. Written as a function:

2^4 = 2 * 2 * 2 * 2 = 16.

3^^4 = 3^(3^(3^3)) = 3^(3^27) = 3^7,625,597,484,987

7^^^^3 = 7^^^(7^^^7).

3^3 = 3 * 3 * 3 = 27. This number is small enough to visualize.

3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987. Larger than 27, but so small I can actually type it. Nobody can visualize seven trillion of anything, but we can easily understand it as being on roughly the same order as, say, the gross national product.

3^^^3 = 3^^(3^^3) = 3^(3^(3^(3^...^(3^3)...))). The "..." is 7,625,597,484,987 threes long. In other words, 3^^^3 or arrow(3, 3, 3) is an exponential tower of threes 7,625,597,484,987 levels high. The number is now beyond the human ability to understand, but the procedure for producing it can be visualized. You take x=1. You let x equal 3^x. Repeat seven trillion times. While the very first stages of the number are far too large to be contained in the entire Universe, the exponential tower, written as "3^3^3^3...^3", is still so small that it could be stored on a modern supercomputer.

3^^^^3 = 3^^^(3^^^3) = 3^^(3^^(3^^...^^(3^^3)...)). Both the number and the procedure for producing it are now beyond human visualization, although the procedure can be understood. Take a number x=1. Let x equal an exponential tower of threes of height x. Repeat 3^^^3 times, where 3^^^3 equals an exponential tower seven trillion threes high.

And yet, in the words of Martin Gardner: "3^^^^3 is unimaginably larger than 3^^^3, but it is still small as finite numbers go, since most finite numbers are very much larger."

And now, Graham's number. Let x equal 3^^^^3, or the unimaginable number just described above. Let x equal 3^^^^^^^(x arrows)^^^^^^^3. Repeat 63 times, or 64 including the starting 3^^^^3.
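Knuth's arrow notation drops straight into a short recursive function. Only tiny arguments are evaluated below - anything much bigger exhausts memory almost immediately, which is rather the point - and the final comments spell out, without evaluating it, how Graham's number layers the notation.

def arrow(a, n, b):
    # a ^^...^ b with n arrows; n = 1 is ordinary exponentiation
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(2, 1, 4))   # 2^4 = 16
print(arrow(3, 1, 3))   # 3^3 = 27
print(arrow(3, 2, 3))   # 3^^3 = 3^(3^3) = 7,625,597,484,987

# Graham's construction, symbolically (do NOT call arrow() on these):
#   g1     = arrow(3, 4, 3)         i.e. 3^^^^3
#   g(k+1) = arrow(3, g(k), 3)      the previous layer becomes the arrow count
#   Graham's number = g64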

Graham's number is far beyond my ability to grasp. I can describe it, but I cannot properly appreciate it. (Perhaps Graham can appreciate it, having written a mathematical proof that uses it.) This number is far larger than most people's conception of infinity. I know that it was larger than mine. My sense of awe when I first encountered this number was beyond words. It was the sense of looking upon something so much larger than the world inside my head that my conception of the Universe was shattered and rebuilt to fit. All theologians should face a number like that, so they can properly appreciate what they invoke by talking about the "infinite" intelligence of God.

My happiness was completed when I learned that the actual answer to the Ramsey problem that gave birth to that number - rather than the upper bound - was probably six.

Why was all of this necessary, mathematical aesthetics aside? Because until you understand the hollowness of the words "infinity", "large" and "transhuman", you cannot appreciate the Singularity. Even appreciating the Singularity is as far beyond us as visualizing Graham's number is to a chimpanzee. Farther beyond us than that. No human analogies will ever be able to describe the Singularity, because we are only human.

The number above was forged of the human mind. It is nothing but a finite positive integer, though a large one. It is composite and odd, rather than prime or even; it is perfectly divisible by three. Encoded in the decimal digits of that number, by almost any encoding scheme one cares to name, are all the works ever written by the human hand, and all the works that could have been written, at a hundred thousand words per minute, over the age of the Universe raised to its own power a thousand times. And yet, if we add up all the base-ten digits the result will be divisible by nine. The number is still a finite positive integer. It may contain Universes unimaginably larger than this one, but it is still only a number. It is a number so small that the algorithm to produce it can be held in a single human mind.
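The divisibility claims follow from the construction: Graham's number is three raised to a vast power of three, so it is a multiple of nine, and any multiple of nine has a base-ten digit sum divisible by nine. A quick sanity check on the much smaller 3^^3 = 3^27 (my own illustration, not from the essay):

```python
n = 3 ** 27                              # 3^^3 = 7,625,597,484,987
digit_sum = sum(int(d) for d in str(n))  # add up the base-ten digits
print(n, digit_sum, digit_sum % 9)       # 7625597484987 81 0 -> divisible by nine
```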

The Singularity is beyond that. We cannot pigeonhole it by stating that it will be a finite positive integer. We cannot say anything at all about it, except that it will be beyond our understanding.

If you thought that Knuth's arrow notation produced some fairly large numbers, what about T(n)? How many states does a Turing machine need to implement the calculation above? What is the complexity of Graham's number, C(Graham)? Probably on the order of 100. And moreover, T(C(Graham)) is likely to be much, much larger than Graham's number. Why go through x = 3^(x ^s)^3 only 64 times? Why not 3^^^^3 times? That'd probably be easier, since we already need to generate 3^^^^3, but not 64. And with the extra space, we might even be able to introduce an even more computationally complex algorithm. In fact, Knuth's arrow notation may not be the most powerful algorithm that fits into C(Knuth) states.

T(n) is the metaphor for the growth rate of a self-enhancing entity because it conveys the concept of having additional intelligence with which to enhance oneself. I don't know when T(n) passes beyond the threshold of what human mathematicians can, in theory, calculate. Probably more than n=10 and less than n=100. The point is that after a few iterations, we wind up with T(4294967296). Now, I don't know what T(4294967296) will be equal to, but the winning Turing machine will probably generate a Power whose purpose is to think of a really large number. That's what the term "large" means.

It's all very well to talk about cognitive primitives and obviousness, but again - what does smarter mean? The meaning of smart can't be grounded in the Singularity - I haven't been there yet. So what's my practical definition?

"Of course, I never wrote the 'important' story, the sequel about the first amplified human. Once I tried something similar. John Campbell's letter of rejection began: 'Sorry - you can't write this story. Neither can anyone else.'" -- Vernor Vinge

Let's take a concrete example, the story Flowers for Algernon (later the movie Charly), by Daniel Keyes. (I'm afraid I'll have to tell you how the story comes out, but it's a Character story, not an Idea story, so that shouldn't spoil it.) Flowers for Algernon is about a neurosurgical procedure for intelligence enhancement. This procedure was first tested on a mouse, Algernon, and later on a retarded human, Charlie Gordon. The enhanced Charlie has the standard science-fictional set of superhuman characteristics; he thinks fast, learns a lifetime of knowledge in a few weeks, and discusses arcane mathematics (not shown). Then the mouse, Algernon, gets sick and dies. Charlie analyzes the enhancement procedure (not shown) and concludes that the process is basically flawed. Later, Charlie dies.

That's a science-fictional enhanced human. A real enhanced human would not have been taken by surprise. A real enhanced human would realize that any simple intelligence enhancement will be a net evolutionary disadvantage - if enhancing intelligence were a matter of a simple surgical procedure, it would have long ago occurred as a natural mutation. This goes double for a procedure that works on rats! (As far as I know, this never occurred to Keyes. I selected Flowers, out of all the famous stories of intelligence enhancement, because, for reasons of dramatic unity, this story shows what happens to be the correct outcome.)

Note that I didn't dazzle you with an abstruse technobabble explanation for Charlie's death; my explanation is two sentences long and can be understood by someone who isn't an expert in the field. It's the simplicity of smartness that's so impossible to convey in fiction, and so shocking when we encounter it in person. All that science fiction can do to show intelligence is jargon and gadgetry. A truly ultrasmart Charlie Gordon wouldn't have been taken by surprise; he would have deduced his probable fate using the above, very simple, line of reasoning. He would have accepted that probability, rearranged his priorities, and acted accordingly until his time ran out - or, more probably, figured out an equally simple and obvious-in-retrospect way to avoid his fate. If Charlie Gordon had really been ultrasmart, there would have been no story.

There are some gaps so vast that they make all problems new. Imagine whatever field you happen to be an expert in - neuroscience, programming, plumbing, whatever - and consider the gap between a novice, just approaching a problem for the first time, and an expert. Even if a thousand novices try to solve a problem and fail, there's no way to say that a single expert couldn't solve the problem casually, offhandedly. If a hundred well-educated physicists try to solve a problem and fail, an Einstein might still be able to succeed. If a thousand twelve-year-olds try for a year to solve a problem, it says nothing about whether or not an adult is likely to be able to solve the problem. If a million hunter-gatherers try to solve a problem for a century, the answer might still be obvious to any educated twenty-first-century human. And no number of chimpanzees, however long they try, could ever say anything about whether the least human moron could solve the problem without even thinking. There are some gaps so vast that they make all problems new; and some of them, such as the gap between novice and expert, or the gap between hunter-gatherer and educated citizen, are not even hardware gaps - they deal not with the magic of intelligence, but the magic of knowledge, or of lack of stupidity.

I think back to before I started studying evolutionary psychology and cognitive science. I know that I could not then have come close to predicting the course of the Singularity. "If I couldn't have gotten it right then, what makes me think I can get it right now?" I am a human, and an educated citizen, and an adult, and an expert, and a genius... but if there is even one more gap of similar magnitude remaining between myself and the Singularity, then my speculations will be no better than those of an eighteenth-century scientist.

We're all familiar with individual variations in human intelligence, distributed along the great Gaussian curve; this is the only referent most of us have for "smarter". But precisely because these variations fall within the design range of the human brain, they're nothing out of the ordinary. One of the very deep truths about the human mind is that evolution designed us to be stupid - to be blinded by ideology, to refuse to admit we're wrong, to think "the enemy" is inhuman, to be affected by peer pressure. Variations in intelligence that fall within the normal design range don't directly affect this stupidity. That's where we get the folk wisdom that intelligence doesn't imply wisdom, and within the human range this is mostly correct (8). The variations we see don't hit hard enough to make people appreciate what "smarter" means.

I am a Singularitarian because I have some small appreciation of how utterly, finally, absolutely impossible it is to think like someone even a little tiny bit smarter than you are. I know that we are all missing the obvious, every day. There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards, and some problems will suddenly move from "impossible" to "obvious". Move a substantial degree upwards, and all of them will become obvious. Move a huge distance upwards...

And I know that my picture of the Singularity will still fall short of the truth. I may not be modest, but I have my humility - if I can spot anthropomorphisms and gaping logical flaws in every alleged transhuman in every piece of science fiction, it follows that a slightly higher-order genius (never mind a real transhuman!) could read this page and laugh at my lack of imagination. Call it experience, call it humility, call it self-awareness, call it the Principle of Mediocrity; I've crossed enough gaps to believe there are more. I know, in a dim way, just how dumb I am.

I've tried to show the Beyondness of the Singularity by brute force, but it doesn't take infinite speeds and PTs and ωs to place something utterly beyond us. All it takes is a little tiny bit of edge, a bit smarter, and the Beyond stares us in the face once more. I've never been through the Singularity. I've never been to the Transcend. I just staked out an area of the Low Beyond. This page is devoted to communicating a sense of awe that comes from personal experience, and is, therefore, merely human.

From my cortex, to yours; every concept here was born of a plain old Homo sapiens - and any impression it has made on you was likewise born of a plain old Homo sapiens. Someone who has devoted a bit more thought, or someone a bit more extreme; it makes no difference. Whatever impression you got from this page has not been an accurate picture of the far future; it has, unavoidably, been an impression of me. And I am not the far future. Only a version of "Staring into the Singularity" written by an actual Power could convey experience of the actual Singularity.

Take whatever future shock this page evoked, and associate it not with the Singularity; associate it with me, the mild, quiet-spoken fellow infinitesimally different from the rest of humanity. Don't bother trying to extrapolate beyond that. You can't. Nobody can - not you, not me.

2035. Probably earlier.

Since the Internet exploded across the planet, there has been enough networked computing power for intelligence. If the Internet were properly reprogrammed, it would be enough to run a human brain, or a seed AI. On the nanotechnology side, we possess machines capable of producing arbitrary DNA sequences, and we know how to turn arbitrary DNA sequences into arbitrary proteins (9). We have machines - Atomic Force Probes - that can put single atoms anywhere we like, and which have recently [1999] been demonstrated to be capable of forming atomic bonds. Hundredth-nanometer precision positioning, atomic-scale tweezers... the news just keeps on piling up.

If we had a time machine, 100K of information from the future could specify a protein that built a device that would give us nanotechnology overnight. 100K could contain the code for a seed AI. Ever since the late 90's, the Singularity has been only a problem of software. And software is information, the magic stuff that changes at arbitrarily high speeds. As far as technology is concerned, the Singularity could happen tomorrow. One breakthrough - just one major insight - in the science of protein engineering or atomic manipulation or Artificial Intelligence, one really good day at Webmind or Zyvex, and the door to Singularity sweeps open.

Drexler has written a detailed, technical, how-to book for nanotechnology. After stalling for thirty years, AI is making a comeback. Computers are growing in power even faster than their usual, pedestrian rate of doubling in power every two years. Quate has constructed a 16-head parallel Scanning Tunnelling Probe. [Written in '96.] I'm starting to work out methods of coding a transhuman AI. [Written in '98.] The first chemical bond has been formed using an atomic-force microscope. The U.S. government has announced its intent to spend hundreds of millions of dollars on nanotechnology research. IBM has announced the Blue Gene project to achieve petaflops (10) computing power by 2005, with intent to crack the protein folding problem. The Singularity Institute for Artificial Intelligence, Inc. has been incorporated as a nonprofit with the express purpose of coding a seed AI. [Written in '00.]

The exact time of Singularity is customarily predicted by taking a trend and extrapolating it, much as The Population Bomb predicted that we'd run out of food in 1977. For example, population growth is hyperbolic. (Maybe you learned it was exponential in math class, but it's hyperbolic to a much better fit than exponential.) If that trend continues, world population reaches infinity on Aug 17, 2027, plus or minus 1.8 years. It is, of course, impossible for the human population to reach infinity. Some say that if we can create AIs, then the graph might measure sentient population instead of human population. These people are torturing the metaphor. Nobody designed the population curve to take into account developments in AI. It's just a curve, a bunch of numbers. It can't distort the future course of technology just to remain on track.
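For concreteness, hyperbolic growth means a curve of the form P(t) = C / (T - t), which reaches infinity at the finite date T. The rough two-point fit below uses the commonly cited milestones of about 3 billion people around 1960 and 6 billion in 1999 (my own illustrative inputs; the essay's "Aug 17, 2027, plus or minus 1.8 years" comes from a fuller fit) and lands in the same general neighborhood:

```python
# Hyperbolic growth P(t) = C / (T - t): solve for T and C from two milestones.
t1, p1 = 1960, 3e9   # ~3 billion people around 1960 (rough milestone)
t2, p2 = 1999, 6e9   # ~6 billion people in 1999 (rough milestone)

T = (p2 * t2 - p1 * t1) / (p2 - p1)   # date at which the curve blows up
C = p1 * (T - t1)

print(T)                                # 2038.0 for these rough inputs
for year in (2010, 2030, 2037):
    print(year, round(C / (T - year)))  # the curve explodes as year approaches T
```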

If you project on a graph the minimum size of the materials we can manipulate, it reaches the atomic level - nanotechnology - in I forget how many years (the page vanished), but I think around 2035. This, of course, was before the time of the Scanning Tunnelling Microscope and "IBM" spelled out in xenon atoms. For that matter, we now have the artificial atom ("You can make any kind of artificial atom - long, thin atoms and big, round atoms."), which has in a sense obsoleted merely molecular nanotechnology. As of '95, Drexler was giving the ballpark figure of 2015 (11). I suspect the timetable has been accelerated a bit since then. My own guess would be no later than 2010.

Similarly, computing power doubles every two years eighteen months. If we extrapolate forty thirty fifteen years ahead we find computers with as much raw power (10^17 ops/sec) as some people think humans have, arriving in 2035 2025 2015. [The previous sentence was written in 1996, revised later that year, and then revised again in 2000; hence the peculiar numbers.] Does this mean we have the software to spin minds? No. Does this mean we can program smarter people? No. Does this take into account any breakthroughs between now and then? No. Does this take into account the laws of physics? No. Is this a detailed model of all the researchers around the planet? No.
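The arithmetic behind that last revision is just doublings: fifteen years at one doubling every eighteen months is ten doublings, a factor of roughly a thousand, which is the gap the extrapolation implicitly assumes between the machines of 2000 and the 10^17 ops/sec figure. A quick sketch of the implied numbers (my own restatement of the extrapolation, not a hardware claim):

```python
target = 1e17            # ops/sec some people equate with the human brain
doubling_months = 18     # the "every eighteen months" version of Moore's Law
years_ahead = 15         # the 2000 -> 2015 extrapolation above

doublings = years_ahead * 12 / doubling_months   # 10 doublings in 15 years
implied_start = target / 2 ** doublings          # ~1e14 ops/sec implied for 2000
print(doublings, f"{implied_start:.2e}")         # 10.0 9.77e+13
```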

It's just a graph. The "amazing constancy" of Moore's Law entitles it to consideration as a thought-provoking metaphor of the future, but nothing more. The Transcended doubling sequence doesn't account for how the faster computer-based researchers can get the physical manufacturing technology for the next generation set up in picoseconds, or how they can beat the laws of physics. That's not to say that such things are impossible - it doesn't actually strike me as all that likely that modern-day physics has really reached the ultimate bottom level. Maybe there are no physical limits. The point is that Moore's Law doesn't explain how physics can be bypassed.

Mathematics can't predict when the Singularity is coming. (Well, it can, but it won't get it right.) Even the remarkably steady numbers, such as the one describing the doubling rate of computing power, (A) describe unaided human minds and (B) are speeding up, perhaps due to computer-aided design programs. Statistics may be used to predict the future, but they don't model it. What I'm trying to say here is that "2035" is just a wild guess, and it might as well be next Tuesday.

The rest is here:

Yudkowsky - Staring into the Singularity 1.2.5

Cyborg Superman – Wikipedia

Cyborg Superman is a persona that has been used by two fictional characters in the DC Universe, both of which are supervillains that appear in comic books published by DC Comics.

Hank Henshaw was an astronaut at NASA until a solar flare hit his space shuttle during an experiment in space, damaging the ship and the crew. Henshaw and the crew found that their bodies had begun to mutate and, after returning to Earth, Henshaw's entire crew (including his wife) eventually committed suicide. After learning that Superman had thrown the Eradicator into the sun in a battle during the space shuttle experiment, Henshaw blamed Superman for the solar flare and the accident. Before his body completely disintegrated due to the radiation exposure, Henshaw was able to save his consciousness. Using NASA communications equipment, Henshaw beamed his mind into the birthing matrix which had carried Superman from Krypton to Earth as an infant. He created a small exploration craft from the birthing matrix and departed into outer space alone. Becoming increasingly mentally unstable, Henshaw used Superman's birthing matrix to create a body identical to Superman's, albeit with cybernetic parts. He returned to Earth to kill Superman, only to discover that Superman had already died during Henshaw's absence. Following Superman's eventual resurrection, Henshaw would not only become a recurring adversary of Superman but of Green Lantern as well. Hank Henshaw became a member of the Sinestro Corps during the Sinestro Corps War.

Zor-El is the younger brother of Jor-El, husband of Alura, father of Supergirl, and paternal uncle of Superman. Originally, he escaped from Krypton's destruction along with the other inhabitants of Argo City. In September 2011, The New 52 rebooted DC's continuity. In this new timeline, Supergirl discovers an amnesiac Cyborg Superman living on the planet I'noxia. This turns out to be Zor-El, who was rescued from Krypton's destruction by Brainiac and reconfigured as a half-man half-machine to be his scout looking for stronger species in the universe.[1]

As Cyborg Superman, Hank Henshaw possesses the ability to control and reanimate various machines. Due to his experience with Superman's birth matrix, Henshaw now has all of Superman's powers and genetic tissue identical to the Man of Steel's. As a member of the Sinestro Corps, Henshaw has access to a power ring fueled by fear energy that allows him to create any construct he can imagine.

As Cyborg Superman, Zor-El is cybernetically enhanced with the ability to fly, fire powerful heat rays from his cybernetic eye, and project electricity from his body. Zor-El's cybernetic arm can shape shift into whatever he desires, limited only by the technology available to him at the given moment that he chooses to use this ability. Zor-El is virtually indestructible, and also has super-speed and super-strength.

DC's direct-to-DVD movie Superman: Doomsday, based on "The Death of Superman" storyline, features a variation on the Cyborg Superman character. One of the many changes is a streamlined cast which cuts the four Superman imposters, including Cyborg Superman. Elements from three of the four impostors (Hank Henshaw, Superboy, and the Eradicator), were combined into the Superman clone created by Lex Luthor in the film.[5]

British wunderkind radio producer Dirk Maggs produced a Superman radio series for BBC Radio 5 in the 1990s. When the "Death of Superman" story arc happened in the comics, Maggs presented a very faithful, though much pared down, version of the tale which featured Stuart Milligan as Clark Kent/Superman, Lorelei King as Lois Lane, and William Hootkins as Lex Luthor. Versatile American actor Kerry Shale was cast both as the villainous Hank Henshaw and as Superboy. The story arc was packaged for sale on cassette and CD as Superman: Doomsday and Beyond in the UK and as Superman Lives! in the USA.

Read the rest here:

Cyborg Superman - Wikipedia

What’s Wrong With Ethereum, Ripple, and Litecoin? — The …

Investors who had the nerve and wherewithal to invest in cryptocurrencies early in 2017 and hold throughout the year were probably handsomely rewarded. Between the beginning and end of 2017, the aggregate cryptocurrency market cap gained almost $600 billion, which works out to an increase in value of more than 3,300%. For a single asset class, it might just be the greatest 12-month return we will see in our lifetimes.

Unfortunately, 2018 hasn't looked anything like the previous record-shattering year. After hitting an all-time market cap high of $835 billion on Jan. 7, cryptocurrencies fell to $274 billion on March 17, a low that hadn't been seen since around Thanksgiving. But it's not the drop itself that's necessarily the most attention-grabbing point. Instead, it's what's leading the crypto market cap significantly lower.

Image source: Getty Images.

Most folks would probably assume bitcoin is to blame. After all, bitcoin is the largest virtual coin by market cap, and frankly it's the only one most of the public has probably heard about. While bitcoin has indeed performed poorly, it's not been the driving force behind the recent collapse in crypto prices. That credit belongs to everything not named bitcoin.

You see, the fourth quarter of the previous year absolutely belonged to cryptocurrencies not named bitcoin. Having seen bitcoin tokens explode from less than $0.01 to $10,000 per token in under eight years, speculators spent 2017 throwing darts at dozens of virtual currencies, grasping at straws for what might be "the next bitcoin." As a result, many of bitcoin's chief rivals ran circles around it last year. Ethereum, which is the second-largest cryptocurrency by market cap, increased in value by 9,383% in 2017, while Ripple and Litecoin, two other extremely popular digital currencies, surged by 35,564% and 5,260%, respectively. Meanwhile, bitcoin rose 1,364%.

This year has seen a complete reversal of the fourth-quarter 2017 trend. After hitting a high of $1,432 per Ether token on Jan. 13, Ethereum has pushed as low as $460, representing a loss in value of 68%. Ripple, which was unstoppable last year and hit an all-time high of $3.84 per XRP token on Jan. 3, has since seen its coin trade as low as $0.55 -- a decline of 86%. Even Litecoin, which in some circles is viewed as bitcoin's biggest rival, has plummeted from a peak of $375 on Dec. 19 to as low as $107 in early February.
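The decline figures above are plain peak-to-trough arithmetic; a quick check using only the prices already quoted in this article:

```python
# Peak-to-trough declines computed from the prices quoted above (USD).
declines = {
    "Ethereum": (1432, 460),    # Jan. 13 high vs. subsequent low
    "Ripple":   (3.84, 0.55),   # Jan. 3 high vs. subsequent low
    "Litecoin": (375, 107),     # Dec. 19 high vs. early-February low
}
for coin, (peak, trough) in declines.items():
    print(f"{coin}: {(peak - trough) / peak:.0%} decline")
# Ethereum: 68%, Ripple: 86%, Litecoin: 71%
```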

Comparatively, bitcoin, which made up as little as 33% of the aggregate cryptocurrency market cap in mid-January, now comprises 44.4% of the crypto market cap, according to CoinMarketCap.com. That's close to a three-month high. In short, even though bitcoin is falling, it's dropping far less than its peers.

Image source: Getty Images.

That prompts the question: What the heck is wrong with Ethereum, Ripple, and Litecoin?

Taking a broader look at bitcoin's biggest rivals, two factors stand out as being responsible for their considerable declines in recent months.

First, there's the simple fact that competition within the digital currency and blockchain space has exploded. Blockchain is the digital, distributed, and decentralized ledger often underpinning cryptocurrencies that's responsible for recording all transactions without the need for a financial intermediary, like a bank.

Last summer, there were around 900 virtual currencies that investors could buy. As of March 17, there were more than 1,650, with nearly all of them accompanied by their very own proprietary blockchain technology.

Were this not competition enough, brand-name companies have been working on developing their own blockchain technologies, some of which work independently of a virtual currency. For example, IBM (NYSE:IBM) is developing blockchain solutions for the financial services industry, as well as non-currency applications. In October 2017, IBM partnered with Stellar to use its Lumens coin as a financial intermediary in cross-border transactions in the South Pacific region processed on IBM's blockchain platform. Also, IBM and shipping giant A.P. Moller-Maersk recently announced their intention to create a separate joint venture that'll focus on developing shipping-based blockchain solutions. Such solutions could allow for real-time tracking of shipped products, as well as expedite approvals by eliminating paper from the equation.

In other words, growing competition from other cryptocurrencies and brand-name companies as a result of the low barrier to entry in developing and deploying blockchain and virtual coins is making life difficult.

Image source: Getty Images.

The other issue appears to be the proof-of-concept Catch-22 that practically every cryptocurrency is currently stuck in.

On the surface, Ethereum, Ripple, and Litecoin have done a really good job of branding their blockchain technology or tokens, and the result has been countless partnerships. The Enterprise Ethereum Alliance had 200 member organizations as of October testing a version of Ethereum's blockchain across a variety of industries. Meanwhile, Ripple landed five brand-name financial partners in under two years' time, and Litecoin's average daily transactions have been steadily climbing since founder Charlie Lee announced he'd be working full-time to further Litecoin as a medium of exchange for goods and services.

But the underlying problem for most cryptocurrencies is that their platforms are being tested in small-scale projects and demos and not in large-scale, real-world scenarios. Enterprises simply don't feel comfortable yet with the idea of switching to blockchain platforms because they haven't been proven on a large scale. Yet, they'll never be proven on a large scale until a handful of enterprises gives blockchain a chance. This Catch-22 is precisely why cryptocurrency valuations have been deflating. If businesses don't move beyond this proof-of-concept conundrum, it's possible we could see Ethereum, Ripple, and Litecoin fall further, despite each offering unique advantages over bitcoin.

Read the rest here:

What's Wrong With Ethereum, Ripple, and Litecoin? -- The ...

The Final Word: Healthcare vs. Health Care – arcadia.io

A cursory review of all the textbooks, dictionaries, style guides, and news sources in the Anglophone world would reveal a complete lack of consistency in the conventions of how healthcare/health care/health-care (h/h/h) is written. Is anyone else's mind blown that no convention has been developed for how to write about a multi-trillion-dollar industry? Mine certainly was. This is my attempt to rectify the lack of clear, well-researched direction on this subject.

If you were to look for an authoritative source on the topic, you would turn up a series of loose sets of rules and meritless rationales for conventions surrounding the veritable word cloud miasma that hovers around our industry. As such, I took to reading through the decisions handed down from the Court of Common Opinion in search of a compelling narrative for how we Anglophones the world over should free ourselves of this embarrassingly debilitating failure of language.

Frankly, this has annoyed the Internet for way too long. "Health care" is among the top 20% most-searched words on Merriam-Webster's online dictionary, and understandably so. No one is looking up "healthcare" because it's some hard, new word: people are looking up "health care" because they need to know the conventions for how to use and spell it! And as I did yesterday, most people walk away from Merriam-Webster and similar sources with their tails between their legs, depressed that they have to go through yet another day with no direction on whether they are using and writing h/h/h properly.

Michael Millenson recently tried his hand at unraveling this topic. He did a compelling investigational guest piece tracking down the history of usage and spelling for h/h/h on the blog The Doctor Weighs In. Unfortunately, at the end of the article, I'm still head-desking, because Michael joins everyone else in what I'm calling The Great Healthcare/Health Care Vacillation by not making an argument one way or another for usage and spelling.

The most developed, logical, and applicable set of conventions I have found was developed here by Deane Waldman, MD, MBA, on his blog, Medical Malprocess. His refreshing approach is that we should use both "healthcare" and "health care", each for a different purpose, because the need for specificity is so great that no single version of this word/phrase would be sufficient. Here is my interpretation of how he has parsed these words:

health care (noun)

Definition: a set of actions by a person or persons to maintain or improve the health of a patient/customer

Examples:

healthcare (noun or adjective)

Definition: a system, industry, or field that facilitates the logistics and delivery of health care for patients/consumers

Examples:

To put it more simply, Dr. Waldman writes:

Health care - two words - refers to provider actions.

Healthcare - one word - is a system.

We need the second in order to have the first.

While this is a thorough and terribly useful set of conventions, the fact remains that in the US the most commonly accepted form in professional writing is "health care" (the Associated Press feels pretty strongly about it), regardless of the word's part of speech and the concepts to which the author means to refer. My problem with this heavy-handed approach is that it flattens the language and allows the speaker and audience to discuss h/h/h with little specificity, leading to generalities about h/h/h that are not valid for the other forms of the word/phrase/concept. As such, I think that Dr. Waldman's model, which judiciously incorporates both forms, should supplant what are, in my opinion, the half-formed and barely enforced rules on how to write h/h/h.

You may be wondering why I (and others) care so much about this issue. The short answer is that "healthcare" has taken on more meaning as a closed compound word to describe the system/industry/field than is captured in the two separate words "health" and "care". "Health care" does not sufficiently capture the increasing demand for nuance and specificity in referring to topics surrounding the practice and facilitation of services to maintain or improve health. "Healthcare" represents the political, financial, historical, sociological, and social implications of a system that provides health care to the masses.

As professionals in a fast-paced and demanding field, we should hold ourselves to a high standard of precision and accuracy in our language. More than a few (by that, I mean literally 100%) of the professionals in healthcare have found themselves at some point wondering whether they are writing this word/phrase properly. I say the time has come to end the Great Healthcare/Health Care Vacillation.

It is understandable for many to feel they have neither the time nor the resources to dedicate themselves to the pursuit of grammatical perfection. However, our issue here is not simply a lack of differentiation between two words in some obscure intellectual niche. Our issue is that our entire profession, industry, and field lacks a single, unifying convention for how to portray itself to the world. There is no excuse for confusion coupled with a lack of conviction about the need and the method to address the problem.

I am not so deluded as to think this set of conventions will become common knowledge, but I can hope and pray that those of us tasked with writing about the healthcare system and the evolution of health care in practice will endeavor to establish and monitor a consistent set of conventions for something as powerful and pervasive as our health and the industry that supports it.

Original post:

The Final Word: Healthcare vs. Health Care - arcadia.io

Hidden In Plain Sight – 4 Movies That Expose The Globalist …

by Gregg Prescott, M.S., Editor, In5D.com

While there are many movies that expose the globalist agenda, four movies particularly caught my attention.

There seem to be several agendas going on simultaneously, such as the alien agenda and the New World Order agenda, but one other agenda has been shoved down our collective throats for at least 30 years: the transhumanism agenda.

The premise of transhumanism dates as far back as man's first search for the elixir of immortality and in recent years has segued into glorifying the idea of combining man with machine.

IMDb describes Chappie as:

In the near future, crime is patrolled by an oppressive mechanized police force. But now, the people are fighting back. When one police droid, Chappie, is stolen and given new programming, he becomes the first robot with the ability to think and feel for himself. As powerful, destructive forces start to see Chappie as a danger to mankind and order, they will stop at nothing to maintain the status quo and ensure that Chappie is the last of his kind.

Chappie glorifies the transhumanism agenda in conjunction with artificial intelligence, where people will soon be offered the chance to live as immortal gods in exchange for being hooked up to the matrix, which, inevitably, will make these same people perpetual, subservient slaves.

We are starting to see the beginning of this through digital tattoos, smart tattoos, ingestible RFID chips, and nanoparticle RFIDs. Globalist shill Regina Dugan, the former DARPA head who now leads advanced research for Motorola, stated, "It may be true that 10- to 20-year-olds don't want to wear a watch on their wrists, but you can be sure that they'll be far more interested in wearing an electronic tattoo, if only to piss off their parents."

For many people, The Matrix was just another science fiction movie, but for even more people, it was the movie that truly woke the masses out of their collective stupor.

IMDb: A computer hacker learns from mysterious rebels about the true nature of his reality and his role in the war against its controllers.

Thomas A. Anderson is a man living two lives. By day he is an average computer programmer and by night a hacker known as Neo. Neo has always questioned his reality, but the truth is far beyond his imagination. Neo finds himself targeted by the police when he is contacted by Morpheus, a legendary computer hacker branded a terrorist by the government. Morpheus awakens Neo to the real world, a ravaged wasteland where most of humanity has been captured by a race of machines that live off of the humans' body heat and electrochemical energy and who imprison their minds within an artificial reality known as the Matrix. As a rebel against the machines, Neo must return to the Matrix and confront the agents: super-powerful computer programs devoted to snuffing out Neo and the entire human rebellion.

More and more people are beginning to realize the many truths in this movie, which basically shows how we are living in a simulated reality while our bodies serve as an energy source for our overlords.

Similar to Chappie, transhumanism takes precedence as a means of going in and out of the matrix. While caught within the matrix, we all assume that this is real, but relatively few people question why we need to work for money, and most cannot comprehend the premise behind the question, "If there was no such thing as money, what would you be doing with your life?" We've been brainwashed for millennia about living in this false reality, constructed to keep us living in subservience, control, and conformity to a system designed to keep us living in fear as economic slaves.

When you look at it from this perspective, does it make sense to waste the majority of your life working some job that you hate for a boss who's an a*hole, only to get that one or two weeks off a year to enjoy as a vacation while you literally recharge your battery? There's a reason we look forward to the weekend: by the weekend, we are weakened.

Mark Passio does an amazing job analyzing The Matrix trilogy:

IMDb's description of Network: A television network cynically exploits a deranged former anchor's ravings and revelations about the news media for its own profit.

In the 1970s, terrorist violence is the stuff of networks' nightly news programming and the corporate structure of the UBS Television Network is changing. Meanwhile, Howard Beale, the aging UBS news anchor, has lost his once strong ratings share and so the network fires him. Beale reacts in an unexpected way. We then see how this affects the fortunes of Beale, his coworkers (Max Schumacher and Diana Christensen), and the network.

The star of the film, Howard Beale, even hinted at transhumanism:

The whole world is becoming humanoid creatures that look human, but aren't. The whole world, not just us.

The bottom line is how the nightly news influences and persuades public opinion, even through blatant lies. You'll never feel good after watching the nightly news. Why? Because when you live in the lower vibration of fear, you can be easily controlled and manipulated. The current terrorist agenda is the perfect ploy by the globalists because it's a war that can never be won. Additionally, people will gladly give up their civil liberties and freedom in exchange for perceived protection by the government to fight these non-existent entities.

David Icke calls this "Problem. Reaction. Solution," in which the government creates a problem through false flags, we react by saying the government needs to address the problem, and the government offers a solution, which ALWAYS involves the loss of civil liberties and freedom.

We are just starting to see a group of disgruntled reporters leave the industry because they do not agree with how the news is scripted or the propaganda that is being pushed by the CIA in order to influence public opinion regarding everything from how well the economy is doing to why we should start yet another war. Unfortunately, there are plenty of buffoons in search of fame and notoriety (ego) who are willing to take the places of these reporters who have left the business, and they will conform to whatever their overlords desire, even if that means hurting their friends and family by reporting lies to the masses.

John Carpenter's 1988 cult classic They Live combines an alien agenda with how the mainstream media is brainwashing the masses.

IMDb describes the movie as A drifter discovers a pair of sunglasses that allow him to wake up to the fact that aliens have taken over the Earth.

Nada, a down-on-his-luck construction worker, discovers a pair of special sunglasses. Wearing them, he is able to see the world as it really is: people being bombarded by media and government with messages like "Stay Asleep," "No Imagination," and "Submit to Authority." Even scarier is that he is able to see that some usually normal-looking people are in fact ugly aliens in charge of the massive campaign to keep humans subdued.

An intriguing part of the movie is when the aliens throw a party for their human collaborators who agree to push the alien agenda. This is very reminiscent of lobbyists who push agendas for Monsanto, Big Pharma, etc. The bottom line is that if you support the alien agenda, you will be generously compensated to keep your mouth shut. Does this sound familiar to you?

The Terminator

IMDb:

A cyborg is sent from the future on a deadly mission. He has to kill Sarah Connor, a young woman whose life will have a great significance in years to come. Sarah has only one protector, Kyle Reese, also sent from the future. The Terminator uses his exceptional intelligence and strength to find Sarah, but is there any way to stop the seemingly indestructible cyborg?

Lucy

IMDb:

It was supposed to be a simple job. All Lucy had to do was deliver a mysterious briefcase to Mr. Jang. But immediately Lucy is caught up in a nightmarish deal where she is captured and turned into a drug mule for a new and powerful synthetic drug. When the bag she is carrying inside of her stomach leaks, Lucy's body undergoes unimaginable changes that begin to unlock her mind's full potential. With her new-found powers, Lucy turns into a merciless warrior intent on getting back at her captors. She receives invaluable help from Professor Norman, the leading authority on the human mind, and French police captain Pierre Del Rio.

While it may seem like a glamorous idea to have infinite knowledge, there will be a price to pay. For example:

It's not enough to expose these agendas. One needs to be cognizant of what is being forced upon us and be willing to make decisions that are proactive, such as refusing any RFID chip implantation or simply not buying into the false promises of how great your life will be as a cyborg. By choosing artificial intelligence, there is no spiritual progression for the soul, if any part of the soul remains.

The power of thought can also create the world you want to see. Try envisioning a world without transhumanism, money or globalist agendas. Replace the negative things in this world, such as nuclear energy, gas or coal, with free energy. We have the ability RIGHT NOW to create a world where everyone can live in abundance and prosperity without the need for economic subservience.

You were born as a PERFECT soul and upon returning to the Creator, you will remain in complete perfection without the need for artificial intelligence or transhumanism.



About the Author: Gregg Prescott, M.S., is the founder and editor of In5D and BodyMindSoulSpirit. You can find his In5D Radio shows on the In5D YouTube channel. He is a visionary, author, and transformational speaker, and he promotes spiritual, metaphysical, and esoteric conferences in the United States through In5dEvents. His love and faith for humanity motivate him to work in humanity's best interests 12-15+ hours a day, 365 days a year. Please like and follow In5D on Facebook as well as BodyMindSoulSpirit on Facebook!


See the original post here:

Hidden In Plain Sight - 4 Movies That Expose The Globalist ...

Litecoin Price Prediction: 3 Great Reasons to Stay Bullish on Litecoin as Markets Tumble

Daily Litecoin News Update
Cryptocurrency markets are taking another breather, but if my analysis is correct, we may see another mini-rally over the weekend. Although erratic, crypto prices have developed somewhat of a predictable pattern, as the charts show. It seems like prices crash through mid-week only to recover some of that drop over the weekend, as if all traders are driving the market in a concerted manner. But I wouldn't read too much into it.

I'll be honest. Watching crypto prices swing back and forth on a daily basis is nauseating. So to keep my sanity intact, I try to focus on the long term. Try asking yourself: what would my Litecoins be worth a year from now?

The post Litecoin Price Prediction: 3 Great Reasons to Stay Bullish on Litecoin as Markets Tumble appeared first on Profit Confidential.

Go here to read the rest:
Litecoin Price Prediction: 3 Great Reasons to Stay Bullish on Litecoin as Markets Tumble