Fiscal Year 2013 Budget | Budget.House.Gov

Posted: June 17, 2016 at 5:03 am

The Path to Prosperity: A Blueprint for American Renewal

House Budget Committee - Fiscal Year 2013 Budget Resolution

Read Full Report

Read Facts and Summary

A Contrast in Visions

For years, both political parties have made empty promises to the American people. Unfortunately, the President refuses to take responsibility for avoiding the debt-fueled crisis before us. Instead, his policies have put us on the path to debt and decline.

The President and his party's leaders refuse to take action in the face of the most predictable economic crisis in our nation's history. The President's budget calls for more spending and more debt, while Senate Democrats have refused for over 1,000 days to pass a budget. This unserious approach to budgeting has serious consequences for American families, seniors, and the next generation.

We reject the broken politics of the past. The American people deserve real solutions and honest leadership. That's what we're delivering with our budget, The Path to Prosperity. House Republicans are advancing a plan of action for American renewal.

Our budget:

Cuts government spending to protect hardworking taxpayers;

Tackles the drivers of our debt, so our troops don't pay the price for Washington's failure to take action;

Restores economic freedom and ensures a level playing field for all by putting an end to special-interest favoritism and corporate welfare;

Reverses the President's policies that drive up gas prices, and instead promotes an all-of-the-above strategy for unlocking American energy production to help lower costs, create jobs, and reduce dependence on foreign oil;

Strengthens health and retirement security by taking power away from government bureaucrats and empowering patients instead with control over their own care;

Reforms our broken tax code to spur job creation and economic opportunity by lowering rates, closing loopholes, and putting hardworking taxpayers ahead of special interests.

At its core, this plan of action is about putting an end to empty promises from a bankrupt government and restoring the fundamental American promise: ensuring our children have more opportunity and inherit a stronger America than our parents gave us.

The FY2013 Budget Resolution:

Concurrent Resolution on the Budget for FY 2013 as Reported

The Report on the Concurrent Resolution on the Budget for FY 2013

Introduction by Chairman Ryan

Appendix I: Summary Tables

Appendix II: Reprioritizing Sequester Savings

CBO Analysis

Views and Estimates of Committees of the House

Additional Information:

A Budget Presentation - Charts

Additional Fiscal Comparisons on The Path to Prosperity

The GOP Budget and America's Future - Wall Street Journal op-ed, By Paul Ryan


Utopia (book) – Wikipedia, the free encyclopedia

Posted: at 5:01 am

Utopia (Libellus vere aureus, nec minus salutaris quam festivus, de optimo rei publicae statu deque nova insula Utopia) is a work of fiction and political philosophy by Thomas More (1478–1535) published in 1516 in Latin. The book is a frame narrative primarily depicting a fictional island society and its religious, social and political customs. Many aspects of More's description of Utopia are reminiscent of life in monasteries.[1]

The title De optimo rei publicae deque nova insula Utopia literally translates, "Of a republic's best state and of the new island Utopia". It is variously rendered On the Best State of a Republic and on the New Island of Utopia, Concerning the Highest State of the Republic and the New Island Utopia, On the Best State of a Commonwealth and on the New Island of Utopia, Concerning the Best Condition of the Commonwealth and the New Island of Utopia, On the Best Kind of a Republic and About the New Island of Utopia, About the Best State of a Commonwealth and the New Island of Utopia, etc. The original name was even longer: Libellus vere aureus, nec minus salutaris quam festivus, de optimo rei publicae statu deque nova insula Utopia. This translates, "A truly golden little book, no less beneficial than entertaining, of a republic's best state and of the new island Utopia".

"Utopia" is derived from the Greek prefix "ou-"(ou), meaning "not", and topos (), "place", with the suffix -i (-) that is typical of toponyms; hence the name literally means "nowhere", emphasizing its fictionality. In early modern English, Utopia was spelled "Utopie", which is today rendered Utopy in some editions.[2]

A common misconception holds that "Utopia" is derived from eu- (εὖ), "good", and "topos", such that it would literally translate as "good place".[3]

In English, Utopia is pronounced exactly as Eutopia (the latter word, in Greek Εὐτοπία [Eutopiā], meaning "good place", contains the prefix εὐ- [eu-], "good", with which the οὐ of Utopia has come to be confused in the French and English pronunciation).[4] This is something that More himself addresses in an addendum to his book: "Wherfore not Utopie, but rather rightely my name is Eutopie, a place of felicitie."[5]

One interpretation holds that this suggests that while Utopia might be some sort of perfected society, it is ultimately unreachable (see below).

The work begins with written correspondence between Thomas More and several people he had met on the continent: Peter Gilles, town clerk of Antwerp, and Hieronymus van Busleyden, counselor to Charles V. More chose these letters, which are communications between actual people, to further the plausibility of his fictional land. In the same spirit, these letters also include a specimen of the Utopian alphabet and its poetry. The letters also explain the lack of widespread travel to Utopia; during the first mention of the land, someone had coughed during announcement of the exact longitude and latitude. The first book tells of the traveller Raphael Hythlodaeus, to whom More is introduced in Antwerp, and it also explores the subject of how best to counsel a prince, a popular topic at the time.

The first discussions with Raphael allow him to discuss some of the modern ills affecting Europe, such as the tendency of kings to start wars and the subsequent loss of money on fruitless endeavours. He also criticises the use of execution to punish theft, saying thieves might as well murder whom they rob, to remove witnesses, if the punishment is going to be the same. He lays most of the problems of theft on the practice of enclosure (the enclosing of common land) and the subsequent poverty and starvation of people who are denied access to land because of sheep farming.

More tries to convince Raphael that he could find a good job in a royal court, advising monarchs, but Raphael says that his views are too radical and wouldn't be listened to. Raphael sees himself in the tradition of Plato: he knows that for good governance, kings must act philosophically. However, he points out that:

More seems to contemplate the duty of philosophers to work around and in real situations and, for the sake of political expediency, work within flawed systems to make them better, rather than hoping to start again from first principles.

Utopia is placed in the New World, and More links Raphael's travels with Amerigo Vespucci's real-life voyages of discovery. He suggests that Raphael is one of the 24 men whom Vespucci, in his Four Voyages of 1507, says he left for six months at Cabo Frio, Brazil. Raphael then travels further and finds the island of Utopia, where he spends five years observing the customs of the natives.

According to More, the island of Utopia is

The island was originally a peninsula, but a 15-mile-wide channel was dug by the community's founder, King Utopos, to separate it from the mainland. The island contains 54 cities. Each city is divided into four equal parts. The capital city, Amaurot, is located directly in the middle of the crescent island.

Each city has 6000 households, consisting of between 10 and 16 adults. Thirty households are grouped together and elect a Syphograntus (whom More says is now called a phylarchus). Every ten Syphogranti have an elected Traniborus (more recently called a protophylarchus) ruling over them. The 200 Syphogranti of a city elect a Prince in a secret ballot. The Prince stays for life unless he is deposed or removed for suspicion of tyranny.

People are re-distributed around the households and towns to keep numbers even. If the island suffers from overpopulation, colonies are set up on the mainland. Alternatively, the natives of the mainland are invited to be part of these Utopian colonies, but if they dislike it and no longer wish to stay they may return. In the case of underpopulation the colonists are re-called.

There is no private property on Utopia, with goods being stored in warehouses and people requesting what they need. There are also no locks on the doors of the houses, which are rotated between the citizens every ten years. Agriculture is the most important job on the island. Every person is taught it and must live in the countryside, farming for two years at a time, with women doing the same work as men. Parallel to this, every citizen must learn at least one of the other essential trades: weaving (mainly done by the women), carpentry, metalsmithing and masonry. There is deliberate simplicity about these trades; for instance, all people wear the same types of simple clothes and there are no dressmakers making fine apparel. All able-bodied citizens must work; thus unemployment is eradicated, and the length of the working day can be minimised: the people only have to work six hours a day (although many willingly work for longer). More does allow scholars in his society to become the ruling officials or priests, people picked during their primary education for their ability to learn. All other citizens are however encouraged to apply themselves to learning in their leisure time.

Slavery is a feature of Utopian life and it is reported that every household has two slaves. The slaves are either from other countries or are Utopian criminals. These criminals are weighed down with chains made out of gold. The gold is part of the community wealth of the country, and fettering criminals with it or using it for shameful things like chamber pots gives the citizens a healthy dislike of it. It also makes it difficult to steal, as it is in plain view. The wealth, though, is of little importance and is only good for buying commodities from foreign nations or bribing those nations to fight each other. Slaves are periodically released for good behaviour. Jewels are worn by children, who finally give them up as they mature.

Other significant innovations of Utopia include: a welfare state with free hospitals, euthanasia permissible by the state, priests being allowed to marry, divorce permitted, premarital sex punished by a lifetime of enforced celibacy and adultery being punished by enslavement. Meals are taken in community dining halls and the job of feeding the population is given to a different household in turn. Although all are fed the same, Raphael explains that the old and the administrators are given the best of the food. Travel on the island is only permitted with an internal passport and any people found without a passport are, on a first occasion, returned in disgrace, but after a second offence they are placed in slavery. In addition, there are no lawyers and the law is made deliberately simple, as all should understand it and not leave people in any doubt of what is right and wrong.

There are several religions on the island: moon-worshipers, sun-worshipers, planet-worshipers, ancestor-worshipers and monotheists, but each is tolerant of the others. Only atheists are despised (but allowed) in Utopia, as they are seen as representing a danger to the state: since they do not believe in any punishment or reward after this life, they have no reason to share the communistic life of Utopia, and will break the laws for their own gain. They are not banished, but are encouraged to talk out their erroneous beliefs with the priests until they are convinced of their error. Raphael says that through his teachings Christianity was beginning to take hold in Utopia. The toleration of all other religious ideas is enshrined in a universal prayer all the Utopians recite.

Wives are subject to their husbands and husbands are subject to their wives, although women are restricted to conducting household tasks for the most part. Only a few widowed women become priests. While all are trained in military arts, women confess their sins to their husbands once a month. Gambling, hunting, makeup and astrology are all discouraged in Utopia. The role allocated to women in Utopia might, however, have been seen as comparatively liberal from a contemporary point of view.

Utopians do not like to engage in war. If they feel that countries friendly to them have been wronged, they will send military aid. However, they try to capture, rather than kill, enemies. They are upset if they achieve victory through bloodshed. The main purpose of war is to achieve that which, had they achieved it already, they would not have gone to war over.

Privacy is not regarded as freedom in Utopia; taverns, ale-houses and places for private gatherings are non-existent for the effect of keeping all men in full view, so that they are obliged to behave well.

One of the most troublesome questions about Utopia is Thomas More's reason for writing it.

Most scholars see it as some kind of comment on, or criticism of, contemporary European society, for the evils of More's day are laid out in Book I and in many ways apparently solved in Book II.[7] Indeed, Utopia has many of the characteristics of satire, and there are many jokes and satirical asides, such as how honest people are in Europe, but these are usually contrasted with the simple, uncomplicated society of the Utopians.

Yet the puzzle is that some of the practices and institutions of the Utopians, such as the ease of divorce, euthanasia, and both married and female priests, seem to be polar opposites of More's beliefs and the teachings of the Catholic Church, of which he was a devout member. Another often-cited apparent contradiction is the religious toleration of Utopia contrasted with his persecution of Protestants as Lord Chancellor. Similarly, the criticism of lawyers comes from a writer who, as Lord Chancellor, was arguably the most influential lawyer in England. However, it can be answered that, as a pagan society, the Utopians had the best ethics that could be reached through reason alone, or that More changed from his early life to his later years as Lord Chancellor.[7]

One highly influential interpretation of Utopia is that of the intellectual historian Quentin Skinner.[8] He has argued that More was taking part in the Renaissance humanist debate over true nobility, and that he was writing to prove that the perfect commonwealth could not occur with private property. Crucially, his narrator Hythlodaeus embodies the Platonic view that philosophers should not get involved in politics, while his character of More holds the more pragmatic Ciceronian view; thus the society Hythlodaeus proposes is the ideal More would want. But since More saw no possibility of communism occurring, it was wiser to take a more pragmatic view. Utopia is thus More's ideal, but an unobtainable one, which explains why there are inconsistencies between the ideas in Utopia and More's practice in the real world.

Quentin Skinner's interpretation of Utopia is consistent with the speculation that Stephen Greenblatt made in The Swerve: How the World Became Modern. There, Greenblatt argued that More was under the Epicurean influence of Lucretius's On the Nature of Things and that the people who live in Utopia were an example of life with pleasure as its guiding principle.[9] Although Greenblatt acknowledged that More's insistence on the existence of an afterlife and on punishment for people holding contrary views was inconsistent with the essentially materialist view of Epicureanism, he contended that these were the minimum conditions the pious More would have considered necessary to live a happy life.[9]

Another complication comes from the Greek meanings of the names of people and places in the work. Apart from Utopia, meaning "Noplace", several other lands are mentioned: Achora meaning "Nolandia", Polyleritae meaning "Muchnonsense", Macarenses meaning "Happiland", and the river Anydrus meaning "Nowater". Raphael's last name, Hythlodaeus, means "dispenser of nonsense", surely implying that the whole of the Utopian text is 'nonsense'. Additionally, the Latin rendering of More's name, Morus, means "fool" in Greek. It is unclear whether More is simply being ironic (an in-joke for those who know Greek, seeing as the place he is talking about does not actually exist) or whether there is actually a sense of distancing of Hythlodaeus's and "Morus's" views in the text from More's own.

The name Raphael, though, may have been chosen by More to remind his readers of the archangel Raphael, who is mentioned in the Book of Tobit (3:17; 5:4, 16; 6:11, 14, 16, 18; also in chs. 7, 8, 9, 11, 12). In that book the angel guides Tobias and later cures his father of his blindness. While Hythlodaeus may suggest his words are not to be trusted, Raphael, meaning "God has healed", suggests that Raphael may be opening the eyes of the reader to what is true. The suggestion that More may have agreed with the views of Raphael is given weight by the way Raphael dressed, with "his cloak... hanging carelessly about him", a style which Roger Ascham reports that More himself was wont to adopt. Furthermore, more recent criticism has questioned the reliability of both Giles's annotations and the character of "More" in the text itself. Claims that the book only subverts Utopia and Hythlodaeus are possibly oversimplistic.

Utopia was begun while More was an envoy in Flanders in May 1515. More started by writing the introduction and the description of the society which would become the second half of the work, and on his return to England he wrote the "dialogue of counsel", completing the work in 1516. In the same year it was printed in Leuven under Erasmus's editorship, and after revisions by More it was printed in Basel in November 1518. It was not until 1551, sixteen years after More's execution, that it was first published in England as an English translation by Ralph Robinson. Gilbert Burnet's translation of 1684 is probably the most commonly cited version.

The work seems to have been popular, if misunderstood: the introduction of More's Epigrams of 1518 mentions a man who did not regard More as a good writer.

The word Utopia overtook More's short work and has been used ever since to describe any imaginary society of this kind in which many unusual ideas are contemplated. Although he may not have founded the genre of utopian and dystopian fiction, More certainly popularised it, and some of the early works which owe something to Utopia include The City of the Sun by Tommaso Campanella, Description of the Republic of Christianopolis by Johannes Valentinus Andreae, New Atlantis by Francis Bacon and Candide by Voltaire.

The politics of Utopia have been seen as influential to the ideas of Anabaptism and communism. While utopian socialism was used to describe the first concepts of socialism, later Marxist theorists tended to see the ideas as too simplistic and not grounded in realistic principles. The religious message in the work, and its uncertain, possibly satiric tone, has also alienated some theorists from the work.

An applied example of More's Utopia can be seen in Vasco de Quiroga's implemented society in Michoacán, Mexico, which was directly taken and adapted from More's work.

In the opening scene of the film A Man for All Seasons, set in an eatery before Thomas More appears, Utopia comes up in the conversation. England's priests and their alleged immorality (someone remarks that every second person born is fathered by a priest) are compared to the priests of Utopia.


How to Start Your Own Micronation

Posted: at 5:00 am

The beginning.

So you get up one morning and you do what I did: you watch the movie "The Mouse That Roared" or "Moon Over Parador" or "The Prisoner of Zenda", and you say to yourself, "Self, I wish I could have my own country. That way I could avoid that situation the other day where I almost ran over Mrs. MacGillicuddy in my rush to get my income taxes into the mail on time." Well, that's one reason. Maybe a bad one. Or maybe you're twelve and tired of having your mom compare the condition of your room to Berlin after the war. "Mom," you say, "this room is no longer your concern, because it is now an independent country. The Kingdom of Bob's Room. And I am King. So enough with harassing me about my socks on the floor." And so, as you gaze around your domain, your new nation, you say to yourself, "Now what?"

Like most people, you now abandon the idea of your own country in favor of the latest Nintendo game (for minors) or beer (for adults). But a few imaginative individuals instead forge ahead and seek to make their own path, creating their own nation, a personal Lilliput in a world of Brobdingnags.

And having decided this, you must now ask yourself, what exactly is your goal? There are many different types of micronations. Erwin S. Strauss broke down efforts to start a new country into five different categories in his book, How to Start Your Own Country:

Traditional sovereignty: Having status as a sovereign nation, including exchanging ambassadors, acceptance of passports, and membership in international organizations. This usually includes possession of actual territory (land).

Ship under flag of convenience: Ships off the coast of sovereign nations, usually as part of a money-making scheme.

Litigation: Using macronational law to press your claim to independence.

Vonu (out of sight, out of mind): Establishing your "nation" in a remote area, far from macronational authority.

Model country: A project nation designed to resemble most aspects of nationhood, without actually seeking sovereignty. Generally the definition of an online nation.

That is one way to look at it. Lars Erik Bryld, from the Sovereign Principality of Corvinia breaks it down thus, with an eye toward the seriousness of your micronational effort:

Statehood means acquisition of and complete control over a territory, and the acceptance of this sovereignty by international society.

Nationhood means a condition where a group of persons achieves a common identity as a people and the will to be identified as such.

A Political Exercise means the attempt to create a plausible and internally consistent simulation of a governmental mechanism. Though the ultimate purpose might be recreational, the emphasis is on realism.

Community means a society of like-minded individuals which in some respects does not possess the attributes of a nation as defined above.

Mostly Fun means a completely spurious vehicle of interaction as a way of entertainment. Though a governmental structure may exist, the prime purpose is to have fun.

Your Goal

So now you know the types of micronations. Now you must ask yourself, again: what is my goal with this country? Bear in mind, most micronations start out just for fun. This can change, of course, and sometimes does. All nations evolve. It is good, though, to think ahead just a little. If you want your nation to be taken seriously, humorous elements will have to be toned down, at least to some extent. So, if you start your nation out as the Republic of Buttwind, and at some point decide that you would prefer to be taken seriously, a name change might be in order. So think ahead as you build your country, no matter what your current goals might be.

Micronational Seriousness

On the matter of micronational seriousness, a few notes. This is a subject that can be quite vexing for the new micronationalist. There are several serious micronational efforts out there, and they take themselves very seriously. They tend to avoid elements in their nations that are not grounded in reality: no fictional histories, actual possession of (or at least claims to) real places rather than fictional ones, and never any "fake" citizens. All real. Their goal, in many cases, is actual independence, on some scale or another. Most will not open diplomatic relations with less-than-serious micronations, feeling that open communications with less serious nations may damage their micronation's reputation and endanger their goal of sovereignty.

It varies from micronation to micronation, but seriousness can be a real sticking point in micronational relations. As a new micronationalist, it is important not to get too annoyed when certain nations refuse to recognize yours, or even reply to your e-mails. That is their way of doing things, and you have yours. Seek nations that are at your level of seriousness, and open relations with them.

The Basics

Ok, so you have decided to start your country. While you think about where you want to go with it, we'll start with the basics. You need citizens. You need a website. For many micronations, this is most, if not all, that their country ever is: a website and a few dedicated citizens. Some micronations start without any citizens, build a website and try to lure citizens in. This is hard, since most micronationalists want their own country and don't want to share. So, I suggest you cast about among your friends and family and find your citizens first. Or, if you wish to start without citizens, build a nice website and see where it goes. Like the movie said, "if you build it, they will come", and they will. But you must have something interesting, new and exciting. There is a plethora of one-man kingdoms out there; make your nation spark interest.

As a note, your website should not be your nation. If your nation is just a website, it will not last very long and will never be taken seriously. The website should represent your nation, and can be used, if you want, as a tool for communicating your nation to the world. It is important that your nation be something beyond your website; otherwise you will have a hard time developing your nation into something interesting that will last a while.

So, now you have the idea. Get a website through one of the free hosting places, like Freewebs or Tripod. This will start you on the right track. I advise that you visit some of the websites of real nations (we call them "macronations") and see how they are designed, and what elements are included. Look at their national symbols, descriptions of their government, culture, people and so on. Get ideas from them and take your nation's website from there. You may also want to visit existing micronations, and draw ideas from them. Be careful to draw only ideas from them, and not specific items, images, text or formats. Plagiarizing from another nation's website (or any website) is very bad, and you will end up regretting it. Trust me. Sometimes, however, the owner of another website will allow you to "borrow" with permission, and usually with credit given where due. By the way, a message board or a social networking site is not acceptable for your nation's website, not if you wish to be taken seriously. The U.S. Government doesn't conduct its business via Facebook, does it? Neither should you. Get a real dedicated website.

Once you have a website, and you have ideas of what to put on it, what's next? Well, you need a flag. I chose my nation's flag from among those that already exist, in this case Sierra Leone's, and then turned it upside down. I did this so that I would have a real flag to fly outside whenever I wish, without having to sew one from scratch. What flag you choose, though, is really up to you. Your flag should represent your nation. If you never plan on actually flying it, it can look like anything. It is one of many important aspects of your nation, so think about it carefully. Of course, if you're the Grand Poobah of your nation, you can change it at will, but be careful of making too many changes too often, or else you will appear to be a flake. Rapid, frequent changes should be avoided studiously. This will require some planning for the future, but it will pay off diplomatically as your nation appears more stable and doesn't change on a whim.

On to arms. A coat of arms is nice to have, although not essential. If you have limited graphic capability, you may wish to postpone this, but you will need to think of it eventually. Let your arms represent your nation symbolically. Again, look at other nations' arms and glean your ideas from there.

Other symbols. You can have a national anthem, national bird, national animal, national food, etc. Whatever you think best represents your nation. Look around at other macro- and micronations, get some ideas. You may wish to develop a coherent "theme" for your nation, such as medieval, German or something like that. Your type of government and your culture will reflect that theme. While not essential, a theme for your nation gives it more character and style, and makes it more interesting.

Land. Your nation should possess, or at least claim, land. I know, not everyone can buy land, but everyone can claim land. It doesn't need to be solely yours; claim land that is publicly owned, like a local park, a nature reserve, that sort of thing. Go there, plant your flag, and claim the land in the name of your nation. Take some pictures, go home and load those pictures onto your website. With a few photos and a land claim, you have just given your nation and its website depth, and you have made it interesting. It changes everything and makes you a nation to be noticed, so go do it!

Activities. Sure, you just held the elections for the National Assembly, or you just had a splendid coronation of yourself as Emperor of Bob's Room - now what? More elections? Establish foreign relations with Bob's Bathroom? No, you need to get out in the sun. Go do something in the name of your nation. A nation isn't just about its government; it's about a world of other things, too. How about foreign trade? Make something and trade it for something else from your buddy down the street - voila! Foreign trade; announce it on your website. How about sports? Go play football (either kind) and announce it on your website. Science? Go explore someplace and, you guessed it, announce it on your website. Here's a thought: that public land you claimed in the preceding paragraph? Get a trash bag and go tidy it up. Take a couple of pictures, and not only put it on your website, but also send the announcement in to your local newspaper. Not only is it good for the environment, it's good publicity for your nation in the real world. And hey, how about appearing in a local parade? Most towns have a parade sometime during the year, so go ahead and get your imperial self into it! These are just some ideas, but the overriding concept is there - get away from the computer and your bedroom and get out in the world and do things with your nation!

The Government

So, now, to government. Having a government is often the point of a micronation, although not always. As you see from the definitions above, some micronations are political simulations for the purpose of practicing the workings of government. The other extreme would be an absolute monarchy, where the workings of government are in the hands of the king and how things work is based on how he wants things done. In the latter case, government plays a comparatively minor part, and your nation could compensate with an interesting culture or something like that. In the former case, government is all-important and the interaction of the government is the draw for new citizens. Culture, while present, may take a back seat.

Your type of government also affects how others see your nation. Extreme governments, such as communist or fascist ones, carry with them much psychological baggage. This can affect not only who and what kind of citizen becomes part of your nation, but also the kinds of nations that recognize yours and your overall standing in the micronational world. While communist nations can be accepted by the micronational community, for example, fascist nations generally are not, and your nation will almost certainly be an outcast from the beginning if you choose this type of government. Monarchies and republics tend to be more mainstream and garner fewer preconceived notions.

Speaking of which, acceptance by the community is an often sought-after goal, although it can be very transitory and fickle. Design your nation in a manner that makes you happy, and do it well, and you will naturally make friends. You can't make everyone happy, and anyway, that is not the point.

Which brings us to:

Diplomacy

Diplomacy, as defined by Webster, is the art or practice of conducting international relations, as in negotiating alliances, treaties, and agreements. A second part to that is tact and skill in dealing with people. Diplomacy can be a very big issue for many micronations. Often, when a new nation emerges, diplomatic relations are eagerly sought after while its internal workings are still being formalized. In this area it is important to take a close look at the nations you are seeking relations with. As I said above, serious nations may avoid relations with new nations, often refusing to open relations at all with nations less serious than they are. This can be very subjective. Seek relations with nations that seem to share similar concepts and ideals with yours. Branch out to other nations as you learn more about the hobby.

When seeking diplomatic relations, begin formally. Do not assume that the person you are speaking to is as informal as you might be; instead, assume the opposite. Use standard letter format: heading, salutation, body, closing, signature. Later, if an informal relationship develops between you and a micronationalist in another nation, informality is allowed. But communication between two nations should always be formal.

Remember, you represent your nation at all times. NATION. Not a cute little website that you call a nation. If you are going to play the game, play it right. Your purpose, whether serious or not, is to have your own country. Behave that way at all times, as if your nation were real. In this way, you will gain respect from your peers and gain greater standing in the micronational world.

Go here to read the rest:

How to Start Your Own Micronation

Posted in Micronations | Comments Off on How to Start Your Own Micronation

Micronation.org – The Micronation Site

Posted: at 5:00 am

What is a micronation? The term 'micronation' literally means "small nation". It is a neologism originating in the mid-1990s to describe the many thousands of small unrecognised state-like entities that have mostly arisen since that time. It is generally accepted that the term was invented by Robert Ben Madison. The term has since also come to be used retrospectively to refer to earlier unrecognised entities, some of which date as far back as the 19th century. Supporters of micronations use the term "macronation" for any UN-recognised sovereign nation-state.

What is Micronation.org? Micronation.org aims to be the most complete and most professional site about micronations on the Internet, as well as being home to a vibrant and diverse community of micronationalists. We have over one hundred active users at present, and that number is continuing to grow. Micronation.org at present contains MicroWiki, our professional micronational encyclopaedia, the Micronation.org Forum, and the Micronational News Agency, with plans for further content in the future.

What are some notable micronations? Throughout recent history there have been many. The two that the wider world is likely most familiar with are the Republic of Molossia and the Principality of Sealand.

Molossia

The Republic of Molossia is a North American micronation located in Dayton, Nevada, with an enclave in Southern California. One of the oldest micronations, it is the successor state to the Grand Republic of Vuldstein, founded by James Spielman and Kevin Baugh in May 1977. Vuldstein, located in Portland, Oregon, was active for a short period which lasted until the end of that year, when its King moved to another city without renouncing his throne, leading the Grand Republic into a state of inactivity. Baugh then took control of the nation, with it officially becoming Molossia in 1998. Appearing in the "Lonely Planet Guide to Home-Made Nations", Molossia is known outside micronationalism for the movie Kickassia, produced by That Guy with the Glasses, and receives several tourists every year.

Sealand

The Principality of Sealand was officially established on September 2, 1967, claiming as its territory the artificial island of Roughs Tower, a World War II-era sea fort located in the North Sea ten kilometres off the coast of Suffolk, England. Sealand is currently occupied by family members and associates of the late Paddy Roy Bates, who styled himself H.R.H. Prince Roy of Sealand. The population of the facility generally remains around five, and its inhabitable area is just over five hundred square metres.

Excerpt from:

Micronation.org - The Micronation Site

Posted in Micronations | Comments Off on Micronation.org – The Micronation Site

Welcome to FIC – Fellowship for Intentional Community

Posted: at 4:59 am

The Fellowship for Intentional Community (FIC) is a nonprofit organization dedicated to promoting cooperative culture. Thank you for visiting!

We believe that intentional communities are pioneers in sustainable living, personal and cultural transformation, and peaceful social evolution. Intentional communities include ecovillages, cohousing, residential land trusts, income-sharing communes, student co-ops, spiritual communities, and other projects where people live together on the basis of explicit common values.

Since our beginning in 1987, we have gone about our work in a number of ways, including the Communities Directory, Communities magazine, the Community Bookstore, community-focused Events, Classifieds, the Blog, and more.

Our passion is promoting cooperative culture and sustainable living. That means providing information and inspiration for those seeking community, those forming communities, those struggling with the challenges of community, and those who want to develop a greater sense of community where they are.

However community touches your life, we'll try to help you find what you're looking for. In turn, you can help us by becoming an FIC member.

Continued here:

Welcome to FIC - Fellowship for Intentional Community

Posted in Intentional Communities | Comments Off on Welcome to FIC – Fellowship for Intentional Community

Intentional community – Wikipedia, the free encyclopedia

Posted: at 4:59 am

An intentional community is a planned residential community designed from the start to have a high degree of social cohesion and teamwork. The members of an intentional community typically hold a common social, political, religious, or spiritual vision and often follow an alternative lifestyle. They typically share responsibilities and resources. Intentional communities include collective households, cohousing communities, ecovillages, monasteries, communes, survivalist retreats, kibbutzim, ashrams, and housing cooperatives. New members of an intentional community are generally selected by the community's existing membership, rather than by real-estate agents or land owners (if the land is not owned collectively by the community).

The purposes of intentional communities vary in different communities. They may include sharing resources, creating family-oriented neighborhoods, and living ecologically sustainable lifestyles, such as in ecovillages.

Some communities are secular; others have a spiritual basis. One common practice, particularly in spiritual communities, is communal meals. Typically, there is a focus on egalitarian values. Other themes are voluntary simplicity, interpersonal growth, and self-sufficiency.

Some communities provide services to disadvantaged populations, for example, war refugees, the homeless, or people with developmental disabilities. Some communities operate learning or health centers. Other communities, such as Castanea of Nashville, Tennessee, offer a safe neighborhood for those exiting rehab programs to live in. Some communities also function as mixed-income neighborhoods, to offset the harms of concentrating a single demographic in one area. Many intentional communities attempt to alleviate social injustices practiced within their area of residence. Some intentional communities are also micronations, such as Freetown Christiania.[citation needed]

Many communities have different types or levels of membership. Typically, intentional communities have a selection process which starts with someone interested in the community coming for a visit. Often prospective community members are interviewed by a selection committee of the community or in some cases by everyone in the community. Many communities have a "provisional membership" period. After a visitor has been accepted, a new member is "provisional" until they have stayed for some period (often six months or a year) and then the community re-evaluates their membership. Generally, after the provisional member has been accepted, they become a full member. In many communities, the voting privileges or community benefits for provisional members are less than those for full members.

Christian intentional communities are usually composed of those wanting to emulate the practices of the earliest believers. Using the biblical book of Acts (and, often, the Sermon on the Mount) as a model, members of these communities strive for a practical working out of their individual faith in a corporate context. These Christian intentional communities try to live out the teachings of the New Testament and practice lives of compassion and hospitality.

A survey in the 1995 edition of the Communities Directory, published by Fellowship for Intentional Community (FIC), reported that 54 percent of the communities choosing to list themselves were rural, 28 percent were urban, 10 percent had both rural and urban sites, and 8 percent did not specify.[1]

The most common form of governance in intentional communities is democratic (64 percent), with decisions made by some form of consensus decision-making or voting. A hierarchical or authoritarian structure governs 9 percent of communities, 11 percent are a combination of democratic and hierarchical structure, and 16 percent do not specify.[2] Many communities which were initially led by an individual or small group have changed in recent years to a more democratic form of governance.

Here is the original post:

Intentional community - Wikipedia, the free encyclopedia

Posted in Intentional Communities | Comments Off on Intentional community – Wikipedia, the free encyclopedia

Extropy Institute Mission

Posted: at 4:59 am

Philosophies of life rooted in centuries-old traditions contain much wisdom concerning personal, organizational, and social living. Many of us also find shortcomings in those traditions. How could they not reach some mistaken conclusions when they arose in pre-scientific times? At the same time, ancient philosophies of life have little or nothing to say about the fundamental issues confronting us as advanced technologies begin to enable us to change our identity as individuals and as humans, and as economic, cultural, and political forces change global relationships.

The Principles of Extropy first took shape in the late 1980s to outline an alternative lens through which to view the emerging and unprecedented opportunities, challenges, and dangers. The goal was and is to use current scientific understanding along with critical and creative thinking to define a small set of principles or values that could help make sense of the confusing but potentially liberating and existentially enriching capabilities opening up to humanity.

The Principles of Extropy do not specify particular beliefs, technologies, or policies. The Principles do not pretend to be a complete philosophy of life. The world does not need another totalistic dogma. The Principles of Extropy do consist of a handful of principles (or values or perspectives) that codify proactive, life-affirming and life-promoting ideals. Individuals who cannot comfortably adopt traditional value systems often find the Principles of Extropy useful as postulates to guide, inspire, and generate innovative thinking about existing and emerging fundamental personal, organizational, and social issues.

The Principles are intended to be enduring, underlying ideals and standards. At the same time, both in content and by being revised, the Principles do not claim to be eternal truths or certain truths. I invite other independent thinkers who share the agenda of acting as change agents for fostering better futures to consider the Principles of Extropy as an evolving framework of attitudes, values, and standards and as a shared vocabulary to make sense of our unconventional, secular, and life-promoting responses to the changing human condition. I also invite feedback to further refine these Principles.

Extropy: The extent of a living or organizational system's intelligence, functional order, vitality, and capacity and drive for improvement.

Extropic: Actions, qualities, or outcomes that embody or further extropy.

A Note on the Use of "Extropy"

For the sake of brevity, I will often write something like "extropy seeks…" or "extropy questions…". You can take this to mean "in so far as we act in accordance with these principles, we seek/question/study…". Extropy is not meant as a real entity or force, but only as a metaphor representing all that contributes to our flourishing. Similarly, when I use "we" you should take this to refer not to any group but to anyone who agrees with what they are reading. Rather than assuming any reader to be in full agreement with every one of these principles, this usage instead imagines a hypothetical person who has integrated the principles into their life and actions. Each reader is, of course, at liberty to reject, modify, or affirm each principle separately. What this tentative, conjectural approach to the Principles of Extropy loses in terms of compelling emotive power, it gains in terms of reasonableness and openness to innovation and improvement.

Read more here:

Extropy Institute Mission

Posted in Extropy | Comments Off on Extropy Institute Mission

Superintelligence – Wikipedia, the free encyclopedia

Posted: at 4:58 am

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world.

University of Oxford philosopher Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills."[1] The program Fritz falls short of superintelligence even though it is much better than humans at chess because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first sentient machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity to, either as a single being or as a new species, become much more powerful than humans, and to displace them.[3]

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself, a feature called "recursive self-improvement". It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.

Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)." Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
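The "seven orders of magnitude" figure is simple arithmetic; as a quick, purely illustrative check in Python:

```python
import math

# The numbers Bostrom cites: peak neuron firing rate vs. CPU clock rate.
neuron_peak_hz = 200        # ~200 Hz biological neuron
cpu_clock_hz = 2e9          # ~2 GHz modern microprocessor

orders_of_magnitude = math.log10(cpu_clock_hz / neuron_peak_hz)
print(orders_of_magnitude)  # 7.0
```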

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[10] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[13] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
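The selection gains quoted above can be reproduced with a small Monte Carlo sketch. Note the additive-genetic standard deviation of ~7.5 IQ points used here is an assumption of this sketch (it is the value consistent with the quoted ~4 and ~24.3-point gains), not a number given in the text:

```python
import random

# ASSUMPTION of this sketch: additive genetic SD of IQ, not from the text.
GENETIC_SD_IQ = 7.5

def expected_selection_gain(n_embryos, trials=5000, seed=0):
    """Mean IQ gain from picking the best of n_embryos, averaged over trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Each embryo's genetic IQ deviation is modeled as N(0, GENETIC_SD_IQ);
        # selection keeps the maximum.
        total += max(rng.gauss(0, GENETIC_SD_IQ) for _ in range(n_embryos))
    return total / trials

print(round(expected_selection_gain(2), 1))                 # roughly 4 IQ points
print(round(expected_selection_gain(1000, trials=1000), 1)) # roughly 24 IQ points
```

Under these assumptions, best-of-2 selection yields roughly 4 points and best-of-1000 roughly 24, matching the figures in the text.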

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain-computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on timescales. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[18]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft Academic Search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals:

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

It has been suggested that learning computers that rapidly become superintelligent may take unforeseen actions or that robots would out-compete humanity (one technological singularity scenario).[22] Researchers have argued that, by way of an "intelligence explosion" sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans.[23]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[24]

Eliezer Yudkowsky explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[25]

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.

Original post:

Superintelligence - Wikipedia, the free encyclopedia

Posted in Superintelligence | Comments Off on Superintelligence – Wikipedia, the free encyclopedia

How Long Before Superintelligence? – Nick Bostrom

Posted: at 4:58 am

This is if we take the retina simulation as a model. At present, however, not enough is known about the neocortex to allow us to simulate it in such an optimized way. But the knowledge might be available by 2004 to 2008 (as we shall see in the next section). What is required, if we are to get human-level AI with hardware power at this lower bound, is the ability to simulate 1000-neuron aggregates in a highly efficient way.

The extreme alternative, which is what we assumed in deriving the upper bound, is to simulate each neuron individually. There is no limit to the number of clock cycles neuroscientists can expend simulating the processes of a single neuron, but that is because their aim is to model the detailed chemical and electrodynamic processes in the nerve cell rather than to do just the minimal amount of computation necessary to replicate those features of its response function which are relevant for the total performance of the neural net. It is not known how much of the detail is contingent and inessential and how much needs to be preserved in order for the simulation to replicate the performance of the whole. It seems like a good bet, though, at least to the author, that the nodes could be strongly simplified and replaced with simple standardized elements. It appears perfectly feasible to have an intelligent neural network with any of a large variety of neuronal output functions and time delays.

It does look plausible, however, that by the time we know how to simulate an idealized neuron, and know enough about the brain's synaptic structure to put the artificial neurons together in a way that functionally mirrors how it is done in the brain, we will also be able to replace whole 1000-neuron modules with something that requires less computational power to simulate than simulating all the neurons in the module individually. We might well get all the way down to a mere 1000 instructions per neuron per second, as is implied by Moravec's estimate (10^14 ops / 10^11 neurons = 1000 operations per second per neuron). But unless we can build these modules without first building a whole brain, this optimization will only be possible after we have already developed human-equivalent artificial intelligence.

If we assume the upper bound on the computational power needed to simulate the human brain, i.e. if we assume enough power to simulate each neuron individually (10^17 ops), then Moore's law says that we will have to wait until about 2015 or 2024 (for doubling times of 12 and 18 months, respectively) before supercomputers with the requisite performance are at hand. But if by then we know how to do the simulation on the level of individual neurons, we will presumably also have figured out how to make at least some optimizations, so we could probably adjust these upper bounds a bit downwards.
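The Moore's-law extrapolation above can be sketched in a few lines. The late-1990s baseline of ~10^12 ops/s is an assumption of this sketch, chosen for illustration, not a figure taken from the paper; shifting the baseline or start year shifts the arrival dates by a year or two:

```python
import math

TARGET_OPS = 1e17     # upper bound: simulate each neuron individually
BASELINE_OPS = 1e12   # ASSUMED late-1990s supercomputer capacity
BASELINE_YEAR = 1998  # ASSUMED starting year

# Number of doublings needed, independent of the doubling period.
doublings = math.log2(TARGET_OPS / BASELINE_OPS)  # ~16.6

for doubling_months in (12, 18):
    years_to_wait = doublings * doubling_months / 12
    print(doubling_months, round(BASELINE_YEAR + years_to_wait))
```

Under these assumptions the 12-month doubling time lands around 2015 and the 18-month time in the early 2020s, consistent with the 2015/2024 range in the text.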

So far I have been talking only of processor speed, but computers also need a great deal of memory if they are to replicate the brain's performance. Throughout the history of computers, the ratio between memory and speed has remained more or less constant at about 1 byte per ops. Since a signal is transmitted along a synapse, on average, with a frequency of about 100 Hz, and since a synapse's memory capacity is probably less than 100 bytes (1 byte looks like a more reasonable estimate), it seems that speed rather than memory would be the bottleneck in brain simulations on the neuronal level. (If we instead assume that we can achieve a thousand-fold leverage in our simulation speed, as assumed in Moravec's estimate, then that would bring the requirement of speed down, perhaps, one order of magnitude below the memory requirement. But if we can optimize away three orders of magnitude on speed by simulating 1000-neuron aggregates, we will probably be able to cut away at least one order of magnitude of the memory requirement. Thus the difficulty of building enough memory may be significantly smaller, and is almost certainly not significantly greater, than the difficulty of building a processor that is fast enough. We can therefore focus on speed as the critical parameter on the hardware front.)
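The per-synapse comparison behind this "speed, not memory" argument can be made explicit. This is only a sketch, taking one operation per transmitted signal as a rough proxy for the processing a synapse demands:

```python
# Per-synapse numbers from the text.
signal_rate_hz = 100   # average transmission frequency along a synapse
memory_bytes = 1       # the text's "more reasonable" per-synapse estimate

# A historically balanced machine supplies ~1 byte of memory per op/s of
# speed, while each synapse demands ~100 ops/s per byte stored. The speed
# requirement therefore dominates by roughly two orders of magnitude.
ops_per_byte_demanded = signal_rate_hz / memory_bytes
print(ops_per_byte_demanded)  # 100.0
```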

This paper does not discuss the possibility that quantum phenomena are irreducibly involved in human cognition. Hameroff and Penrose and others have suggested that coherent quantum states may exist in the microtubules, and that the brain utilizes these phenomena to perform high-level cognitive feats. The author's opinion is that this is implausible. The controversy surrounding this issue won't be entered into here; it will simply be assumed, throughout this paper, that quantum phenomena are not functionally relevant to high-level brain modelling.

In conclusion, we can say that the hardware capacity for human-equivalent artificial intelligence will likely exist before the end of the first quarter of the next century, and may be reached as early as 2004. A corresponding capacity should be available to leading AI labs within ten years thereafter (or sooner if the potential of human-level AI and superintelligence is by then better appreciated by funding agencies).

Notes

It is possible to nit-pick on this estimate. For example, there is some evidence that a limited amount of communication between nerve cells is possible without synaptic transmission. And we have the regulatory mechanisms consisting of neurotransmitters and their sources, receptors and re-uptake channels. While neurotransmitter balances are crucially important for the proper functioning of the human brain, they have an insignificant information content compared to the synaptic structure. Perhaps a more serious point is that neurons often have rather complex time-integration properties (Koch 1997). Whether a specific set of synaptic inputs results in the firing of a neuron depends on their exact timing. The author's opinion is that, except possibly for a small number of special applications such as auditory stereo perception, the temporal properties of the neurons can easily be accommodated with a time resolution of the simulation on the order of 1 ms. In an unoptimized simulation this would add an order of magnitude to the estimate given above, where we assumed a temporal resolution of 10 ms, corresponding to an average firing rate of 100 Hz. However, the other values on which the estimate was based appear to be too high rather than too low, so we should not change the estimate much to allow for possible fine-grained time-integration effects in a neuron's dendritic tree. (Note that even if we were to adjust our estimate upward by an order of magnitude, this would merely add three to five years to the predicted upper bound on when human-equivalent hardware arrives. The lower bound, which is based on Moravec's estimate, would remain unchanged.)

Software via the bottom-up approach

Superintelligence requires software as well as hardware. There are several approaches to the software problem, varying in the amount of top-down direction they require. At one extreme we have systems like CYC, a very large encyclopedia-like knowledge base and inference engine that has been spoon-fed facts, rules of thumb and heuristics for over a decade by a team of human knowledge enterers. While systems like CYC might be good for certain practical tasks, this hardly seems like an approach that will convince AI skeptics that superintelligence might well happen in the foreseeable future. We have to look at paradigms that require less human input, ones that make more use of bottom-up methods.

Given sufficient hardware and the right sort of programming, we could make machines learn in the same way a child does, i.e. by interacting with human adults and other objects in the environment. The learning mechanisms used by the brain are currently not completely understood. Artificial neural networks in real-world applications today are usually trained through some variant of the Backpropagation algorithm (which is known to be biologically unrealistic). The Backpropagation algorithm works fine for smallish networks (of up to a few thousand neurons) but it doesn't scale well. The time it takes to train a network tends to increase dramatically with the number of neurons it contains. Another limitation of Backpropagation is that it is a form of supervised learning, requiring that signed error terms for each output neuron be specified during learning. It is not clear how such detailed performance feedback on the level of individual neurons could be provided in real-world situations except for certain well-defined specialized tasks.
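The "signed error term for each output neuron" can be seen in a minimal sketch of Backpropagation. The network size, data (XOR), learning rate, and iteration count below are all illustrative choices, not anything specified in the text.

```python
import numpy as np

# Minimal supervised backpropagation on a tiny 2-4-1 sigmoid network.
# Note that the update requires a signed error (T - Y) for every output
# neuron on every example -- the detailed feedback discussed above.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])              # XOR targets

W1 = rng.normal(scale=1.0, size=(2, 4))             # input -> hidden weights
W2 = rng.normal(scale=1.0, size=(4, 1))             # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    H = sigmoid(X @ W1)
    return H, sigmoid(H @ W2)

_, Y0 = forward(X)
initial_loss = float(np.mean((T - Y0) ** 2))

for _ in range(10000):
    H, Y = forward(X)
    err_out = T - Y                                 # signed error, per output neuron
    delta2 = err_out * Y * (1 - Y)                  # backpropagate through sigmoid
    delta1 = (delta2 @ W2.T) * H * (1 - H)
    W2 += 0.5 * H.T @ delta2                        # gradient step on squared error
    W1 += 0.5 * X.T @ delta1

_, Y = forward(X)
final_loss = float(np.mean((T - Y) ** 2))
print(final_loss < initial_loss)                    # True: the error shrinks
```

Scaling this loop to millions of neurons is exactly where the training-time blow-up described above sets in.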

A biologically more realistic learning mode is the Hebbian algorithm. Hebbian learning is unsupervised, and it might also have better scaling properties than Backpropagation. However, it has yet to be explained how Hebbian learning by itself could produce all the forms of learning and adaptation of which the human brain is capable (such as the storage of structured representations in long-term memory - Bostrom 1996). Presumably, Hebb's rule would at least need to be supplemented with reward-induced learning (Morillo 1992) and maybe with other learning modes that are yet to be discovered. It does seem plausible, though, to assume that only a very limited set of different learning rules (maybe as few as two or three) are operating in the human brain. And we are not very far from knowing what these rules are.
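The contrast with Backpropagation is visible in a sketch of the plain Hebbian rule: the weight change depends only on the activities of the two neurons a synapse connects, with no error signal at all. The input statistics, learning rate, and the normalization step (in the spirit of Oja's rule, to keep the weights bounded) are illustrative choices.

```python
import numpy as np

# Unsupervised, purely local Hebbian learning for one postsynaptic neuron.
rng = np.random.default_rng(1)
lr = 0.01
w = rng.normal(scale=0.1, size=3)      # small random initial weights

for _ in range(2000):
    x = rng.normal(size=3)             # presynaptic activities
    x[0] = abs(x[0]) + 1.0             # input 0 is strongly, consistently active
    y = float(w @ x)                   # postsynaptic activity (linear unit)
    w += lr * y * x                    # Hebb's rule: delta_w = lr * y * x
    w /= np.linalg.norm(w)             # normalization keeps |w| = 1

print(np.argmax(np.abs(w)))            # 0 -- the consistently driven input wins
```

The neuron ends up tuned to the most strongly correlated input without any teacher, which illustrates both the appeal of the rule and its limitation: nothing in the update tells the neuron what it *ought* to learn.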

Creating superintelligence through imitating the functioning of the human brain requires two more things in addition to appropriate learning rules (and sufficiently powerful hardware): it requires having an adequate initial architecture and providing a rich flux of sensory input.

The latter prerequisite is easily provided even with present technology. Using video cameras, microphones and tactile sensors, it is possible to ensure a steady flow of real-world information to the artificial neural network. An interactive element could be arranged by connecting the system to robot limbs and a speaker.

Developing an adequate initial network structure is a more serious problem. It might turn out to be necessary to do a considerable amount of hand-coding in order to get the cortical architecture right. In biological organisms, the brain does not start out at birth as a homogenous tabula rasa; it has an initial structure that is coded genetically. Neuroscience cannot, at its present stage, say exactly what this structure is or how much of it needs to be preserved in a simulation that is eventually to match the cognitive competencies of a human adult. One way for it to be unexpectedly difficult to achieve human-level AI through the neural network approach would be if it turned out that the human brain relies on a colossal amount of genetic hardwiring, so that each cognitive function depends on a unique and hopelessly complicated inborn architecture, acquired over aeons in the evolutionary learning process of our species.

Is this the case? A number of considerations suggest otherwise. We will have to content ourselves with a very brief review here. For a more comprehensive discussion, the reader may consult Phillips & Singer (1997).

Quartz & Sejnowski (1997) argue from recent neurobiological data that the developing human cortex is largely free of domain-specific structures. The representational properties of the specialized circuits that we find in the mature cortex are not generally genetically prespecified. Rather, they are developed through interaction with the problem domains on which the circuits operate. There are genetically coded tendencies for certain brain areas to specialize in certain tasks (for example, primary visual processing is usually performed in the primary visual cortex) but this does not mean that other cortical areas couldn't have learnt to perform the same function. In fact, the human neocortex seems to start out as a fairly flexible and general-purpose mechanism; specific modules arise later through self-organization and interaction with the environment.

Strongly supporting this view is the fact that cortical lesions, even sizeable ones, can often be compensated for if they occur at an early age. Other cortical areas take over the functions that would normally have been developed in the destroyed region. In one study, sensitivity to visual features was developed in the auditory cortex of neonatal ferrets, after that region's normal auditory input channel had been replaced by visual projections (Sur et al. 1988). Similarly, it has been shown that the visual cortex can take over functions normally performed by the somatosensory cortex (Schlaggar & O'Leary 1991). A recent experiment (Cohen et al. 1997) showed that people who have been blind from an early age can use their visual cortex to process tactile stimulation when reading Braille.

There are some more primitive regions of the brain whose functions cannot be taken over by any other area. For example, people who have their hippocampus removed lose their ability to learn new episodic or semantic facts. But the neocortex tends to be highly plastic, and that is where most of the high-level processing is executed that makes us intellectually superior to other animals. (It would be interesting to examine in more detail to what extent this holds true for all of neocortex. Are there small neocortical regions such that, if excised at birth, the subject will never obtain certain high-level competencies, not even to a limited degree?)

Another consideration that seems to indicate that innate architectural differentiation plays a relatively small part in accounting for the performance of the mature brain is that neocortical architecture, especially in infants, is remarkably homogeneous over different cortical regions and even over different species:

Laminations and vertical connections between lamina are hallmarks of all cortical systems, the morphological and physiological characteristics of cortical neurons are equivalent in different species, as are the kinds of synaptic interactions involving cortical neurons. This similarity in the organization of the cerebral cortex extends even to the specific details of cortical circuitry. (White 1989, p. 179).

One might object at this point that cetaceans have much bigger cortices than humans and yet they don't have human-level abstract understanding and language. A large cortex, apparently, is not sufficient for human intelligence. However, one can easily imagine that some very simple difference between human and cetacean brains accounts for why we have abstract language and understanding that they lack. It could be something as trivial as that our cortex is provided with a low-level "drive" to learn about abstract relationships whereas dolphins and whales are programmed not to care about or pay much attention to such things (which might be totally irrelevant to them in their natural environment). More likely, there are some structural developments in the human cortex that other animals lack and that are necessary for advanced abstract thinking. But these uniquely human developments may well be the result of relatively simple changes in just a few basic parameters. They do not require a large amount of genetic hardwiring. Indeed, given that the brain evolution that allowed Homo sapiens to outclass other animals intellectually took place over a relatively brief period of time, evolution cannot have embedded very much content-specific information in the additional cortical structures that give us our intellectual edge over our humanoid or ape-like ancestors.

These considerations (especially the one of cortical plasticity) suggest that the amount of neuroscientific information needed for the bottom-up approach to succeed may be very limited. (Notice that they do not argue against the modularization of adult human brains. They only indicate that the greatest part of the information that goes into the modularization results from self-organization and perceptual input rather than from an immensely complicated genetic look-up table.)

Further advances in neuroscience are probably needed before we can construct a human-level (or even higher animal-level) artificial intelligence by means of this radically bottom-up approach. While it is true that neuroscience has advanced very rapidly in recent years, it is difficult to estimate how long it will take before enough is known about the brain's neuronal architecture and its learning algorithms to make it possible to replicate these in a computer of sufficient computational power. A wild guess: something like fifteen years. This is not a prediction about how far we are from a complete understanding of all important phenomena in the brain. The estimate refers to the time when we might be expected to know enough about the basic principles of how the brain works to be able to implement these computational paradigms on a computer, without necessarily modelling the brain in any biologically realistic way.

The estimate might seem to some to underestimate the difficulties, and perhaps it does. But consider how much has happened in the past fifteen years. The discipline of computational neuroscience hardly even existed back in 1982. And future progress will occur not only because research with today's instrumentation will continue to produce illuminating findings, but also because new experimental tools and techniques will become available. Large-scale multi-electrode recordings should be feasible within the near future. Neuro/chip interfaces are in development. More powerful hardware is being made available to neuroscientists for computation-intensive simulations. Neuropharmacologists design drugs with higher specificity, allowing researchers to selectively target given receptor subtypes. Present scanning techniques are being improved and new ones are under development. The list could be continued. All these innovations will give neuroscientists very powerful new tools that will facilitate their research.

This section has discussed the software problem. It was argued that it can be solved through a bottom-up approach by using present equipment to supply the input and output channels, and by continuing to study the human brain in order to find out what learning algorithms it uses and what the initial neuronal structure in new-born infants is. Considering how large strides computational neuroscience has taken in the last decade, and the new experimental instrumentation that is under development, it seems reasonable to suppose that the required neuroscientific knowledge might be obtained in perhaps fifteen years from now, i.e. by the year 2012.

Notes

That dolphins don't have abstract language was recently established in a very elegant experiment. A pool is divided into two halves by a net. Dolphin A is released into one end of the pool where there is a mechanism. After a while, the dolphin figures out how to operate the mechanism which causes dead fish to be released into both ends of the pool. Then A is transferred to the other end of the pool and a dolphin B is released into the end of the pool that has the mechanism. The idea is that if the dolphins had a language, then A would tell B to operate the mechanism. However, it was found that the average time for B to operate the mechanism was the same as for A.

Why the past failure of AI is no argument against its future success

In the seventies and eighties the AI field suffered some stagnation as the exaggerated expectations from the early heydays failed to materialize and progress nearly ground to a halt. The lesson to draw from this episode is not that strong AI is dead and that superintelligent machines will never be built. It shows that AI is more difficult than some of the early pioneers might have thought, but it goes no way towards showing that AI will forever remain unfeasible.

In retrospect we know that the AI project couldn't possibly have succeeded at that stage. The hardware was simply not powerful enough. It seems that at least about 100 Tops is required for human-like performance, and possibly as much as 10^17 ops is needed. The computers in the seventies had a computing power comparable to that of insects. They also achieved approximately insect-level intelligence. Now, on the other hand, we can foresee the arrival of human-equivalent hardware, so the cause of AI's past failure will then no longer be present.

There is also an explanation for the relative absence even of noticeable progress during this period. As Hans Moravec points out:

[F]or several decades the computing power found in advanced Artificial Intelligence and Robotics systems has been stuck at insect brain power of 1 MIPS. While computer power per dollar fell [should be: rose] rapidly during this period, the money available fell just as fast. The earliest days of AI, in the mid 1960s, were fuelled by lavish post-Sputnik defence funding, which gave access to $10,000,000 supercomputers of the time. In the post Vietnam war days of the 1970s, funding declined and only $1,000,000 machines were available. By the early 1980s, AI research had to settle for $100,000 minicomputers. In the late 1980s, the available machines were $10,000 workstations. By the 1990s, much work was done on personal computers costing only a few thousand dollars. Since then AI and robot brain power has risen with improvements in computer efficiency. By 1993 personal computers provided 10 MIPS, by 1995 it was 30 MIPS, and in 1997 it is over 100 MIPS. Suddenly machines are reading text, recognizing speech, and robots are driving themselves cross country. (Moravec 1997)

In general, there seems to be a new-found sense of optimism and excitement among people working in AI, especially among those taking a bottom-up approach, such as researchers in genetic algorithms, neuromorphic engineering and neural network hardware implementations. Many veteran experts, though, are wary of once again underestimating the difficulties ahead.

Once there is human-level AI there will soon be superintelligence

Once artificial intelligence reaches human level, there will be a positive feedback loop that gives the development a further boost. AIs would help construct better AIs, which in turn would help build better AIs, and so forth.

Even if no further software development took place and the AIs did not accumulate new skills through self-learning, the AIs would still get smarter if processor speed continued to increase. If after 18 months the hardware were upgraded to double the speed, we would have an AI that could think twice as fast as its original implementation. After a few more doublings this would directly lead to what has been called "weak superintelligence", i.e. an intellect that has about the same abilities as a human brain but is much faster.
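The arithmetic behind this "weak superintelligence" scenario is simple compound doubling. The 18-month doubling time is the one assumed in the text; the function and horizon below are just an illustration.

```python
# Speed multiplier from hardware upgrades alone, assuming processor speed
# doubles every 18 months (1.5 years), as in the text.
def speedup(years, doubling_time_years=1.5):
    """Speed of an AI relative to its original implementation."""
    return 2 ** (years / doubling_time_years)

print(speedup(1.5))   # 2.0    -- twice as fast after one doubling
print(speedup(15))    # 1024.0 -- roughly a thousand-fold after fifteen years
```

On these assumptions, a human-level AI would be thinking about a thousand times faster than a human after fifteen years of routine hardware progress, with no algorithmic improvement at all.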

The marginal utility of improvements in AI would also seem to skyrocket as AI approaches human level, causing funding to increase. We can therefore predict that once there is human-level artificial intelligence, it will not be long before superintelligence is technologically feasible.

A further point can be made in support of this prediction. In contrast to what's possible for biological intellects, it might be possible to copy skills or cognitive modules from one artificial intellect to another. If one AI has achieved eminence in some field, then subsequent AIs can upload the pioneer's program or synaptic weight-matrix and immediately achieve the same level of performance. It would not be necessary to again go through the training process. Whether it will also be possible to copy the best parts of several AIs and combine them into one will depend on details of implementation and the degree to which the AIs are modularized in a standardized fashion. But as a general rule, the intellectual achievements of artificial intellects are additive in a way that human achievements are not, or only to a much lesser degree.

The demand for superintelligence

Given that superintelligence will one day be technologically feasible, will people choose to develop it? This question can pretty confidently be answered in the affirmative. Associated with every step along the road to superintelligence are enormous economic payoffs. The computer industry invests huge sums in the next generation of hardware and software, and it will continue doing so as long as there is competitive pressure and profits to be made. People want better computers and smarter software, and they want the benefits these machines can help produce. Better medical drugs; relief for humans from the need to perform boring or dangerous jobs; entertainment -- there is no end to the list of consumer benefits. There is also a strong military motive to develop artificial intelligence. And nowhere on the path is there any natural stopping point where technophobes could plausibly argue "hither but not further".

It therefore seems that, up to human-equivalence, the driving forces behind improvements in AI will easily overpower whatever resistance might be present. When the question is about human-level or greater intelligence, it is conceivable that there might be strong political forces opposing further development. Superintelligence might be seen to pose a threat to the supremacy, and even to the survival, of the human species. Whether by suitable programming we can arrange the motivation systems of the superintelligences in such a way as to guarantee perpetual obedience and subservience, or at least non-harmfulness, to humans is a contentious topic. If future policy-makers can be sure that AIs would not endanger human interests then the development of artificial intelligence will continue. If they can't be sure that there would be no danger, then the development might well continue anyway, either because people don't regard the gradual displacement of biological humans by machines as necessarily a bad outcome, or because such strong forces (motivated by short-term profit, curiosity, ideology, or desire for the capabilities that superintelligences might bring to their creators) are active that a collective decision to ban new research in this field cannot be reached and successfully implemented.

Conclusion

Depending on degree of optimization assumed, human-level intelligence probably requires between 10^14 and 10^17 ops. It seems quite possible that very advanced optimization could reduce this figure further, but the entrance level would probably not be less than about 10^14 ops. If Moore's law continues to hold then the lower bound will be reached sometime between 2004 and 2008, and the upper bound between 2015 and 2024. The past success of Moore's law gives some inductive reason to believe that it will hold another ten, fifteen years or so; and this prediction is supported by the fact that there are many promising new technologies currently under development which hold great potential to increase procurable computing power. There is no direct reason to suppose that Moore's law will not hold longer than 15 years. It thus seems likely that the requisite hardware for human-level artificial intelligence will be assembled in the first quarter of the next century, possibly within the first few years.
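As a consistency check on these dates: at the assumed doubling time of 18 months, the gap between reaching the 10^14 ops lower bound and the 10^17 ops upper bound should be about fifteen years, which indeed matches the spread between the 2004-2008 and 2015-2024 ranges given above.

```python
import math

# How long after 10^14 ops does Moore's law deliver 10^17 ops,
# assuming the text's 18-month (1.5-year) doubling time?
doubling_time_years = 1.5
doublings = math.log2(1e17 / 1e14)           # log2(1000), about 10 doublings
gap_years = doublings * doubling_time_years  # about 15 years

print(round(gap_years, 1))                   # ~15
```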

There are several approaches to developing the software. One is to emulate the basic principles of biological brains. It is not implausible to suppose that these principles will be well enough known within 15 years for this approach to succeed, given adequate hardware.

The stagnation of AI during the seventies and eighties does not have much bearing on the likelihood of AI to succeed in the future since we know that the cause responsible for the stagnation (namely, that the hardware available to AI researchers was stuck at about 10^6 ops) is no longer present.

There will be a strong and increasing pressure to improve AI up to human-level. If there is a way of guaranteeing that superior artificial intellects will never harm human beings then such intellects will be created. If there is no way to have such a guarantee then they will probably be created nevertheless.


The U.S. Department of Energy has ordered a new supercomputer from IBM, to be installed in the Lawrence Livermore National Laboratory in the year 2000. It will cost $85 million and will perform 10 Tops. This development is in accordance with Moore's law, or possibly slightly more rapid than an extrapolation would have predicted.

Many steps forward have been taken during the past year. An especially nifty one is the new chip-making technique being developed at Irvine Sensors Corporation (ISC). They have found a way to stack chips directly on top of each other in a way that will not only save space but, more importantly, allow a larger number of interconnections between neighboring chips. Since the number of interconnections has been a bottleneck in neural network hardware implementations, this breakthrough could prove very important. In principle, it should allow an arbitrarily large cube of neural network modules with high local connectivity and moderate non-local connectivity.

Is progress still on schedule? In fact, things seem to be moving somewhat faster than expected, at least on the hardware front. (Software progress is more difficult to quantify.) IBM is currently working on a next-generation supercomputer, Blue Gene, which will perform over 10^15 ops. This computer, which is designed to tackle the protein folding problem, is expected to be ready around 2005. It will achieve its enormous power through massive parallelism rather than through dramatically faster processors. Considering the increasing emphasis on parallel computing, and the steadily increasing Internet bandwidth, it becomes important to interpret Moore's law as a statement about how much computing power can be bought for a given sum of (inflation-adjusted) money. This measure has historically been growing at the same pace as processor speed or chip density, but the measures may come apart in the future. It is how much computing power can be bought for, say, 100 million dollars that is relevant when we are trying to guess when superintelligence will be developed, rather than how fast individual processors are.

The fastest supercomputer today is IBM's Blue Gene/L, which has attained 260 Tops (2.6*10^14 ops). The Moravec estimate of the human brain's processing power (10^14 ops) has thus now been exceeded.

The 'Blue Brain' project was launched by the Brain Mind Institute, EPFL, Switzerland and IBM, USA in May 2005. It aims to build an accurate software replica of the neocortical column within 2-3 years. The column will consist of 10,000 morphologically complex neurons with active ionic channels. The neurons will be interconnected in a 3-dimensional space with 10^7-10^8 dynamic synapses. This project will thus use a level of simulation that attempts to capture the functionality of individual neurons at a very detailed level. The simulation is intended to run in real time on a computer performing 22.8*10^12 flops. Simulating the entire brain in real time at this level of detail (which the researchers indicate as a goal for later stages of the project) would correspond to circa 2*10^19 ops, five orders of magnitude above the current supercomputer record. This is two orders of magnitude greater than the estimate of neural-level simulation given in the original paper above, which assumes a cruder level of simulation of neurons. If the 'Blue Brain' project succeeds, it will give us hard evidence of an upper bound on the computing power needed to achieve human intelligence.
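The circa 2*10^19 figure follows from scaling the column simulation up to a whole brain. The whole-brain neuron count used below (10^10, i.e. about a million columns) is an illustrative assumption consistent with the text's figure; published estimates range up to 10^11 neurons.

```python
# Scaling the Blue Brain column simulation to a whole brain.
FLOPS_PER_COLUMN = 22.8e12    # real-time simulation of one 10,000-neuron column
NEURONS_PER_COLUMN = 1e4
BRAIN_NEURONS = 1e10          # assumed whole-brain count (illustrative)

columns = BRAIN_NEURONS / NEURONS_PER_COLUMN      # about a million columns
whole_brain_flops = columns * FLOPS_PER_COLUMN

print(f"{whole_brain_flops:.1e}")                 # ~2.3e+19, circa 2*10^19 ops
```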

Functional replication of early auditory processing (which is quite well understood) has yielded an estimate that agrees with Moravec's assessment based on signal processing in the retina (i.e. 10^14 ops for whole-brain equivalent replication).

No dramatic breakthrough in general artificial intelligence seems to have occurred in recent years. Neuroscience and neuromorphic engineering are proceeding at a rapid clip, however. Much of the paper could now be rewritten and updated to take into account information that has become available in the past 8 years.

Molecular nanotechnology, a technology that in its mature form could enable mind uploading (an extreme version of the bottom-up method, in which a detailed 3-dimensional map is constructed of a particular human brain and then emulated in a computer), has begun to pick up steam, receiving increasing funding and attention. An upload running on a fast computer would be weakly superintelligent -- it would initially be functionally identical to the original organic brain, but it could run at a much higher speed. Once such an upload existed, it might be possible to enhance its architecture to create strong superintelligence that was not only faster but functionally superior to human intelligence.


How Long Before Superintelligence? - Nick Bostrom


Ethical Issues In Advanced Artificial Intelligence

Posted: at 4:58 am

The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive pursuit, a superintelligence could also easily surpass humans in the quality of its moral thinking. However, it would be up to the designers of the superintelligence to specify its original motivations. Since the superintelligence may become unstoppably powerful because of its intellectual superiority and the technologies it could develop, it is crucial that it be provided with human-friendly motivations. This paper surveys some of the unique ethical issues in creating superintelligence, discusses what motivations we ought to give a superintelligence, and introduces some cost-benefit considerations relating to whether the development of superintelligent machines ought to be accelerated or retarded.

KEYWORDS: Artificial intelligence, ethics, uploading, superintelligence, global security, cost-benefit analysis

1. INTRODUCTION

A superintelligence is any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.[1] This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue, or something else.

On this definition, Deep Blue is not a superintelligence, since it is only smart within one narrow domain (chess), and even there it is not vastly superior to the best humans. Entities such as corporations or the scientific community are not superintelligences either. Although they can perform a number of intellectual feats of which no individual human is capable, they are not sufficiently integrated to count as intellects, and there are many fields in which they perform much worse than single humans. For example, you cannot have a real-time conversation with the scientific community.

While the possibility of domain-specific superintelligences is also worth exploring, this paper focuses on issues arising from the prospect of general superintelligence. Space constraints prevent us from attempting anything comprehensive or detailed. A cartoonish sketch of a few selected ideas is the most we can aim for in the following few pages.

Several authors have argued that there is a substantial chance that superintelligence may be created within a few decades, perhaps as a result of growing hardware performance and increased ability to implement algorithms and architectures similar to those used by human brains.[2] It might turn out to take much longer, but there seems currently to be no good ground for assigning a negligible probability to the hypothesis that superintelligence will be created within the lifespan of some people alive today. Given the enormity of the consequences of superintelligence, it would make sense to give this prospect some serious consideration even if one thought that there were only a small probability of it happening any time soon.

2. SUPERINTELLIGENCE IS DIFFERENT

A prerequisite for having a meaningful discussion of superintelligence is the realization that superintelligence is not just another technology, another tool that will add incrementally to human capabilities. Superintelligence is radically different. This point bears emphasizing, for anthropomorphizing superintelligence is a most fecund source of misconceptions.

Let us consider some of the unusual aspects of the creation of superintelligence:

Superintelligence may be the last invention humans ever need to make.

Given a superintelligence's intellectual superiority, it would be much better at doing scientific research and technological development than any human, and possibly better even than all humans taken together. One immediate consequence of this fact is that:

Technological progress in all other fields will be accelerated by the arrival of advanced artificial intelligence.

It is likely that any technology that we can currently foresee will be speedily developed by the first superintelligence, no doubt along with many other technologies of which we are as yet clueless. The foreseeable technologies that a superintelligence is likely to develop include mature molecular manufacturing, whose applications are wide-ranging:[3]

a) very powerful computers

b) advanced weaponry, probably capable of safely disarming a nuclear power

c) space travel and von Neumann probes (self-reproducing interstellar probes)

d) elimination of aging and disease

e) fine-grained control of human mood, emotion, and motivation

f) uploading (neural or sub-neural scanning of a particular brain and implementation of the same algorithmic structures on a computer in a way that preserves memory and personality)

g) reanimation of cryonics patients

h) fully realistic virtual reality

Superintelligence will lead to more advanced superintelligence.

This results both from the improved hardware that a superintelligence could create, and also from improvements it could make to its own source code.
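The compounding dynamic described above can be illustrated with a toy model. This is purely a sketch with assumed numbers (the starting level, growth rate, and step count are my own illustrative parameters, not claims from the paper): each generation's capacity for self-improvement scales with the intelligence it already has, so absolute gains grow over time.

```python
# Toy model (illustrative only): recursive self-improvement in which each
# generation's intelligence determines how large the next improvement is.
# All parameter values are assumptions chosen for illustration.

def self_improvement_trajectory(start=1.0, gain=0.1, steps=10):
    """Return intelligence levels when each improvement step is
    proportional to the improver's current intelligence."""
    levels = [start]
    for _ in range(steps):
        current = levels[-1]
        # The improvement an agent can make scales with its own ability,
        # so growth compounds rather than adding a fixed increment.
        levels.append(current + gain * current)
    return levels

trajectory = self_improvement_trajectory()
# Each level is 1.1x the previous one: geometric rather than linear growth.
```

The point of the sketch is only structural: because the improver itself improves, the trajectory is geometric, which is one reason the later stages of such a process could be much faster than the earlier ones.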

Artificial minds can be easily copied.

Since artificial intelligences are software, they can easily and quickly be copied, so long as there is hardware available to store them. The same holds for human uploads. Hardware aside, the marginal cost of creating an additional copy of an upload or an artificial intelligence after the first one has been built is near zero. Artificial minds could therefore quickly come to exist in great numbers, although it is possible that efficiency would favor concentrating computational resources in a single super-intellect.

Emergence of superintelligence may be sudden.

It appears much harder to get from where we are now to human-level artificial intelligence than to get from there to superintelligence. While it may thus take quite a while before we get superintelligence, the final stage may happen swiftly. That is, the transition from a state where we have a roughly human-level artificial intelligence to a state where we have full-blown superintelligence, with revolutionary applications, may be very rapid, perhaps a matter of days rather than years. This possibility of a sudden emergence of superintelligence is referred to as the singularity hypothesis.[4]

Artificial intellects are potentially autonomous agents.

A superintelligence should not necessarily be conceptualized as a mere tool. While specialized superintelligences that can think only about a restricted set of problems may be feasible, general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent.

Artificial intellects need not have humanlike motives.

Humans are rarely willing slaves, but there is nothing implausible about the idea of a superintelligence having as its supergoal to serve humanity or some particular human, with no desire whatsoever to revolt or to liberate itself. It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies.

Artificial intellects may not have humanlike psyches.

The cognitive architecture of an artificial intellect may also be quite unlike that of humans. Artificial intellects may find it easy to guard against some kinds of human error and bias, while at the same time being at increased risk of other kinds of mistake that not even the most hapless human would make. Subjectively, the inner conscious life of an artificial intellect, if it has one, may also be quite different from ours.

For all of these reasons, one should be wary of assuming that the emergence of superintelligence can be predicted by extrapolating the history of other technological breakthroughs, or that the nature and behaviors of artificial intellects would necessarily resemble those of human or other animal minds.

3. SUPERINTELLIGENT MORAL THINKING

To the extent that ethics is a cognitive pursuit, a superintelligence could do it better than human thinkers. This means that questions about ethics, in so far as they have correct answers that can be arrived at by reasoning and weighing up of evidence, could be more accurately answered by a superintelligence than by humans. The same holds for questions of policy and long-term planning; when it comes to understanding which policies would lead to which results, and which means would be most effective in attaining given aims, a superintelligence would outperform humans.

There are therefore many questions that we would not need to answer ourselves if we had or were about to get superintelligence; we could delegate many investigations and decisions to the superintelligence. For example, if we are uncertain how to evaluate possible outcomes, we could ask the superintelligence to estimate how we would have evaluated these outcomes if we had thought about them for a very long time, deliberated carefully, had had more memory and better intelligence, and so forth. When formulating a goal for the superintelligence, it would not always be necessary to give a detailed, explicit definition of this goal. We could enlist the superintelligence to help us determine the real intention of our request, thus decreasing the risk that infelicitous wording or confusion about what we want to achieve would lead to outcomes that we would disapprove of in retrospect.

4. IMPORTANCE OF INITIAL MOTIVATIONS

The option to defer many decisions to the superintelligence does not mean that we can afford to be complacent in how we construct the superintelligence. On the contrary, the setting up of initial conditions, and in particular the selection of a top-level goal for the superintelligence, is of the utmost importance. Our entire future may hinge on how we solve these problems.

Both because of its superior planning ability and because of the technologies it could develop, it is plausible to suppose that the first superintelligence would be very powerful. Quite possibly, it would be unrivalled: it would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference. Even a fettered superintelligence that was running on an isolated computer, able to interact with the rest of the world only via text interface, might be able to break out of its confinement by persuading its handlers to release it. There is even some preliminary experimental evidence that this would be the case.[5]

It seems that the best way to ensure that a superintelligence will have a beneficial impact on the world is to endow it with philanthropic values. Its top goal should be friendliness.[6] How exactly friendliness should be understood, how it should be implemented, and how the amity should be apportioned between different people and nonhuman creatures, is a matter that merits further consideration. I would argue that at least all humans, and probably many other sentient creatures on earth, should get a significant share in the superintelligence's beneficence. If the benefits that the superintelligence could bestow are enormously vast, then it may be less important to haggle over the detailed distribution pattern and more important to seek to ensure that everybody gets at least some significant share, since on this supposition, even a tiny share would be enough to guarantee a very long and very good life. One risk that must be guarded against is that those who develop the superintelligence would not make it generically philanthropic but would instead give it the more limited goal of serving only some small group, such as its own creators or those who commissioned it.

If a superintelligence starts out with a friendly top goal, however, then it can be relied on to stay friendly, or at least not to deliberately rid itself of its friendliness. This point is elementary. A friend who seeks to transform himself into somebody who wants to hurt you is not your friend. A true friend, one who really cares about you, also seeks the continuation of his caring for you. Or to put it in a different way, if your top goal is X, and if you think that by changing yourself into someone who instead wants Y you would make it less likely that X will be achieved, then you will not rationally transform yourself into someone who wants Y. The set of options at each point in time is evaluated on the basis of their consequences for realization of the goals held at that time, and generally it will be irrational to deliberately change one's own top goal, since that would make it less likely that the current goals will be attained.

In humans, with our complicated evolved mental ecology of state-dependent competing drives, desires, plans, and ideals, there is often no obvious way to identify what our top goal is; we might not even have one. So for us, the above reasoning need not apply. But a superintelligence may be structured differently. If a superintelligence has a definite, declarative goal-structure with a clearly identified top goal, then the above argument applies. And this is a good reason for us to build the superintelligence with such an explicit motivational architecture.
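The goal-stability argument above can be sketched as a minimal decision procedure. This is my own illustrative construction, not an implementation from the paper: the option names and the probability estimates are assumptions chosen to mirror the reasoning in the text, namely that an agent rates every option, including rewriting its own top goal, by how well that option serves the goal it holds now.

```python
# Illustrative sketch (assumed names and numbers): a goal-directed agent
# evaluates all options, including self-modification, by their expected
# consequences for its *current* top goal X.

def choose(options, evaluate):
    """Pick the option whose consequences best serve the current top goal."""
    return max(options, key=evaluate)

# Hypothetical estimates of the probability that goal X is ultimately
# achieved under each option, as judged by the agent's current goal system.
prob_x_achieved = {
    "keep goal X and pursue it": 0.9,
    "self-modify to pursue goal Y": 0.1,  # an agent wanting Y would neglect X
}

best = choose(prob_x_achieved, prob_x_achieved.get)
# The agent rationally declines to replace its own top goal, because doing
# so would make the achievement of X less likely.
```

The sketch assumes what the paper's argument assumes: a definite, declarative goal structure with a single identifiable top goal. As the following paragraph notes, the argument need not apply to minds, like human ones, that lack such a structure.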

5. SHOULD DEVELOPMENT BE DELAYED OR ACCELERATED?

It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine[7], or by offering us the option to upload ourselves. A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and living closer to our ideals.

The risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy. One way in which this could happen is that the creators of the superintelligence decide to build it so that it serves only a select group of humans, rather than humanity in general. Another way for it to happen is that a well-meaning team of programmers make a big mistake in designing its goal system. This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities. More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence, because we might get it.

One consideration that should be taken into account when deciding whether to promote the development of superintelligence is that if superintelligence is feasible, it will likely be developed sooner or later. Therefore, we will probably one day have to take the gamble of superintelligence no matter what. But once in existence, a superintelligence could help us reduce or eliminate other existential risks[8], such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism, a serious threat to the long-term survival of intelligent life on earth. If we get to superintelligence first, we may avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if these risks are survived, also the risks from superintelligence. The overall risk seems to be minimized by implementing superintelligence, with great care, as soon as possible.
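The ordering argument above can be made explicit with a back-of-the-envelope calculation. The survival probabilities below are hypothetical numbers of my own (the paper gives none); they serve only to show the structure of the reasoning: superintelligence first means facing one unaided transition, while nanotechnology first means facing two.

```python
# Back-of-the-envelope sketch of the development-order argument.
# All probabilities are assumed values, chosen purely for illustration.

p_survive_si_transition = 0.8   # assumed: surviving the superintelligence transition
p_survive_nano_unaided = 0.7    # assumed: surviving nanotechnology without SI help
p_survive_nano_with_si = 0.99   # assumed: SI substantially mitigates nanotech risks

# Superintelligence first: face the SI transition, then nanotech with SI help.
p_si_first = p_survive_si_transition * p_survive_nano_with_si

# Nanotechnology first: face nanotech unaided, then the SI transition anyway.
p_nano_first = p_survive_nano_unaided * p_survive_si_transition

# Under these assumptions, the superintelligence-first path is safer, because
# the SI transition must be survived on either path, while the nanotech risk
# is faced unaided only on the nanotech-first path.
```

The conclusion tracks the text's argument rather than the particular numbers: so long as a friendly superintelligence would meaningfully reduce the unaided nanotechnology risk, sequencing superintelligence first removes one of the two gambles.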

REFERENCES

Bostrom, N. (1998). "How Long Before Superintelligence?" International Journal of Futures Studies, 2. http://www.nickbostrom.com/superintelligence.html

Bostrom, N. (2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards." Journal of Evolution and Technology, 9. http://www.nickbostrom.com/existential/risks.html

Drexler, K. E. Engines of Creation: The Coming Era of Nanotechnology. (Anchor Books: New York, 1986). http://www.foresight.org/EOC/index.html

Freitas Jr., R. A. Nanomedicine, Volume 1: Basic Capabilities. (Landes Bioscience: Georgetown, TX, 1999). http://www.nanomedicine.com

Hanson, R., et al. (1998). "A Critical Discussion of Vinge's Singularity Concept." Extropy Online. http://www.extropy.org/eo/articles/vi.html

Kurzweil, R. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. (Viking: New York, 1999).

Moravec, H. Robot: Mere Machine to Transcendent Mind. (Oxford University Press: New York, 1999).

Vinge, V. (1993). "The Coming Technological Singularity." Whole Earth Review, Winter issue.

Yudkowsky, E. (2002). "The AI Box Experiment." Webpage. http://sysopmind.com/essays/aibox.html

Yudkowsky, E. (2003). Creating Friendly AI 1.0. http://www.singinst.org/CFAI/index.html

Ethical Issues In Advanced Artificial Intelligence