The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Daily Archives: August 30, 2016
DoD Releases 2015 Fiscal Year Freedom of Navigation Report
Posted: August 30, 2016 at 11:10 pm
IMMEDIATE RELEASE
Press Operations
Today, the Department of Defense (DoD) released its 2015 fiscal year Freedom of Navigation (FON) Report, which provides a summary of excessive maritime claims that were challenged by U.S. forces during the period of Oct. 1, 2014, through Sept. 30, 2015. The report summarizes challenges to excessive maritime claims asserted by 13 claimants throughout the world.
The DoD FON program is comprehensive in scope, encompassing all of the rights, freedoms, and lawful uses of the sea and airspace available to all nations under international law. The program is implemented actively against excessive maritime claims by claimants in every region of the world in order to support DoD's global interest in mobility and access.
Each year, DoD compiles an unclassified FON Report providing summaries of the FON operations and other FON-related activities conducted by U.S. forces. The summarized reports transparently demonstrate U.S. non-acquiescence to excessive maritime claims, while still protecting the operational security of U.S. military forces.
The 2015 DoD FON Report is available at http://policy.defense.gov/OUSDPOffices/FON.aspx.
Posted in Fiscal Freedom
Comments Off on DoD Releases 2015 Fiscal Year Freedom of Navigation Report
Oceania Cruises Riviera Cruise Ship | Riviera Deck Plans …
Posted: at 11:08 pm
The Epitome of Refined Elegance
Stunning Riviera was designed to be distinctive and special in so many ways. Featuring the magnificent Lalique Grand Staircase, stunning Owner's Suites furnished in Ralph Lauren Home, and designer touches throughout the entire ship, Riviera showcases rich residential design and furnishings. Riviera's refined ambiance truly embodies the unparalleled Oceania Cruises experience.
Ideally proportioned, Riviera still embraces the same warmth and charm of renowned Regatta, Insignia, Nautica and Sirena. While the impeccable level of personalized service and the country club casual ambiance remain the same, Riviera offers even more choices, as well as generous new amenities. Designed with the ultimate epicurean and travel connoisseur in mind, Riviera offers guests multiple dining venues, of which six are open-seating gourmet restaurants with no surcharge. La Reserve by Wine Spectator offers enlightening seminars, tastings, and gourmet food pairings. Riviera also features The Culinary Center, the only hands-on cooking school at sea, offering a range of cooking classes by master chefs. In the Artist Loft, talented artists-in-residence offer step-by-step instruction in everything from photography to painting to printmaking. Baristas, our signature coffee bar, serves up illy espresso, coffee and fresh pastries made daily. Intimate spaces throughout the ship provide relaxing escapes. Spacious accommodations in every category showcase luxurious designer touches and lavish bathrooms.
Notably, the onboard experience continues to exude that comfortable familiarity guests have come to cherish. We have retained everything guests appreciate about our ships and continue to aim even higher. We look forward to welcoming you aboard.
Posted in Oceania
Comments Off on Oceania Cruises Riviera Cruise Ship | Riviera Deck Plans …
Oceania (song) – Wikipedia, the free encyclopedia
Posted: at 11:08 pm
"Oceania" is a song recorded by Icelandic singer Bjrk for her sixth studio album Medlla. It was written and produced by Bjrk, with additional writing by Sjn and production by Mark Bell. The song was written by the singer specially for the 2004 Summer Olympics Opening Ceremony, after a request by the International Olympic Committee. "Oceania" was released as a promotional single in 2004, by One Little Indian Records. The song was written at the ocean's point of view, from which the singer believes all life emerged, and details the human's evolution, whilst accompanied by a choir. "Oceania" was generally well received by music critics, who believed it was the best track from Medlla, although some thought it was not the best choice for a promotional release.
The accompanying music video for the song, directed by Lynn Fox, features Björk as "Mother Oceania", jewel-encrusted in dark watery depths, with a colourful sunset and swirling floral creatures above her. A remix of the song, featuring additional lyrics and vocals by Kelis from the point of view of the continents, was featured as a B-side to the "Who Is It" single. A piano version also appeared on the DVD single, and was assisted in its creation by Nico Muhly. The song was premiered during Björk's performance at the Summer Olympics ceremony, and was later included on the setlist of the Volta Tour (2007–08). At the 47th Grammy Awards in 2005, it was nominated in the category of Best Female Pop Vocal Performance. "Oceania" has been covered six times and sampled once.
The International Olympic Committee commissioned a song by Björk specially for the 2004 Summer Olympics opening ceremony. The singer revealed that the committee asked her to do a kind of "Ebony and Ivory" or "We Are the World" type of song, which are "smashing tunes" according to her, but she thought, "'Maybe there's another angle to this'. When I tried to write an Olympic lyric, though, it was full of sports socks and ribbons. I ended up pissing myself laughing". Then, she called Sjón, an Icelandic poet who had previously collaborated with her on songs such as "Bachelorette" from her fourth studio album Homogenic (1997). When she said to him that they would need something "suitably epic" for the Olympics, the poet even took a short course on Greek mythology at Reykjavík University. "Oceania" was the last song recorded for Medúlla.[1] Björk said about the song: "I am incredibly honoured to have been asked to write a song and sing it at the Olympics. The song is written from the point of view of the ocean that surrounds all the land and watches over the humans to see how they are doing after millions of years of evolution. It sees no borders, different races or religion which has always been at the core of these [games]".[2]
During an interview with British radio station XFM, Björk explained its recording process, saying work on "Oceania" kept being delayed because she wanted to do it especially for the Olympics. During the last day of mixing, she thought she needed "sirenes", like in Greek mythology. She called up an English choir to record these sounds. The singer had done an arrangement for piano on the computer that was impossible for a piano to play, and she got them to sing it. Then, she also called up beatboxer Shlomo, who was recommended to her as "the new bright hope of the hip hop scene". He went to record the next day and Björk asked him to do a techno tango beat, which he did. Recalling her work on the song until her last day of mixing, she commented, "That was the most fun part, in the end. Sometimes it's good for you to work with a gun against your head and just go for it, because you can sometimes sit too long with ideas. Sometimes adrenaline is a good thing."[3]
The song was written from the ocean's point of view, detailing human evolution.[4] According to Jason Killingsworth from Paste magazine, it calls listeners' attention to "Mother Oceania", from which the singer believes all life emerged, whilst she sings: "You have done well for yourselves / Since you left my wet embrace / And crawled ashore [...]". The song anchors the midsection of Medúlla, "jubilantly punctuated with bubbling synth and propelled by the rolling, spitfire cadence of Rahzel's beatbox", according to the reviewer.[5] The last line from the song, "Your sweat is salty / And I am why / Your sweat is salty / And I am why", is about how "we were all little jellyfish or whatever before we made it on to land", according to the singer.[1] Elthan Brown from New York magazine considered these lyrics "frank sensuality".[6] "Oceania" also features The London Choir.[7] Entertainment Weekly's writer Chris Willman commented that "the computer-enhanced choir behind Björk [suggests] a cosmic harem of pleased dolphins. Here she imagines herself as the sea itself, proud of all the belegged creatures she's spit out onto land over the last hundred million years. It's the nearest evolutionists have come to having their own gospel tune".[8]
A remix version of "Oceania" featuring additional lyrics and vocals by American singer Kelis was recorded. She explained that they were set to perform at the Fashion Rocks concert in London the previous year, and their dressing rooms were right next to each other. Björk had an album by Canadian singer Peaches that was skipping, so Kelis gave her the copy of the album she had. They started talking, eventually hung out and exchanged numbers after the show, and later Björk contacted Kelis to work together, which she agreed to. Kelis then recorded her vocals at Electric Lady Studios in New York City,[9] and wrote her own words for the song, from the point of view of the continents.[10] Originally not intended for commercial release, the remix leaked after being played on BBC Radio 1's The Breezeblock, but was then included on the "Who Is It" single as a B-side. According to The Guardian, "it's a brilliant fusing together of two distinct voices, Kelis handling the breathy first verse, as layers of her chopped-up vocals form the rhythm track, while Björk at first comes across as restrained, allowing Kelis' ad-libs to soar before unleashing a song-stopping, wordless roar that heralds the song's dramatic final coda".[11]
A piano version also appeared on the DVD single, assisted in its creation by Nico Muhly. During an interview he stated, "When Björk asked me to play piano on Oceania, she sent me the music, and it was as complicated and layered as any piece of classical music I've played. I spent a few days figuring out how to make her vision of 'dueling lounge-lizard pianists' physically possible, and in the session, we ran through those quickly. Then, she experimented with different ways to space the progression of chords that runs through the piece - I suggested big, Brahmsy blocks - as well as the ending, for which we tried diaphanous, Debussy-like arpeggios".[12] Björk decided to stick with the album's vocal concept and use electronically tweaked choral voices. Before some last-minute polishing by Mark Bell, this version of "Oceania" was the last track to be worked on for Medúlla.[13]
"Oceania" received generally positive reviews from music critics. Jennifer Vineyard from MTV News called the song "one of those polarizing songs, with its Ethel Merman-like synchronized vocal sweeps that do suggest the aquatic, in a 1950s sort of way".[14]Entertainment Weekly's Chris Willman labeled the track as a "strikingly beautiful" song.[8] Alex Ross, reporter writing for The New Yorker stated that with "Oceania", Bjrk "confirmed her status as the ultimate musical cosmopolitan", acquainted with Karlheinz Stockhausen and the Wu-Tang Clan.[13] Matthew Gasteier from Prefix magazine called the track "the best song on the album", whilst complimenting "its swooping chorus [which] recalls the migration of birds or the time-elapsed drifting of icebergs, a swirl of beauty and power crashing down onto and then rising above the mix. It culminates in the near screech that leads into the sexy-spooky coda".[15] According to Andy Battaglia from The A.V. Club, in a positive review, "the electronic flourish strays from her organic vocal focus, but Bjrk summons the same kind of tingle with choral language" in the song, "which finds The London Choir reacting to what sounds like a thrilling slow-motion circus act".[7]
"Oceania" was "spoilt by some overenthusiastic vocal whoopings", according to David Hooper from BBC Music.[16]The Guardian's writer David Peschek said that when the singer sings in the song, "choral swoops [explodes] like fireworks behind her".[17]AllMusic's Heather Phares noted that the song, along with Medlla's lead single "Who Is It", "have an alien quality that is all the stranger considering that nearly all of their source material is human (except for the odd keyboard or two)".[18] Dominique Leone of Pitchfork thought "Oceania" was hardly the most obvious choice for a promotional single release, despite its "bizarre, swooping soprano lines and cyclical chord progression outlined by a chorus of Wyatt vocal samples".[19] Jeremy D. Larson from Time magazine provided a mixed review to the song, stating that it was the best Olympic theme song, but during the Olympics performance, "when she sang 'Every pearl is a lynx is a girl' we think you could hear the world collectively sigh, 'Where's Celine Dion?'".[4] In 2005, the song was nominated for Best Female Pop Vocal Performance at the 47th Grammy Awards but lost out to Norah Jones' "Sunrise".[20]
The accompanying music video for "Oceania" was directed by Lynn Fox and premiered on August 13, 2004 through Björk's official site.[21] According to Lynn Fox, Björk gave the team the initial sketch of the track in January 2004. Whilst they were doing scribbles for it, they had several phone conversations with the singer and emailed her images to keep her up to date with the progress of the work. For "Oceania", initial animations took six weeks, followed by a couple of days preparing for the shoot in Iceland and a few more days afterwards to put all the shots together.[22] As in the song, in the music video Björk is depicted as "Mother Oceania". The video opens with the surface of a body of water appearing yellowish and bright. The camera pans down to darker, deeper waters. Björk appears out of the dark background, singing and covered with sparkling jewels. As the second verse begins, images of sea anemones, representing the continents (her children), are thrown from Björk's hands.[23]
During the third verse they swim around and away from their mother, carried by the currents, which move in time with the song. In the bridge section, new sea flowers, with brilliant colors, emerge from the background, in contrast to the muted and darker colors of previous scenes. As the fifth verse continues, the camera pans back up to the much lighter surface, not seen since the beginning of the video. All sorts of marine life are swimming about the surface. Shortly after the sixth verse begins, Björk is shown in deep, dark water. Several seconds later, the lighter surface of the water is shown without her. When she begins to sing "Your sweat is salty", a somewhat rapid alternation of images ensues: the light surface is shown for one second, followed by Björk singing in the deep water; these scenes alternate until she stops singing during the coda. Björk's vocal repetition ceases at the same time the visual alternation stops. The surface scene recedes, and Björk in the deep water comes to the fore, slowing. At the end of the video, she stands and smiles.[23]
At the 2004 Summer Olympics Opening Ceremony, where Björk premiered the song, she wore a very large dress which unfolded during her performance of "Oceania" to eventually occupy the entire stadium, showing a map of the world as a sign of union.[4] Additionally, Björk wore "bluish-purple glittery eye shadow across her lids. Her dark hair dangled in tiny twists that framed her pixieish, freckled face".[24] Immediately after the performance at the Olympics opening ceremony, the song was downloaded more than 11,000 times on the iTunes Store.[25] Jake Coyle from Today commented that her dress was "reminiscent in its uniqueness to the infamous swan dress she wore to the Oscars in 2001".[26] According to Jeremy D. Larson from Time, if it weren't for the fireworks at the end of the song, he was legitimately unsure whether people in the audience would have cheered.[4] Dominique Leone of Pitchfork was surprised by the committee's choice of bringing Björk to perform at the ceremony, and stated: "They could have had anyone – say, a reassuring Celine Dion or a physically ideal Beyoncé – but they chose a prickly, decidedly uncomfortable Icelandic woman. On aesthetic grounds, I can't argue with their choice, but I continue to wonder about Björk's significance".[19] "Oceania" was also performed during the Volta Tour (2007–08).[27]
The song was sampled by E-40 in the track "Spend the Night" featuring Laroo, The DB'z, Droop-E and B-Slimm on his 2010 album Revenue Retrievin': Night Shift.[28] SPIRITWO and singer Yael Claire covered "Oceania" with a Middle Eastern theme for the 2012 London Olympic Games.[29] Aspiring singer Srbuhi Hovhannisyan also covered the song on The Voice of Armenia in 2014.[30] "Oceania" covers also appear on albums by Beliss, Harmen Fraanje Quintet, Murphy's Law and Serena Fortebraccio.[31]
Credits adapted from the Medúlla liner notes.[33]
Posted in Oceania
Comments Off on Oceania (song) – Wikipedia, the free encyclopedia
Nations of Nineteen Eighty-Four – Wikipedia, the free …
Posted: at 11:08 pm
Oceania, Eurasia and Eastasia are the three fictional superstates in George Orwell's futuristic dystopian novel Nineteen Eighty-Four.
The history of how the world evolved into these three states is vague. They appear to have emerged from nuclear warfare and civil dissolution over 20 years between 1945 (the end of the Second World War) and 1965. Eurasia was likely formed first, followed closely afterwards by Oceania, with Eastasia emerging a decade later, possibly in the 1960s.
Oceania is the superstate where protagonist Winston Smith dwells. It is believed to be composed of the Americas, the British Isles (called "Airstrip One" in the novel), Iceland, Australia, New Zealand, and southern Africa below the River Congo. It also controls, to different degrees and at various times during the course of its perpetual war with either Eurasia or Eastasia, the polar regions, India, Indonesia and the islands of the Pacific. Oceania lacks a single capital city, although London and apparently New York City may be regional capitals. In the novel, Emmanuel Goldstein, Oceania's declared public enemy number one, describes it in the fictional book The Theory and Practice of Oligarchical Collectivism as a result of the United States having absorbed the British Empire. Goldstein's book also states that Oceania's primary natural defense is the sea surrounding it.
The ruling doctrine of Oceania is Ingsoc, the Newspeak euphemism for English Socialism. Its nominal leader is Big Brother, believed by the masses to have been the leader of the revolution and still used as an icon by the party. The personality cult is maintained through Big Brother's function as a focal point for love, fear, and reverence, more easily felt towards an individual than towards an organization.
The unofficial language of Oceania is English (officially called Oldspeak), and the official language is Newspeak. The restructuring of the language is intended to eliminate unorthodox political and social thought, by eliminating the words needed to express it.
The society of Oceania is sharply stratified into three groups: the small ruling Inner Party, the more numerous and highly indoctrinated Outer Party, and the large body of politically meaningless Proles. Except for certain rare exceptions like Hate Week, the proles remain essentially outside Oceania's political control and are placated by trivial sports and other entertainment; the Thought Police easily manage any Prole socially aware enough to be a problem.
Oceania's national anthem is "Oceania, 'Tis for Thee", which, in one of the three film versions of the book, takes the form of a crescendo of organ music along with operatic lyrics. The lyrics are sung in English, and the song is reminiscent of "God Save the Queen" and "My Country, 'Tis of Thee".
Even the names of countries, and their shapes on the map, had been different. Airstrip One, for instance, had not been so called in those days: it had been called England, or Britain, though London, he felt fairly certain, had always been called London.[1]
Like Europe as a whole, Britain was hit by atomic weapons in the conflicts before the revolutions in Oceania and then elsewhere. One British town, Colchester, is referenced specifically as having been destroyed; flashbacks to Smith's childhood also include scenes of Londoners taking refuge in the city's underground transit tunnels in the midst of the bombing.
It is stated that Eurasia was formed when the Soviet Union annexed the rest of continental Europe, creating a single polity stretching from Portugal to the Bering Strait. Orwell frequently describes the face of the standard Eurasian as "mongolic" in the novel. The only soldiers other than Oceanians that appear in the novel are the Eurasians. When a large number of captured soldiers are executed in Victory Square, some Slavs are mentioned, but the stereotype of the Eurasian maintained by the Party is Mongoloid, like O'Brien's servant, Martin. This implies that the Party uses racism to avert sympathy toward an enemy.
According to Goldstein's book, Eurasia's main natural defense is its vast territorial extent, while the ruling ideology of Eurasia is identified as "Neo-Bolshevism", a variation of the Oceanian "Ingsoc".
Eastasia's borders are not as clearly defined as those of the other two superstates, but it is known that they encompass most of modern-day China, Japan, Taiwan and Korea. Eastasia repeatedly captures and loses Indonesia, New Guinea, and the various Pacific archipelagos. Its political ideology is, according to the novel, "called by a Chinese name usually translated as Death-worship, but perhaps better rendered as 'Obliteration of the Self'". Orwell does not appear to have based this on any existing Chinese word or phrase.[2]
Not much information about Eastasia is given in the book. It is known that it is the newest and smallest of the three superstates. According to Goldstein's book, it emerged a decade after the establishment of the other two superstates, placing it somewhere in the 1960s, after years of "confused fighting" among its predecessor nations. (At the time of writing, the victory of Mao Zedong's Communists in the Chinese Civil War was not yet taken as a foregone conclusion. The Korean War had also not yet occurred, but Korea was already being administered by two competing governments. Japan was still under military occupation and, at least until shortly before Orwell completed the book, by several different powers. Power in the real life nations that make up the fictional Eastasia was, therefore, very much in flux.) It is also said in the book that the industriousness and fecundity of the people of Eastasia allows them to overcome their territorial inadequacy in comparison to the other two powers. At the time Orwell wrote the book, East Asians, including the Japanese, all had birth rates higher than those of Europeans.[citation needed]
The "disputed area", which lies "between the frontiers of the super-states", is "a rough quadrilateral with its corners at Tangier, Brazzaville, Darwin, and Hong Kong".[3] This area is fought over during the perpetual war among the three great powers, with one power sometimes exerting control over vast swathes of the disputed territory, only to lose it again. The reason three super-countries seek to control this area is to harness the large population and vast resources within the region. Control of the islands in the Pacific and the polar regions is also constantly shifting, though none of the three superpowers ever gains a lasting hold on these regions. The inhabitants of the area, having no allegiance to any nation, live in constant slavery under whichever power controls them at that time.
Eastasia and Eurasia fight over "a large but fluctuating portion of Manchuria, Mongolia, and Tibet".
At one point during the novel, Julia procures tea to share with Winston, and remarks that she thinks Oceania recently captured India (or perhaps parts of India) but such "control" is usually transient.
The world of Nineteen Eighty-Four exists in a state of perpetual war among the three major powers. At any given time, two of the three states are aligned against the third; for example Oceania and Eurasia against Eastasia or Eurasia and Eastasia against Oceania. However, as Goldstein's book points out, each Superstate is so powerful that even an alliance of the other two cannot destroy it, resulting in a continuing stalemate. From time to time, one of the states betrays its ally and sides with its former enemy. In Oceania, when this occurs, the Ministry of Truth rewrites history to make it appear that the current state of affairs is the way it has always been, and documents with contradictory information are destroyed in the memory hole.
Goldstein's book states that the war is not a war in the traditional sense, but simply exists to use up resources and keep the population in line. Victory for any side is not attainable or even desirable, but the Inner Party, through an act of doublethink, believes that such victory is in fact possible. Although the war began with the limited use of atomic weapons in a limited atomic war in the 1950s, none of the combatants use them any longer for fear of upsetting the balance of power. Relatively few technological advances have been made (the only two mentioned are the replacement of bombers with "rocket bombs" and of traditional capital ships with the immense "floating fortresses").
Almost all of the information about the world beyond London is given to the reader through government or Party sources, which by the very premise of the novel are unreliable. Specifically, at one point Julia brings up the idea that the war is fictional and that the rocket bombs falling from time to time on London are fired by the government of Oceania itself, in order to maintain the war atmosphere among the population (better known as a false flag operation). The protagonists have no means of proving or disproving this theory. However, during preparations for Hate Week, rocket bombs fell at an increasing rate, hitting places such as playgrounds and crowded theatres, causing mass casualties and increased hysteria and hatred for the party's enemies. War is also a convenient pretext for maintaining a huge military-industrial complex in which the state is committed to developing and acquiring large and expensive weapons systems which almost immediately become obsolete and require replacement. Finally, according to Goldstein's book, war makes handing over power to a small caste easier, and gives a pretext for doing so.
Because of this ambiguity, it is entirely possible that the geopolitical situation described in Goldstein's book is entirely fictitious; perhaps The Party controls the whole world, or possibly its power is limited to just Great Britain as a lone and desperate rogue nation using fanaticism and hatred of the outside world to compensate for political impotence. It's also possible that a genuine and large-scale resistance movement exists, or that Oceania is indeed under a large-scale attack by outside forces.
Posted in Oceania
Comments Off on Nations of Nineteen Eighty-Four – Wikipedia, the free …
Oceania ecozone – Wikipedia, the free encyclopedia
Posted: at 11:08 pm
The Oceania ecozone is one of the World Wildlife Fund (WWF) ecozones, and is unique in not including any continental land mass. It is the smallest of the WWF ecozones in land area.
This ecozone includes the islands of the Pacific Ocean in: Micronesia, the Fijian Islands, the Hawaiian islands, and Polynesia (with the exception of New Zealand).
New Zealand, Australia, and most of Melanesia including New Guinea, Vanuatu, the Solomon Islands, and New Caledonia are included within the Australasia ecozone.
Oceania is geologically the youngest ecozone. While other ecozones include old continental land masses or fragments of continents, Oceania is composed mostly of volcanic high islands and coral atolls that arose from the sea in geologically recent times, many of them in the Pleistocene. They were created either by hotspot volcanism, or as island arcs pushed upward by the collision and subduction of tectonic plates. The islands range from tiny islets, sea stacks and coral atolls to large mountainous islands, like Hawaii and Fiji.
The climate of Oceania's islands is tropical or subtropical, and ranges from humid to seasonally dry. Wetter parts of the islands are covered by Tropical and subtropical moist broadleaf forests, while the drier parts of the islands, including the leeward sides of the islands and many of the low coral islands, are covered by Tropical and subtropical dry broadleaf forests and Tropical and subtropical grasslands, savannas, and shrublands. Hawaii's high volcanoes, Mauna Kea and Mauna Loa, are home to some rare tropical Montane grasslands and shrublands.
Since the islands of Oceania were never connected by land to a continent, the flora and fauna of the islands originally reached them from across the ocean (though at the height of the last ice age sea levels were much lower than today and many current seamounts were islands, so some now isolated islands were once less isolated). Once they reached the islands, the ancestors of Oceania's present flora and fauna adapted to life on the islands.
Larger islands with diverse ecological niches encouraged floral and faunal adaptive radiation, whereby multiple species evolved from a common ancestor, each species adapted to a different ecological niche; the various species of Hawaiian honeycreepers (Family Drepanididae) are a classic example. Other adaptations to island ecologies include gigantism, dwarfism, and among birds, loss of flight. Oceania has a number of endemic species; Hawaii in particular is considered a global 'center of endemism', with its forest ecoregions having one of the highest percentages of endemic plants in the world.
Land plants disperse by several different means. Many plants, mostly ferns and mosses but also some flowering plants, disperse on the wind, relying on tiny spores or feathery seeds that can remain airborne over long distances; notably, Metrosideros trees from New Zealand spread on the wind across Oceania. Other plants, notably coconut palms and mangroves, produce seeds that can float in salt water over long distances, eventually washing up on distant beaches, and thus Cocos trees are ubiquitous across Oceania. Birds are also an important means of dispersal; some plants produce sticky seeds that are carried on the feet or feathers of birds, and many plants produce fruits with seeds that can pass through the digestive tracts of birds. Pandanus trees are fairly ubiquitous across Oceania.
Botanists generally agree that much of the flora of Oceania is derived from the Malesian Flora of the Malay Peninsula, Indonesia, the Philippines, and New Guinea, with some plants from Australasia and a few from the Americas, particularly in Hawaii. Easter Island has some plants from South America such as the totora reed.
Dispersal across the ocean is difficult for most land animals, and Oceania has relatively few indigenous land animals compared to other ecozones. Certain types of animals that are ecologically important on the continental ecozones, like large land predators and grazing mammals, were entirely absent from the islands of Oceania until humans brought them. Birds are relatively common, including many seabirds and some species of land birds whose ancestors may have been blown out to sea by storms. Some birds evolved into flightless species after their ancestors arrived, including several species of rails. A number of islands have indigenous lizards, including geckoes and skinks, whose ancestors probably arrived on floating rafts of vegetation washed out to sea by storms. With the exception of bats, which live on most of the island groups, there are few if any indigenous mammal species in Oceania.
Many animal and plant species have been introduced by humans in two main waves.
Malayo-Polynesian settlers brought pigs, dogs, chickens and Polynesian rats to many islands, and had spread across the whole of Oceania by 1200 CE. From the seventeenth century onwards, European settlers brought other animals, including cats, cattle, horses, the small Asian mongoose (Herpestes javanicus), sheep, goats, and the brown rat (Rattus norvegicus). These and other introduced species, in addition to overhunting and deforestation, have dramatically altered the ecology of many of Oceania's islands, pushing many species to extinction or near-extinction, or confining them to small islets uninhabited by humans.
The absence of predator species caused many bird species to become 'naive', losing the instinct to flee from predators, and to lay their eggs on the ground, which makes them vulnerable to introduced predators like cats, dogs, mongooses, and rats. The arrival of humans on these island groups often resulted in disruption of the indigenous ecosystems and waves of species extinctions (see Holocene extinction event). Easter Island, the easternmost island in Polynesia, shows evidence of a human-caused ecosystem collapse several hundred years ago, which contributed (along with slave raiding and European diseases) to a 99% decline in the human population of the island. The island, once lushly forested, is now mostly windswept grasslands. More recently, Guam's native bird and lizard species were decimated by the introduction of the brown tree snake (Boiga irregularis) in the 1940s.
Posted in Oceania
Comments Off on Oceania ecozone – Wikipedia, the free encyclopedia
Oceania Cruises : Huge Discounts on Oceania Vacations …
Posted: at 11:08 pm
Posted in Oceania
Comments Off on Oceania Cruises : Huge Discounts on Oceania Vacations …
Artificial intelligence (video games) – Wikipedia, the free …
Posted: at 11:03 pm
In video games, artificial intelligence is used to generate intelligent behaviors primarily in non-player characters (NPCs), often simulating human-like intelligence. The techniques used typically draw upon existing methods from the field of artificial intelligence (AI). However, the term game AI is often used to refer to a broad set of algorithms that also include techniques from control theory, robotics, computer graphics and computer science in general.
Since game AI for NPCs is centered on appearance of intelligence and good gameplay within environment restrictions, its approach is very different from that of traditional AI; workarounds and cheats are acceptable and, in many cases, the computer abilities must be toned down to give human players a sense of fairness. This, for example, is true in first-person shooter games, where NPCs' otherwise perfect aiming would be beyond human skill.
Game playing was an area of research in AI from its inception. One of the first examples of AI is the computerised game of Nim made in 1951 and published in 1952. Despite being advanced technology in the year it was made, 20 years before Pong, the game took the form of a relatively small box and was able to regularly win games even against highly skilled players of the game.[1] In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[2] These were among the first computer programs ever written. Arthur Samuel's checkers program, developed in the middle 50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur.[3] Work on checkers and chess would culminate in the defeat of Garry Kasparov by IBM's Deep Blue computer in 1997.[4] The first video games developed in the 1960s and early 1970s, like Spacewar!, Pong, and Gotcha (1973), were games implemented on discrete logic and strictly based on the competition of two players, without AI.
Games that featured a single player mode with enemies started appearing in the 1970s. The first notable ones for the arcade appeared in 1974: the Taito game Speed Race (racing video game) and the Atari games Qwak (duck hunting light gun shooter) and Pursuit (fighter aircraft dogfighting simulator). Two text-based computer games from 1972, Hunt the Wumpus and Star Trek, also had enemies. Enemy movement was based on stored patterns. The incorporation of microprocessors would allow more computation and random elements overlaid into movement patterns.
It was during the golden age of video arcade games that the idea of AI opponents was largely popularized, due to the success of Space Invaders (1978), which sported an increasing difficulty level, distinct movement patterns, and in-game events dependent on hash functions based on the player's input. Galaxian (1979) added more complex and varied enemy movements, including maneuvers by individual enemies who break out of formation. Pac-Man (1980) introduced AI patterns to maze games, with the added quirk of different personalities for each enemy. Karate Champ (1984) later introduced AI patterns to fighting games, although the poor AI prompted the release of a second version. First Queen (1988) was a tactical action RPG which featured characters that can be controlled by the computer's AI in following the leader.[5][6] The role-playing video game Dragon Quest IV (1990) introduced a "Tactics" system, where the user can adjust the AI routines of non-player characters during battle, a concept later introduced to the action role-playing game genre by Secret of Mana (1993).
Games like Madden Football, Earl Weaver Baseball and Tony La Russa Baseball all based their AI on an attempt to duplicate on the computer the coaching or managerial style of the selected celebrity. Madden, Weaver and La Russa all did extensive work with these game development teams to maximize the accuracy of the games.[citation needed] Later sports titles allowed users to "tune" variables in the AI to produce a player-defined managerial or coaching strategy.
The emergence of new game genres in the 1990s prompted the use of formal AI tools like finite state machines. Real-time strategy games taxed the AI with many objects, incomplete information, pathfinding problems, real-time decisions and economic planning, among other things.[7] The first games of the genre had notorious problems. Herzog Zwei (1989), for example, had almost broken pathfinding and very basic three-state state machines for unit control, and the AI in Dune II (1992) attacked the player's base in a beeline and used numerous cheats.[8] Later games in the genre exhibited more sophisticated AI.
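To make the idea concrete, here is a minimal sketch of the kind of three-state machine described above for controlling an RTS unit; the state names, ranges and transition rules are illustrative assumptions, not a reconstruction of any particular game.

```python
# Minimal three-state finite state machine for an RTS unit (illustrative sketch).
# States and transition thresholds are assumptions for demonstration only.

class UnitFSM:
    def __init__(self, attack_range=5.0, sight_range=12.0):
        self.state = "idle"
        self.attack_range = attack_range
        self.sight_range = sight_range

    def update(self, distance_to_enemy):
        """Pick the next state from the current state and a single observation."""
        if self.state == "idle":
            if distance_to_enemy <= self.sight_range:
                self.state = "move"          # enemy spotted: close the distance
        elif self.state == "move":
            if distance_to_enemy <= self.attack_range:
                self.state = "attack"        # in range: start attacking
            elif distance_to_enemy > self.sight_range:
                self.state = "idle"          # lost sight of the enemy
        elif self.state == "attack":
            if distance_to_enemy > self.attack_range:
                self.state = "move"          # enemy retreated: chase again
        return self.state


unit = UnitFSM()
for d in (20.0, 10.0, 4.0, 30.0):            # simulated distances over four ticks
    print(d, "->", unit.update(d))
```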
Later games have used bottom-up AI methods, such as the emergent behaviour and evaluation of player actions in games like Creatures or Black & White. Façade, an interactive story released in 2005, used interactive multi-path dialogue and AI as the main aspect of the game.
Games have provided an environment for developing artificial intelligence with potential applications beyond gameplay. Examples include Watson, a Jeopardy!-playing computer; and the RoboCup tournament, where robots are trained to compete in soccer.[9]
Purists complain that the "AI" in the term "game AI" overstates its worth, as game AI is not about intelligence, and shares few of the objectives of the academic field of AI. Whereas "real" AI addresses fields of machine learning, decision making based on arbitrary data input, and even the ultimate goal of strong AI that can reason, "game AI" often consists of a half-dozen rules of thumb, or heuristics, that are just enough to give a good gameplay experience.[citation needed] Historically, academic game-AI projects have been relatively separate from commercial products because the academic approaches tended to be simple and non-scalable. Commercial game AI has developed its own set of tools, which have been sufficient to give good performance in many cases.[10]
Game developers' increasing awareness of academic AI and a growing interest in computer games by the academic community is causing the definition of what counts as AI in a game to become less idiosyncratic. Nevertheless, significant differences between different application domains of AI mean that game AI can still be viewed as a distinct subfield of AI. In particular, the ability to legitimately solve some AI problems in games by cheating creates an important distinction. For example, inferring the position of an unseen object from past observations can be a difficult problem when AI is applied to robotics, but in a computer game an NPC can simply look up the position in the game's scene graph. Such cheating can lead to unrealistic behavior and so is not always desirable. But its possibility serves to distinguish game AI and leads to new problems to solve, such as when and how to use cheating.[citation needed]
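A small hedged sketch of that distinction follows; the world layout, the field-of-view test and the range values are assumptions chosen for illustration, not how any specific engine exposes its scene graph.

```python
import math

# Hypothetical world state: the engine always knows every entity's position.
world = {"player": (10.0, 2.0), "npc": (0.0, 0.0)}

def cheating_locate(world):
    """'Cheat': read the player's exact position straight from the game state."""
    return world["player"]

def perceptual_locate(world, facing=(1.0, 0.0), fov_deg=90.0, max_range=15.0):
    """Honest check: only 'see' the player if they are in range and inside the NPC's view cone."""
    nx, ny = world["npc"]
    px, py = world["player"]
    dx, dy = px - nx, py - ny
    dist = math.hypot(dx, dy)
    if dist > max_range:
        return None
    angle = math.degrees(math.acos((dx * facing[0] + dy * facing[1]) / (dist or 1e-9)))
    return (px, py) if angle <= fov_deg / 2 else None

print(cheating_locate(world))      # always succeeds
print(perceptual_locate(world))    # may return None if the player is out of view
```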
The major limitation to strong AI is the inherent depth of thinking and the extreme complexity of the decision-making process. This means that although it would theoretically be possible to make "smart" AI, the problem would take considerable processing power.[citation needed]
Game AI/heuristic algorithms are used in a wide variety of quite disparate fields inside a game. The most obvious is in the control of any NPCs in the game, although scripting is currently the most common means of control. Pathfinding is another common use for AI, widely seen in real-time strategy games. Pathfinding is the method for determining how to get an NPC from one point on a map to another, taking into consideration the terrain, obstacles and possibly "fog of war". Beyond pathfinding, navigation is a sub-field of game AI focusing on giving NPCs the capability to navigate in their environment, finding a path to a target while avoiding collisions with other entities (other NPCs, players...) or collaborating with them (group navigation).
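The article does not name a specific algorithm, but grid-based A* is a common way to implement this kind of pathfinding; here is a minimal sketch, where the map layout and the uniform movement costs are illustrative assumptions.

```python
import heapq

# Minimal A* on a 2D grid; '#' cells are obstacles. Map and costs are illustrative assumptions.
GRID = [
    ".....",
    ".###.",
    ".....",
]

def neighbors(pos):
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield nr, nc

def astar(start, goal):
    """Return a list of grid cells from start to goal, or None if no path exists."""
    frontier = [(0, start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for nxt in neighbors(current):
            new_cost = cost[current] + 1
            if nxt not in cost or new_cost < cost[nxt]:
                cost[nxt] = new_cost
                heuristic = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])  # Manhattan distance
                heapq.heappush(frontier, (new_cost + heuristic, nxt))
                came_from[nxt] = current

print(astar((0, 0), (2, 4)))  # walks around the wall of '#' cells
```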
The concept of emergent AI has recently been explored in games such as Creatures, Black & White and Nintendogs and toys such as Tamagotchi. The "pets" in these games are able to "learn" from actions taken by the player and their behavior is modified accordingly. While these choices are taken from a limited pool, it does often give the desired illusion of an intelligence on the other side of the screen.
Many contemporary video games fall under the category of action, first person shooter, or adventure. In most of these types of games there is some level of combat that takes place. The AI's ability to be efficient in combat is important in these genres. A common goal today is to make the AI more human, or at least appear so.
One of the more positive and efficient features found in modern-day video game AI is the ability to hunt. AI originally reacted in a very black-and-white manner: if the player were in a specific area, the AI would react either in a completely offensive manner or be entirely defensive. In recent years, the idea of "hunting" has been introduced; in this 'hunting' state the AI will look for realistic markers, such as sounds made by the character or footprints they may have left behind.[11] These developments ultimately allow for a more complex form of play. With this feature, the player can actually consider how to approach or avoid an enemy. This is a feature that is particularly prevalent in the stealth genre.
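As a loose sketch of that "hunting" state, the snippet below keeps a list of recent stimuli (gunshots, footprints) and investigates the freshest one; the stimulus types and the decay window are invented for illustration.

```python
import time

# Hypothetical stimulus records an NPC might investigate while "hunting".
# Stimulus types, positions, and the 30-second decay window are illustrative assumptions.
stimuli = [
    {"kind": "gunshot",   "pos": (40, 12), "time": time.time() - 5},
    {"kind": "footprint", "pos": (38, 10), "time": time.time() - 60},
]

def pick_hunt_target(stimuli, now=None, max_age=30.0):
    """Return the position of the freshest stimulus that has not yet decayed."""
    now = now if now is not None else time.time()
    fresh = [s for s in stimuli if now - s["time"] <= max_age]
    if not fresh:
        return None                       # nothing to investigate: fall back to patrolling
    return max(fresh, key=lambda s: s["time"])["pos"]

print(pick_hunt_target(stimuli))          # -> (40, 12): the recent gunshot wins
```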
Another development in recent game AI has been the development of a "survival instinct". In-game computers can recognize different objects in an environment and determine whether an object is beneficial or detrimental to their survival. Like a user, the AI can "look" for cover in a firefight before taking actions that would leave it otherwise vulnerable, such as reloading a weapon or throwing a grenade. There can be set markers that tell it when to react in a certain way. For example, if the AI is given a command to check its health throughout a game, then further commands can be set so that it reacts a specific way at a certain percentage of health. If the health is below a certain threshold, the AI can be set to run away from the player and avoid it until another function is triggered. Another example: if the AI notices it is out of bullets, it will find a cover object and hide behind it until it has reloaded. Actions like these make the AI seem more human. However, there is still a need for improvement in this area.
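A hedged sketch of those threshold-driven survival rules might look like the following; the 25% health cutoff and the action names are assumptions made for the example.

```python
# Simple priority-ordered survival rules for a combat NPC.
# The 25% health threshold and the action names are illustrative assumptions.

def choose_action(health, max_health, ammo, in_cover):
    if health / max_health < 0.25:
        return "flee"                      # badly hurt: break off and avoid the player
    if ammo == 0 and not in_cover:
        return "move_to_cover"             # vulnerable while empty: get behind something first
    if ammo == 0:
        return "reload"                    # safe behind cover: reload before re-engaging
    return "attack"

print(choose_action(health=20, max_health=100, ammo=5, in_cover=False))   # flee
print(choose_action(health=80, max_health=100, ammo=0, in_cover=False))   # move_to_cover
print(choose_action(health=80, max_health=100, ammo=0, in_cover=True))    # reload
```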
Another side-effect of combat AI occurs when two AI-controlled characters encounter each other; first popularized in the id Software game Doom, so-called 'monster infighting' can break out in certain situations. Specifically, AI agents that are programmed to respond to hostile attacks will sometimes attack each other if their cohort's attacks land too close to them.[citation needed] In the case of Doom, published gameplay manuals even suggest taking advantage of monster infighting in order to survive certain levels and difficulty settings.
Georgios N. Yannakakis suggests that academic AI developments can play roles in game AI beyond the traditional paradigm of AI controlling NPC behavior.[10] He highlights four other potential application areas:
In the context of artificial intelligence in video games, cheating refers to the programmer giving agents actions and access to information that would be unavailable to the player in the same situation.[12] In a simple example, if the agents want to know whether the player is nearby, they can either be given complex, human-like sensors (seeing, hearing, etc.), or they can cheat by simply asking the game engine for the player's position. Common variations include giving AIs higher speeds in racing games to catch up to the player, or spawning them in advantageous positions in first-person shooters. The use of cheating in AI shows the limitations of the "intelligence" achievable artificially; generally speaking, in games where strategic creativity is important, humans could easily beat the AI after a minimum of trial and error if it were not for this advantage. Cheating is often implemented for performance reasons, and in many cases it may be considered acceptable as long as the effect is not obvious to the player. While cheating refers only to privileges given specifically to the AI (it does not include the inhuman swiftness and precision natural to a computer), a player might still call the computer's inherent advantages "cheating" if they result in the agent acting unlike a human player.[12] Sid Meier stated that he omitted multiplayer alliances in Civilization because he found that the computer was almost as good as humans at using them, which caused players to think that the computer was cheating.[13]
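To make the sensor-versus-engine-query distinction concrete, here is a hedged sketch; the field names, the range check and the "in shadow" test are invented stand-ins for whatever perception model a real engine would use.

```python
import math

def sensed_player_position(agent, player, max_range=20.0):
    """'Fair' perception: the agent only learns the player's position if a simulated sight check passes."""
    dist = math.dist(agent["pos"], player["pos"])
    if dist <= max_range and not player.get("in_shadow", False):  # crude stand-ins for vision tests
        return player["pos"]
    return None

def cheating_player_position(game_state):
    """'Cheating' perception: just ask the game state directly, regardless of what the agent could see."""
    return game_state["player"]["pos"]

player = {"pos": (5.0, 2.0), "in_shadow": False}
agent = {"pos": (1.0, 1.0)}
game_state = {"player": player}
print(sensed_player_position(agent, player))   # (5.0, 2.0) -- within range and visible
print(cheating_player_position(game_state))    # (5.0, 2.0) -- no perception test at all
```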
Creatures is an artificial life program where the user "hatches" small furry animals and teaches them how to behave. These "Norns" can talk, feed themselves, and protect themselves against vicious creatures. It was the first popular application of machine learning in an interactive simulation. Neural networks are used by the creatures to learn what to do. The game is regarded as a breakthrough in artificial life research, which aims to model the behavior of creatures interacting with their environment.[14]
Halo: Combat Evolved is a first-person shooter in which the player assumes the role of the Master Chief, battling various aliens on foot or in vehicles. Enemies use cover very wisely and employ suppressive fire and grenades. The squad situation affects individuals, so certain enemies flee when their leader dies. A lot of attention is paid to the little details, with enemies notably throwing back grenades or team members responding when the player bothers them. The underlying "behavior tree" technology has become very popular in the games industry (especially since Halo 2).[14]
F.E.A.R. is a first-person shooter where the player helps contain supernatural phenomena and armies of cloned soldiers. The AI uses a planner to generate context-sensitive behaviors, the first time in a mainstream game, and the technology is still used as a reference by many studios today. The enemies are capable of using the environment very cleverly: finding cover behind tables, tipping bookshelves, opening doors, crashing through windows, and so on. Squad tactics are used to great effect; the enemies perform flanking maneuvers, use suppressive fire, and more.[14]
S.T.A.L.K.E.R.: Shadow of Chernobyl is a first-person shooter survival horror game where the player must face man-made experiments, military soldiers, and mercenaries known as Stalkers. The various enemies encountered (if the difficulty level is set to its highest) use combat tactics and behaviours such as healing wounded allies, giving orders, out-flanking the player, and using weapons with pinpoint accuracy.[citation needed]
Far Cry 2 is a first-person shooter where the player fights off numerous mercenaries and assassinates faction leaders. The AI is behavior-based and uses action selection, which is essential if an AI is to multitask or react to a situation, and it can react in an unpredictable fashion in many situations. Enemies respond to sounds and visual distractions such as fire or nearby explosions and may investigate the hazard; the player can use these distractions to his own advantage. There are also social interactions with the AI, not in the form of direct conversation but reactionary: if the player gets too close to or nudges an AI character, he may be shoved away, sworn at, and by extension aimed at. Other social interactions between AI characters occur in combat or in neutral situations: if an enemy AI is injured on the ground, it will shout for help, show emotional distress, and so on.[citation needed]
Read the original post:
Artificial intelligence (video games) - Wikipedia, the free ...
Posted in Ai
History of artificial intelligence – Wikipedia, the free …
Posted: at 11:03 pm
The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with "an ancient wish to forge the gods."
The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.
The Turing test was proposed by British mathematician Alan Turing in his 1950 paper Computing Machinery and Intelligence, which opens with the words: "I propose to consider the question, 'Can machines think?'" The term 'Artificial Intelligence' was created at a conference held at Dartmouth College in 1956.[2] Allen Newell, J. C. Shaw, and Herbert A. Simon pioneered the newly created artificial intelligence field with the Logic Theory Machine (1956), and the General Problem Solver in 1957.[3] In 1958, John McCarthy and Marvin Minsky started the MIT Artificial Intelligence lab with $50,000.[4] John McCarthy also created LISP in the summer of 1958, a programming language still important in artificial intelligence research.[5]
In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 80s investors became disillusioned and withdrew funding again.
McCorduck (2004) writes "artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized," expressed in humanity's myths, legends, stories, speculation and clockwork automatons.
Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion's Galatea.[7] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān's Takwin, Paracelsus' homunculus and Rabbi Judah Loew's Golem.[8] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots), and speculation, such as Samuel Butler's "Darwin among the Machines." AI has continued to be an important element of science fiction into the present.
Realistic humanoid automatons were built by craftsmen from every civilization, including Yan Shi,[11] Hero of Alexandria,[12] Al-Jazari and Wolfgang von Kempelen.[14] The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion; Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it."[15][16]
Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical (or "formal") reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), the Muslim mathematician al-Khwārizmī (who developed algebra and gave his name to "algorithm") and European scholastic philosophers such as William of Ockham and Duns Scotus.[17]
Majorcan philosopher Ramon Llull (1232-1315) developed several logical machines devoted to the production of knowledge by logical means;[18] Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such ways as to produce all possible knowledge.[19] Llull's work had a great influence on Gottfried Leibniz, who redeveloped his ideas.[20]
In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.[21] Hobbes famously wrote in Leviathan: "reason is nothing but reckoning".[22] Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, to sit down to their slates, and to say to each other (with a friend as witness, if they liked): Let us calculate."[23] These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.
In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole's The Laws of Thought and Frege's Begriffsschrift. Building on Frege's system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica, in 1913. Inspired by Russell's success, David Hilbert challenged the mathematicians of the 1920s and 30s to answer this fundamental question: "can all of mathematical reasoning be formalized?"[17] His question was answered by Gödel's incompleteness proof, Turing's machine and Church's lambda calculus.[17][24] Their answer was surprising in two ways.
First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI), their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine, a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.[17][26]
Calculating machines were built in antiquity and improved throughout history by many mathematicians, including (once again) philosopher Gottfried Leibniz. In the early 19th century, Charles Babbage designed a programmable computer (the Analytical Engine), although it was never built. Ada Lovelace speculated that the machine "might compose elaborate and scientific pieces of music of any degree of complexity or extent".[27] (She is often credited as the first programmer because of a set of notes she wrote that completely detail a method for calculating Bernoulli numbers with the Engine.)
The first modern computers were the massive code breaking machines of the Second World War (such as Z3, ENIAC and Colossus). The latter two of these machines were based on the theoretical foundation laid by Alan Turing[28] and developed by John von Neumann.[29]
In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of artificial intelligence research was founded as an academic discipline in 1956.
The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 30s, 40s and early 50s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener's cybernetics described control and stability in electrical networks. Claude Shannon's information theory described digital signals (i.e., all-or-nothing signals). Alan Turing's theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an electronic brain.[30]
Examples of work in this vein include robots such as W. Grey Walter's turtles and the Johns Hopkins Beast. These machines did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry.[31]
Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. They were the first to describe what later researchers would call a neural network.[32] One of the students inspired by Pitts and McCulloch was a young Marvin Minsky, then a 24-year-old graduate student. In 1951 (with Dean Edmonds) he built the first neural net machine, the SNARC.[33] Minsky was to become one of the most important leaders and innovators in AI for the next 50 years.
In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines that think.[34] He noted that "thinking" is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was "thinking". This simplified version of the problem allowed Turing to argue convincingly that a "thinking machine" was at least plausible and the paper answered all the most common objections to the proposition.[35] The Turing Test was the first serious proposal in the philosophy of artificial intelligence.
In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[36] Arthur Samuel's checkers program, developed in the middle 50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur.[37] Game AI would continue to be used as a measure of progress in AI throughout its history.
When access to digital computers became possible in the middle fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines.[38]
In 1955, Allen Newell and (future Nobel Laureate) Herbert A. Simon created the "Logic Theorist" (with help from J. C. Shaw). The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead's Principia Mathematica, and find new and more elegant proofs for some.[39] Simon said that they had "solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind."[40] (This was an early statement of the philosophical position John Searle would later call "Strong AI": that machines can contain minds just as human bodies do.)[41]
The Dartmouth Conference of 1956[42] was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathan Rochester of IBM. The proposal for the conference included this assertion: "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it".[43] The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research.[44] At the conference Newell and Simon debuted the "Logic Theorist" and McCarthy persuaded the attendees to accept "Artificial Intelligence" as the name of the field.[45] The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI.[46]
The years after the Dartmouth conference were an era of discovery, of sprinting across new ground. The programs that were developed during this time were, to most people, simply "astonishing":[47] computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such "intelligent" behavior by machines was possible at all.[48] Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years.[49] Government agencies like ARPA poured money into the new field.[50]
There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these:
Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called "reasoning as search".[51]
The principal difficulty was that, for many problems, the number of possible paths through the "maze" was simply astronomical (a situation known as a "combinatorial explosion"). Researchers would reduce the search space by using heuristics or "rules of thumb" that would eliminate those paths that were unlikely to lead to a solution.[52]
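A toy sketch of this "reasoning as search" pattern is shown below; the puzzle, the move set and the distance heuristic are invented for illustration and have nothing to do with the historical programs, but the shape of the algorithm is the same: step forward, order the options with a rule of thumb, and backtrack at dead ends.

```python
def search(state, goal_test, successors, heuristic, path=None, limit=12):
    """Depth-first search with backtracking, expanding the most promising successors first."""
    path = path or [state]
    if goal_test(state):
        return path
    if len(path) > limit:                 # give up on paths that grow too long
        return None
    for nxt in sorted(successors(state), key=heuristic):   # heuristic ordering ("rule of thumb")
        if nxt in path:                    # avoid revisiting states on this path
            continue
        result = search(nxt, goal_test, successors, heuristic, path + [nxt], limit)
        if result:                         # success: propagate the solution upward
            return result
    return None                            # dead end: backtrack

# Toy problem: reach 20 from 3 using "+1" or "*2" moves.
print(search(3,
             goal_test=lambda n: n == 20,
             successors=lambda n: [n + 1, n * 2],
             heuristic=lambda n: abs(20 - n)))   # one valid (not necessarily shortest) move sequence
```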
Newell and Simon tried to capture a general version of this algorithm in a program called the "General Problem Solver".[53] Other "searching" programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter's Geometry Theorem Prover (1958) and SAINT, written by Minsky's student James Slagle (1961).[54] Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of their robot Shakey.[55]
An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow's program STUDENT, which could solve high school algebra word problems.[56]
A semantic net represents concepts (e.g. "house", "door") as nodes and relations among concepts (e.g. "has-a") as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian[57] and the most successful (and controversial) version was Roger Schank's conceptual dependency theory.[58]
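In data-structure terms, a semantic net is just a labeled graph. The following toy fragment (concepts and relations chosen purely for illustration) shows the idea:

```python
# Nodes are concepts, labeled edges are relations; a tiny lookup answers "what does X have?"
semantic_net = {
    "house":    {"has-a": ["door", "roof"], "is-a": ["building"]},
    "door":     {"has-a": ["handle"], "part-of": ["house"]},
    "building": {},
    "roof":     {},
    "handle":   {},
}

def related(concept, relation, net=semantic_net):
    """Follow one kind of link out of a concept node."""
    return net.get(concept, {}).get(relation, [])

print(related("house", "has-a"))  # ['door', 'roof']
```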
Joseph Weizenbaum's ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.[59]
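The mechanism behind that illusion was pattern matching plus canned templates. A minimal sketch in that style follows; these rules are invented for illustration and are much cruder than Weizenbaum's actual script, which also swapped pronouns ("my" to "your") when echoing text back.

```python
import re

# A few illustrative ELIZA-style rules: match a pattern, echo part of the input inside a template.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)",   re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)",    re.I), "Tell me more about your {0}."),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."   # default canned response

print(respond("I am worried about my exams"))  # How long have you been worried about my exams?
```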
In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a "blocks world," which consists of colored blocks of various shapes and sizes arrayed on a flat surface.[60]
This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team), Adolfo Guzman, David Waltz (who invented "constraint propagation"), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd's SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them.[61]
The first generation of AI researchers made bold, highly optimistic predictions about their work, anticipating that machines with human-level intelligence were only a matter of years away.
In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund Project MAC, which subsumed the "AI Group" founded by Minsky and McCarthy five years earlier. DARPA continued to provide three million dollars a year until the 70s.[66] DARPA made similar grants to Newell and Simon's program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963).[67] Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965.[68] These four institutions would continue to be the main centers of AI research (and funding) in academia for many years.[69]
The money was proffered with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should "fund people, not projects!" and allowed researchers to pursue whatever directions might interest them.[70] This created a freewheeling atmosphere at MIT that gave birth to the hacker culture,[71] but this "hands off" approach would not last.
In the 70s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.[72] At the same time, the field of connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky's devastating criticism of perceptrons.[73] Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.[74]
In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, "toys".[75] AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.[76]
The agencies which funded AI research (such as the British government, DARPA and NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support.[84] In 1973, the Lighthill report on the state of AI research in England criticized the utter failure of AI to achieve its "grandiose objectives" and led to the dismantling of AI research in that country.[85] (The report specifically mentioned the combinatorial explosion problem as a reason for AI's failings.)[86] DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars.[87] By 1974, funding for AI projects was hard to find.
Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. "Many researchers were caught up in a web of increasing exaggeration."[88] However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund "mission-oriented direct research, rather than basic undirected research". Funding for the creative, freewheeling exploration that had gone on in the 60s would not come from DARPA. Instead, the money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems.[89]
Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel's incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[90] Hubert Dreyfus ridiculed the broken promises of the 60s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little "symbol processing" and a great deal of embodied, instinctive, unconscious "know how".[91][92] John Searle's Chinese Room argument, presented in 1980, attempted to show that a program could not be said to "understand" the symbols that it uses (a quality called "intentionality"). If the symbols have no meaning for the machine, Searle argued, then the machine cannot be described as "thinking".[93]
These critiques were not taken seriously by AI researchers, often because they seemed so far off the point. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference "know how" or "intentionality" made to an actual computer program. Minsky said of Dreyfus and Searle, "they misunderstand, and should be ignored."[94] Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers "dared not be seen having lunch with me."[95] Joseph Weizenbaum, the author of ELIZA, felt his colleagues' treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus' positions, he "deliberately made it plain that theirs was not the way to treat a human being."[96]
Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote DOCTOR, a chatterbot therapist. Weizenbaum was disturbed that Colby saw his mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human Reason which argued that the misuse of artificial intelligence has the potential to devalue human life.[97]
A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that "perceptron may eventually be able to learn, make decisions, and translate languages." An active research program into the paradigm was carried out throughout the 60s but came to a sudden halt with the publication of Minsky and Papert's 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Frank Rosenblatt's predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was done in connectionism for 10 years. Eventually, a new generation of researchers would revive the field and thereafter it would become a vital and useful part of artificial intelligence. Rosenblatt would not live to see this, as he died in a boating accident shortly after the book was published.[73]
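For context, a single perceptron is a tiny program. The sketch below trains one on the logical AND function using a Rosenblatt-style update rule (the learning rate and epoch count are arbitrary choices); Minsky and Papert's point was that a single unit like this cannot represent functions such as XOR.

```python
# A single perceptron trained on logical AND with the classic error-driven update rule.
def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # 0 if correct, +1/-1 if wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
# Each input paired with the learned output: the AND truth table is reproduced.
print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in AND])
```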
Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[98] In 1963, J. Alan Robinson discovered a simple method to implement deduction on computers: the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 60s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[99] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to a collaboration with French researchers Alain Colmerauer and Philippe Roussel, who created the successful logic programming language Prolog.[100] Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permits tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[101]
Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[102] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problemsnot machines that think as people do.[103]
Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that required a machine to think like a person. In order to use ordinary concepts like "chair" or "restaurant" they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise."[104] Schank described their "anti-logic" approaches as "scruffy", as opposed to the "neat" paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[105]
In 1975, in a seminal paper, Minsky noted that many of his fellow "scruffy" researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be "logical", but these structured sets of assumptions are part of the context of everything we say and think. He called these structures "frames". Schank used a version of frames he called "scripts" to successfully answer questions about short stories in English.[106] Many years later object-oriented programming would adopt the essential idea of "inheritance" from AI research on frames.
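A frame can be pictured as a bundle of slots with default values, where a more specific frame overrides the defaults of a more general one. The toy fragment below (frame and slot names invented for illustration) captures the bird example and the inheritance idea that object-oriented programming later adopted.

```python
# Toy frames: default slots plus inheritance of defaults from a parent frame.
FRAMES = {
    "bird":    {"is-a": None,   "flies": True, "eats": "worms"},
    "penguin": {"is-a": "bird", "flies": False},   # overrides the "flies" default
}

def slot(frame, name):
    """Look the slot up in the frame itself, falling back to its parent's default."""
    while frame is not None:
        if name in FRAMES[frame]:
            return FRAMES[frame][name]
        frame = FRAMES[frame]["is-a"]
    return None

print(slot("penguin", "flies"))  # False  (local override)
print(slot("penguin", "eats"))   # worms  (default inherited from "bird")
```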
In the 1980s a form of AI program called "expert systems" was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.
An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.[107]
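At their core, such systems chain IF-THEN rules over a set of known facts until nothing new can be concluded. The sketch below is a toy forward-chaining loop with invented rules; it is not MYCIN's or Dendral's actual knowledge base, just the general shape of the technique.

```python
# Toy forward-chaining rule engine: IF all conditions are known facts THEN add the conclusion.
RULES = [
    ({"fever", "infection_suspected"}, "order_blood_culture"),
    ({"blood_culture_positive", "gram_negative"}, "suggest_antibiotic_A"),
]

def forward_chain(facts, rules=RULES):
    facts = set(facts)
    changed = True
    while changed:                       # keep firing rules until no rule adds a new fact
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "infection_suspected"}))
# {'fever', 'infection_suspected', 'order_blood_culture'}
```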
Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.[108]
In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986.[109] Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it going to in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.[110]
The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. "AI researchers were beginning to suspect (reluctantly, for it violated the scientific canon of parsimony) that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,"[111] writes Pamela McCorduck. "[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay".[112] Knowledge-based systems and knowledge engineering became a major focus of AI research in the 1980s.[113]
The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut: the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades.[114]
The chess-playing programs HiTech and Deep Thought defeated chess masters in 1989. Both were developed by Carnegie Mellon University; Deep Thought's development paved the way for Deep Blue.[115]
In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.[116] Much to the chagrin of scruffies, they chose Prolog as the primary computer language for the project.[117]
Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or "MCC") to fund large-scale projects in AI and information technology.[118][119] DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.[120]
In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a "Hopfield net") could learn and process information in a completely new way. Around the same time, David Rumelhart popularized a new method for training neural networks called "backpropagation" (discovered years earlier by Paul Werbos). These two discoveries revived the field of connectionism which had been largely abandoned since 1970.[119][121]
The new field was unified and inspired by the appearance of Parallel Distributed Processing in 1986, a two-volume collection of papers edited by Rumelhart and psychologist James McClelland. Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.[119][122]
The business community's fascination with AI rose and fell in the 80s in the classic pattern of an economic bubble. The collapse was in the perception of AI by government agencies and investors; the field continued to make advances despite the criticism. Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.
The term "AI winter" was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow.[123] Their fears were well founded: in the late 80s and early 90s, AI suffered a series of financial setbacks.
The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.[124]
Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.[125]
In the late 80s, the Strategic Computing Initiative cut funding to AI "deeply and brutally." New leadership at DARPA had decided that AI was not "the next wave" and directed funds towards projects that seemed more likely to produce immediate results.[126]
By 1991, the impressive list of goals penned in 1981 for Japan's Fifth Generation Project had not been met. Indeed, some of them, like "carry on a casual conversation" had not been met by 2010.[127] As with other AI projects, expectations had run much higher than what was actually possible.[127]
In the late 80s, several researchers advocated a completely new approach to artificial intelligence, based on robotics.[128] They believed that, to show real intelligence, a machine needs to have a body: it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher-level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec's paradox). They advocated building intelligence "from the bottom up."[129]
The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 70s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy's logic and Minsky's frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr's work would be cut short by leukemia in 1980.)[130]
In a 1990 paper, "Elephants Don't Play Chess,"[131] robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since "the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough."[132] In the 80s and 90s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.[133]
The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI's failure to fulfill the dream of human level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of "artificial intelligence".[134] AI was both more cautious and more successful than it had ever been.
On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[135] The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second. The event was broadcast live over the internet and received over 74 million hits.[136]
In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[137] Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[138] In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[139]
These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of modern computers.[140] In fact, Deep Blue's computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951.[141] This dramatic increase is measured by Moore's law, which predicts that the speed and memory capacity of computers doubles every two years. The fundamental problem of "raw computer power" was slowly being overcome.
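As a rough back-of-the-envelope check (not a precise benchmark), doubling every two years over the 46 years between 1951 and 1997 gives a factor in the same ballpark as the quoted 10-million-fold speedup:

```python
# Doubling every two years from 1951 to 1997: 2 ** ((1997 - 1951) / 2) = 2 ** 23
factor = 2 ** ((1997 - 1951) / 2)
print(f"{factor:,.0f}")  # 8,388,608 -- roughly the order of magnitude quoted above
```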
A new paradigm called "intelligent agents" became widely accepted during the 90s.[142] Although earlier researchers had proposed modular "divide and conquer" approaches to AI,[143] the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell and others brought concepts from decision theory and economics into the study of AI.[144] When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete.
An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are "intelligent agents", as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as "the study of intelligent agents". This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence.[145]
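Stated as code, the definition is little more than an interface: a percept comes in, an action goes out, and the action is chosen to serve some measure of success. The sketch below (class and field names are invented for illustration) shows how even something as simple as a thermostat fits the definition.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Minimal 'intelligent agent' interface: perceive the environment, choose an action."""
    @abstractmethod
    def act(self, percept):
        ...

class ThermostatAgent(Agent):
    """A trivially simple agent: its action depends on its percept and serves an explicit goal
    (keep the room near a setpoint), so it satisfies the paradigm's definition."""
    def __init__(self, setpoint=21.0):
        self.setpoint = setpoint

    def act(self, percept):
        temperature = percept["temperature"]
        if temperature < self.setpoint - 1:
            return "heat_on"
        if temperature > self.setpoint + 1:
            return "heat_off"
        return "no_op"

print(ThermostatAgent().act({"temperature": 18.5}))  # heat_on
```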
The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell's SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[144][146]
AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past.[147] There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous "scientific" discipline. Russell & Norvig (2003) describe this as nothing less than a "revolution" and "the victory of the neats".[148][149]
Judea Pearl's highly influential 1988 book[150] brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for "computational intelligence" paradigms like neural networks and evolutionary algorithms.[148]
Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems[151] and their solutions proved to be useful throughout the technology industry,[152] such as data mining, industrial robotics, logistics,[153] speech recognition,[154] banking software,[155] medical diagnosis[155] and Google's search engine.[156]
The field of AI received little or no credit for these successes. Many of AI's greatest innovations have been reduced to the status of just another item in the tool chest of computer science.[157] Nick Bostrom explains: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[158]
Many researchers in AI in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may be because they considered their field to be fundamentally different from AI, but the new names also helped to procure funding. In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research, as the New York Times reported in 2005: "Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."[159][160][161]
In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on a belief shared by many leading AI researchers that such a machine would exist by the year 2001.[162]
Marvin Minsky asks, "So the question is why didn't we get HAL in 2001?"[163] Minsky believes that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blames the qualification problem.[164] For Ray Kurzweil, the issue is computer power and, using Moore's Law, he predicts that machines with human-level intelligence will appear by 2029.[165] Jeff Hawkins argues that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[166] There are many other explanations and for each there is a corresponding research program underway.
Go here to read the rest:
History of artificial intelligence - Wikipedia, the free ...
Posted in Ai
Ai (poet) – Wikipedia, the free encyclopedia
Posted: at 11:03 pm
Ai Ogawa (October 21, 1947 – March 20, 2010),[1][2][3][4] born Florence Anthony, was an American poet and educator. She won the 1999 National Book Award for Poetry for Vice: New and Selected Poems.[5] Ai is known for her mastery of the dramatic monologue as a poetic form, as well as for taking on dark, controversial topics in her work.[1]
Ai, who described herself as 1/2 Japanese, 1/8 Choctaw-Chickasaw, 1/4 Black, 1/16 Irish, and Southern Cheyenne and Comanche, was born in Albany, Texas,[1][2][3][4][6][7] in 1947, and she grew up in Tucson, Arizona. She was also raised in Los Angeles, Las Vegas, and San Francisco with her mother and second stepfather, Sutton Hayes. In 1959, a couple of years after her mother's divorce from Hayes, they moved back to Tucson, Arizona, where she completed high school and attended the University of Arizona, majoring in English and Oriental Studies with a concentration in Japanese and a minor in Creative Writing, which she would fully commit to toward the end of her degree.[8] Before starting college, one night during dinner with her mother and third stepfather, Ai learned that her biological father was Japanese. Known as Florence Hayes throughout her childhood and undergraduate years, it was not until graduate school, when Ai was going to switch her last name back to Anthony, that her mother finally told her more about her past: she had had an affair with a Japanese man, Michael Ogawa, after meeting him at a streetcar stop. Learning of the affair had led Ai's first stepfather, whose last name was "Anthony," to beat her mother until family intervened and she was taken to Texas, where her stepfather eventually followed after Ai's birth. Because her mother was still legally married to Anthony at the time, his last name was put on Ai's birth certificate.[9]
The poverty Ai experienced during her childhood affected her and her writing.[10] Ai credited her first writing experience to an assignment in her Catholic school English class to write a letter from the perspective of a martyr. Two years after that experience, she began actively writing at the age of 14.[8] History had been one of her many interests since high school.[9]
From 1969 to 1971, Ai attended the University of California at Irvine's M.F.A. program, where she worked under the likes of Charles Wright and Donald Justice.[8][9] She is the author of No Surrender (2010), which was published posthumously; Dread (W. W. Norton & Co., 2003); Vice (1999), which won the National Book Award;[5] Greed (1993); Fate (1991); Sin (1986), which won an American Book Award from the Before Columbus Foundation; Killing Floor (1979), which was the 1978 Lamont Poetry Selection of the Academy of American Poets; and Cruelty (1973).
She also received awards from the Guggenheim Foundation, the National Endowment for the Arts, the Bunting Fellowship Program at Radcliffe College and from various universities. She was a visiting instructor at Binghamton University, State University of New York for the 1973-74 academic year. After winning the National Book Award for "Vice" she became a tenured professor and the vice president of the Native American Faculty and Staff Association at Oklahoma State University and lived in Stillwater, Oklahoma until her death.[11][12]
Ai considered herself "simply a writer" rather than a spokesperson for any particular group.[13]
In 1973, she legally changed her last name to Ogawa and her middle name to "Ai" (愛), which translates to "love" in Japanese and which she had been using as a pen name since 1969.[9]
Ai was admitted to the hospital on March 17, 2010 with pneumonia. She died three days later, on March 20, 2010, at age 62 in Stillwater, Oklahoma,[14] from what turned out to be complications of an advanced, previously undiagnosed breast cancer.[15][16]
More here:
Posted in Ai
AI file extension – Open, view and convert .ai files
Posted: at 11:03 pm
The .ai file extension is associated with Adobe Illustrator, the well-known vector graphics editor for the Macintosh and Windows platforms.
The AI file format is a widely used format for the exchange of 2D objects. Basic files in this format are simple to write, but files created by applications implementing the full AI specification can be quite large and complex and may be slow to render.
Simple *.ai files are easy to construct, and a program can create files that can be read by any AI reader or printed on any PostScript printer. Reading AI files is another matter entirely: certain operations may be very difficult for a rendering application to implement or simulate. In light of this, developers often choose not to render the image from the PostScript-subset line data in the file. However, almost all of the image can usually be reconstructed using simple operations.
The *.ai files consist of a series of ASCII lines, which may be comments, data, commands, or combinations of commands and data. In current versions this data is based on the PDF language specification, while older versions of Adobe Illustrator used a format that is a variant of the Adobe Encapsulated PostScript (EPS) format.
If EPS is a slightly limited subset of full PostScript, then the Adobe Illustrator AI format is a strictly limited, highly simplified subset of EPS. While EPS can contain virtually any PostScript command that's not on the verboten list and can include elaborate program flow logic that determines what gets printed when, an AI file is limited to a much smaller number of drawing commands and contains no programming logic at all. For all practical purposes, each unit of "code" in an AI file represents a drawing object. The program importing the AI reads each object in sequence, start to finish, no detours, no logical side-trips.
MIME: application/postscript
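As a hedged, practical aside (not from the article above): because legacy .ai files are EPS-flavoured PostScript and modern ones are PDF-based, a program can often tell the two apart just by peeking at the leading bytes. The sketch below relies on the standard "%!PS-Adobe" and "%PDF" header prefixes and should be treated as a heuristic only.

```python
def sniff_ai_flavour(path):
    """Guess whether an .ai file is legacy EPS-flavoured or modern PDF-based by its magic bytes."""
    with open(path, "rb") as fh:
        head = fh.read(16)
    if head.startswith(b"%PDF"):
        return "pdf-based .ai (modern Illustrator)"
    if head.startswith(b"%!PS-Adobe"):
        return "PostScript/EPS-flavoured .ai (older Illustrator)"
    return "unknown"

# print(sniff_ai_flavour("drawing.ai"))  # hypothetical file name
```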
Go here to see the original:
Posted in Ai