The Prometheus League
Breaking News and Updates
Daily Archives: March 14, 2024
Among the A.I. Doomsayers – The New Yorker
Posted: March 14, 2024 at 12:11 am
Katja Grace's apartment, in West Berkeley, is in an old machinist's factory, with pitched roofs and windows at odd angles. It has terra-cotta floors and no central heating, which can create the impression that you've stepped out of the California sunshine and into a duskier place, somewhere long ago or far away. Yet there are also some quietly futuristic touches. High-capacity air purifiers thrumming in the corners. Nonperishables stacked in the pantry. A sleek white machine that does lab-quality RNA tests. The sorts of objects that could portend a future of tech-enabled ease, or one of constant vigilance.
Grace, the lead researcher at a nonprofit called A.I. Impacts, describes her job as thinking about whether A.I. will destroy the world. She spends her time writing theoretical papers and blog posts on complicated decisions related to a burgeoning subfield known as A.I. safety. She is a nervous smiler, an oversharer, a bit of a mumbler; she's in her thirties, but she looks almost like a teen-ager, with a middle part and a round, open face. The apartment is crammed with books, and when a friend of Grace's came over, one afternoon in November, he spent a while gazing, bemused but nonjudgmental, at a few of the spines: "Jewish Divorce Ethics," "The Jewish Way in Death and Mourning," "The Death of Death." Grace, as far as she knows, is neither Jewish nor dying. She let the ambiguity linger for a moment. Then she explained: her landlord had wanted the possessions of the previous occupant, his recently deceased ex-wife, to be left intact. "Sort of a relief, honestly," Grace said. "One set of decisions I don't have to make."
She was spending the afternoon preparing dinner for six: a yogurt-and-cucumber salad, Impossible beef gyros. On one corner of a whiteboard, she had split her pre-party tasks into painstakingly small steps ("Chop salad," "Mix salad," "Mold meat," "Cook meat"); on other parts of the whiteboard, she'd written more gnomic prompts ("Food area," "Objects," "Substances"). Her friend, a cryptographer at Android named Paul Crowley, wore a black T-shirt and black jeans, and had dyed black hair. I asked how they knew each other, and he responded, "Oh, we've crossed paths for years, as part of the scene."
It was understood that "the scene" meant a few intertwined subcultures known for their exhaustive debates about recondite issues (secure DNA synthesis, shrimp welfare) that members consider essential, but that most normal people know nothing about. For two decades or so, one of these issues has been whether artificial intelligence will elevate or exterminate humanity. Pessimists are called A.I. safetyists, or decelerationists - or, when they're feeling especially panicky, A.I. doomers. They find one another online and often end up living together in group houses in the Bay Area, sometimes even co-parenting and co-homeschooling their kids. Before the dot-com boom, the neighborhoods of Alamo Square and Hayes Valley, with their pastel Victorian row houses, were associated with staid domesticity. Last year, referring to A.I. hacker houses, the San Francisco Standard semi-ironically called the area "Cerebral Valley."
A camp of techno-optimists rebuffs A.I. doomerism with old-fashioned libertarian boomerism, insisting that all the hand-wringing about existential risk is a kind of mass hysteria. They call themselves effective accelerationists, or e/accs (pronounced "e-acks"), and they believe A.I. will usher in a utopian future - interstellar travel, the end of disease - as long as the worriers get out of the way. On social media, they troll doomsayers as "decels," "psyops," "basically terrorists," or, worst of all, "regulation-loving bureaucrats." "We must steal the fire of intelligence from the gods [and] use it to propel humanity towards the stars," a leading e/acc recently tweeted. (And then there are the normies, based anywhere other than the Bay Area or the Internet, who have mostly tuned out the debate, attributing it to sci-fi fume-huffing or corporate hot air.)
Grace's dinner parties, semi-underground meetups for doomers and the doomer-curious, have been described as "a nexus of the Bay Area AI scene." At gatherings like these, it's not uncommon to hear someone strike up a conversation by asking, "What are your timelines?" or "What's your p(doom)?" Timelines are predictions of how soon A.I. will pass particular benchmarks, such as writing a Top Forty pop song, making a Nobel-worthy scientific breakthrough, or achieving artificial general intelligence, the point at which a machine can do any cognitive task that a person can do. (Some experts believe that A.G.I. is impossible, or decades away; others expect it to arrive this year.) P(doom) is the probability that, if A.I. does become smarter than people, it will, either on purpose or by accident, annihilate everyone on the planet. For years, even in Bay Area circles, such speculative conversations were marginalized. Last year, after OpenAI released ChatGPT, a language model that could sound uncannily natural, they suddenly burst into the mainstream. Now there are a few hundred people working full time to save the world from A.I. catastrophe. Some advise governments or corporations on their policies; some work on technical aspects of A.I. safety, approaching it as a set of complex math problems; Grace works at a kind of think tank that produces research on high-level questions, such as "What roles will AI systems play in society?" and "Will they pursue goals?" When they're not lobbying in D.C. or meeting at an international conference, they often cross paths in places like Grace's living room.
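Since a p(doom) is just a subjective probability, forecasters in this scene sometimes pool several people's numbers into a single estimate. As a purely illustrative sketch (nothing in the article describes this), here is one common pooling method, the geometric mean of odds, applied to invented values:

```python
# Illustrative sketch only: pooling hypothetical p(doom) estimates by
# averaging log-odds (the geometric mean of odds), a standard way to
# combine probability forecasts. The input values are invented.
import math

def pool_probabilities(probabilities):
    """Average the forecasts in log-odds space, then convert back to a probability."""
    log_odds = [math.log(p / (1 - p)) for p in probabilities]
    mean_log_odds = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean_log_odds))

# Hypothetical dinner-party answers, spanning the ranges quoted above.
p_dooms = [0.80, 0.10, 0.90, 0.25]
print(f"Pooled p(doom): {pool_probabilities(p_dooms):.2f}")
```

Pooling in log-odds space treats a move from one per cent to two per cent as roughly the same size of update as a move from fifty per cent to sixty-seven per cent, which is closer to how forecasters tend to reason about small probabilities.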
The rest of her guests arrived one by one: an authority on quantum computing; a former OpenAI researcher; the head of an institute that forecasts the future. Grace offered wine and beer, but most people opted for nonalcoholic canned drinks that defied easy description (a fermented energy drink, a hopped tea). They took their Impossible gyros to Grace's sofa, where they talked until midnight. They were courteous, disagreeable, and surprisingly patient about reconsidering basic assumptions. "You can condense the gist of the worry, seems to me, into a really simple two-step argument," Crowley said. "Step one: We're building machines that might become vastly smarter than us. Step two: That seems pretty dangerous."
"Are we sure, though?" Josh Rosenberg, the C.E.O. of the Forecasting Research Institute, said. "About intelligence per se being dangerous?"
Grace noted that not all intelligent species are threatening: "There are elephants, and yet mice still seem to be doing just fine."
"Rabbits are certainly more intelligent than myxomatosis," Michael Nielsen, the quantum-computing expert, said.
Crowley's p(doom) was well above eighty per cent. The others, wary of committing to a number, deferred to Grace, who said that, given "my deep confusion and uncertainty about this - which I think nearly everyone has, at least everyone who's being honest," she could only narrow her p(doom) to between ten and ninety per cent. Still, she went on, "a ten-per-cent chance of human extinction is obviously, if you take it seriously, unacceptably high."
They agreed that, amid the thousands of reactions to ChatGPT, one of the most refreshingly candid assessments came from Snoop Dogg, during an onstage interview. Crowley pulled up the transcript and read aloud. "This is not safe, 'cause the A.I.'s got their own minds, and these motherfuckers are gonna start doing their own shit," Snoop said, paraphrasing an A.I.-safety argument. "Shit, what the fuck?" Crowley laughed. "I have to admit, that captures the emotional tenor much better than my two-step argument," he said. And then, as if to justify the moment of levity, he read out another quote, this one from a 1948 essay by C.S. Lewis: "If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things - praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts - not huddled together like frightened sheep."
Grace used to work for Eliezer Yudkowsky, a bearded guy with a fedora, a petulant demeanor, and a p(doom) of ninety-nine per cent. Raised in Chicago as an Orthodox Jew, he dropped out of school after eighth grade, taught himself calculus and atheism, started blogging, and, in the early two-thousands, made his way to the Bay Area. His best-known works include "Harry Potter and the Methods of Rationality," a piece of fan fiction running to more than six hundred thousand words, and "The Sequences," a gargantuan series of essays about how to sharpen one's thinking. The informal collective that grew up around these writings - first in the comments, then in the physical world - became known as the rationalist community, a small subculture devoted to avoiding the typical failure modes of human reason, often by arguing from first principles or quantifying potential risks. Nathan Young, a software engineer, told me, "I remember hearing about Eliezer, who was known to be a heavy guy, onstage at some rationalist event, asking the crowd to predict if he could lose a bunch of weight. Then the big reveal: he unzips the fat suit he was wearing. He'd already lost the weight. I think his ostensible point was something about how it's hard to predict the future, but mostly I remember thinking, What an absolute legend."
Yudkowsky was a transhumanist: human brains were going to be uploaded into digital brains during his lifetime, and this was great news. He told me recently that "Eliezer ages sixteen through twenty" assumed that A.I. was going to be great fun for everyone forever, and wanted it built as soon as possible. In 2000, he co-founded the Singularity Institute for Artificial Intelligence, to help hasten the A.I. revolution. Still, he decided to do some due diligence. "I didn't see why an A.I. would kill everyone, but I felt compelled to systematically study the question," he said. "When I did, I went, Oh, I guess I was wrong." He wrote detailed white papers about how A.I. might wipe us all out, but his warnings went unheeded. Eventually, he renamed his think tank the Machine Intelligence Research Institute, or MIRI.
The existential threat posed by A.I. had always been among the rationalists' central issues, but it emerged as the dominant topic around 2015, following a rapid series of advances in machine learning. Some rationalists were in touch with Oxford philosophers, including Toby Ord and William MacAskill, the founders of the effective-altruism movement, which studied how to do the most good for humanity (and, by extension, how to avoid ending it). The boundaries between the movements increasingly blurred. Yudkowsky, Grace, and a few others flew around the world to E.A. conferences, where you could talk about A.I. risk without being laughed out of the room.
Philosophers of doom tend to get hung up on elaborate sci-fi-inflected hypotheticals. Grace introduced me to Joe Carlsmith, an Oxford-trained philosopher who had just published a paper about scheming A.I.s that might convince their human handlers they're safe, then proceed to take over. He smiled bashfully as he expounded on a thought experiment in which a hypothetical person is forced to stack bricks in a desert for a million years. "This can be a lot, I realize," he said. Yudkowsky argues that a superintelligent machine could come to see us as a threat, and decide to kill us (by commandeering existing autonomous weapons systems, say, or by building its own). Or our demise could happen in passing: you ask a supercomputer to improve its own processing speed, and it concludes that the best way to do this is to turn all nearby atoms into silicon, including those atoms that are currently people. But the basic A.I.-safety arguments do not require imagining that the current crop of Verizon chatbots will suddenly morph into Skynet, the digital supervillain from "Terminator." To be dangerous, A.G.I. doesn't have to be sentient, or desire our destruction. If its objectives are at odds with human flourishing, even in subtle ways, then, say the doomers, we're screwed.
Artificial Superintelligence Could Arrive by 2027, Scientist Predicts – Futurism
Posted: at 12:11 am
We may not have reached artificial general intelligence (AGI) yet, but according to one of the leading experts in the theoretical field, it may get here sooner rather than later.
During his closing remarks at this year's Beneficial AGI Summit in Panama, computer scientist and haberdashery enthusiast Ben Goertzel said that although people most likely won't build human-level or superhuman AI until 2029 or 2030, there's a chance it could happen as soon as 2027.
After that, the SingularityNET founder said, AGI could then evolve rapidly into artificial superintelligence (ASI), which he defines as an AI with all the combined knowledge of human civilization.
"No one has created human-level artificial general intelligence yet; nobody has a solid knowledge of when we're going to get there," Goertzel told the conference audience. "I mean, there are known unknowns and probably unknown unknowns."
"On the other hand, to me it seems quite plausible we could get to human-level AGI within, let's say, the next three to eight years," he added.
To be fair, Goertzel is far from alone in attempting to predict when AGI will be achieved.
Last fall, for instance, Google DeepMind co-founder Shane Legg reiterated his more than decade-old prediction that there's a 50/50 chance that humans invent AGI by the year 2028. In a tweet from May of last year, "AI godfather" and ex-Googler Geoffrey Hinton said he now predicts, "without much confidence," that AGI is five to 20 years away.
Best known as the creator of Sophia the humanoid robot, Goertzel has long theorized about the date of the so-called "singularity," or the point at which AI reaches human-level intelligence and subsequently surpasses it.
Until the past few years, AGI, as Goertzel and his cohort describe it, seemed like a pipe dream, but with the large language model (LLM) advances made by OpenAI since it thrust ChatGPT upon the world in late 2022, that possibility seems ever closer, although he's quick to point out that LLMs by themselves are not what's going to lead to AGI.
"My own view is once you get to human-level AGI, within a few years you could get a radically superhuman AGI unless the AGI threatens to throttle its own development out of its own conservatism," the AI pioneer added. "I think once an AGI can introspect its own mind, then it can do engineering and science at a human or superhuman level."
"It should be able to make a smarter AGI, then an even smarter AGI, then an intelligence explosion," he added, presumably referring to the singularity.
Naturally, there are a lot of caveats to what Goertzel is preaching, not the least of which is that, by human standards, even a superhuman AI would not have a "mind" the way we do. Then there's the assumption that the technology would keep evolving along a linear path, as if in a vacuum from the rest of human society and the harms we bring to the planet.
All the same, it's a compelling theory, and given how rapidly AI has progressed in just the past few years, his comments shouldn't be dismissed out of hand.
OpenAI, Salesforce and Others Boost Efforts for Ethical AI – PYMNTS.com
Posted: at 12:10 am
In a shift toward ethical technology use, companies across the globe are intensifying their efforts to develop responsible artificial intelligence (AI) systems, aiming to ensure fairness, transparency and accountability in AI applications.
OpenAI, Salesforce and other tech companies recently signed an open letter highlighting a collective responsibility to maximize AI's benefits and mitigate the risks to society. It's the tech industry's latest effort to call for building AI responsibly.
The concept of responsible AI is gaining attention following Elon Musk's recent lawsuit against OpenAI. He accuses the ChatGPT creator of breaking its original promise to operate as a nonprofit, alleging a breach of contract. Musk's concern was that the potential dangers of AI should not be managed by profit-driven giants like Google.
OpenAI has responded aggressively to the lawsuit. The company has released a sequence of emails between Musk and top executives, revealing his initial support for the startup's transition to a profit-making model. Musk's lawsuit accuses OpenAI of violating their original agreement through its partnership with Microsoft, which he says went against the startup's founding as a nonprofit AI research organization. When Musk helped launch OpenAI in 2015, his aim was to create a nonprofit organization that could balance Google's dominance in AI, especially after its acquisition of DeepMind.
The AI firm said in a blog post that it remains committed to a mission to "ensure AGI [artificial general intelligence] benefits all of humanity." The company's mission includes building safe and beneficial AI and helping to create "broadly distributed benefits."
The goals of responsible AI are ambitious but vague. Mistral AI, one of the letter's signatories, wrote that the company strives to democratize data and AI to all organizations and users, and talks about ethical use, accelerating data-driven decision making and unlocking possibilities across industries.
Some observers say there is a long way to go before the goals of responsible AI are broadly achieved.
"Unfortunately, companies will not attain it by adopting many of the responsible AI frameworks available today," Kjell Carlsson, head of AI strategy at Domino Data Lab, told PYMNTS in an interview.
"Most of these provide idealistic language but little else. They are frequently disconnected from real-world AI projects, often flawed, and typically devoid of implementable advice."
Carlsson said that building responsible AI involves developing and improving AI models to ensure that they perform accurately and safely and comply with relevant data and AI regulations. The process entails appointing leaders in AI responsibility and training team members on ethical AI practices, including model validation, bias mitigation, and change monitoring.
"It involves establishing processes for governing data, models and other artifacts and ensuring that appropriate steps are taken and approved at each stage of the AI lifecycle," he added. "And critically, it involves implementing the technology capabilities that enable practitioners to leverage responsible AI tools and automate the necessary governance, monitoring and process orchestration at scale."
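Carlsson's description stays at the process level, but the approval steps he mentions often come down to automated gates in a pipeline. The sketch below is a minimal, hypothetical example of such a gate under assumed metrics and thresholds; it is not an implementation of any framework named in this article:

```python
# Hypothetical release gate for a model: approve only if it clears
# assumed accuracy and fairness thresholds. Metric choice and cutoffs
# are illustrative, not prescribed by any framework cited here.
from dataclasses import dataclass

@dataclass
class ValidationReport:
    accuracy: float
    # Absolute gap in positive-prediction rates between two groups.
    demographic_parity_gap: float

def approve_for_release(report: ValidationReport,
                        min_accuracy: float = 0.85,
                        max_parity_gap: float = 0.05) -> bool:
    """Return True only if both the accuracy and bias checks pass."""
    return (report.accuracy >= min_accuracy
            and report.demographic_parity_gap <= max_parity_gap)

# An accurate model that fails the bias check is still held back.
report = ValidationReport(accuracy=0.91, demographic_parity_gap=0.12)
print(approve_for_release(report))  # False
```

The same pattern extends to the change monitoring Carlsson mentions: re-run the gate whenever the model or its training data changes, and log each result so the decision trail can be audited.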
While the aims of responsible AI can be a bit fuzzy, the technology can have a tangible impact on lives, Kate Kalcevich of the digital accessibility company Fable pointed out in an interview with PYMNTS.
She said that if not used responsibly and ethically, AI technologies could create barriers to people with disabilities. For example, she questioned whether it would be ethical to use a video avatar that isn't disabled to represent a person with a disability.
"My biggest concern would be access to critical services such as healthcare, education and employment," she added. "For example, if AI-based chat or phone programs are used to book medical appointments or for job interviews, people with communication disabilities could be excluded if the AI tools aren't designed with access needs in mind."