The Prometheus League
Breaking News and Updates
Daily Archives: October 18, 2023
AI challenge to deliver better healthcare | Western Australian … – Government of Western Australia
Posted: October 18, 2023 at 2:23 am
The Cook Government's Future Health Research and Innovation (FHRI) Fund is challenging innovators to explore how Generative artificial intelligence (AI) can improve healthcare delivery in Western Australia.
Grants worth a total of $937,312 have been awarded to fund 19 projects exploring the opportunities and benefits of AI in healthcare, from reducing administrative burden to increasing the accuracy of diagnosis and treatment.
Foresight Medical's Professor Yogesan Kanagasingam is working on a non-invasive test for early diagnosis of Alzheimer's disease. Currently, the disease is definitively diagnosed through brain imaging techniques that require significant visible brain changes to have occurred before the disease can be detected.
The new approach will use Generative AI to measure the constriction and dilation of the pupil in response to light flashes, a process that is altered in people with Alzheimer's due to the degeneration of cholinergic neurons.
This test can be carried out using a smartphone and has the potential to revolutionise early detection of Alzheimer's, enabling earlier interventions and potentially changing the course of cognitive decline.
South Metropolitan Health Service's Dr Jafri Kuthubutheen is looking to use AI to improve access to ear health care in rural and remote WA.
This project involves developing a Generative AI model to identify ear conditions based on otoscopic images taken via a video otoscope, improving access to care for patients who need specialist ENT services in the Kimberley.
It's a potential game-changer for people living in the remote regions of the State, where ear disease disproportionately affects Aboriginal and Torres Strait Islander populations and access to timely and accurate specialist care is limited.
In the first stage of the challenge, successful applicants will each receive up to $50,000 to conduct a feasibility study using Generative AI to address challenges across health and medical research, health and medical innovation, healthcare service delivery, and health and medical education and training.
Stage 1 grant recipients can then apply for Stage 2, in which four innovators will be awarded up to $500,000 each to fully develop their Generative AI solutions and implement them in WA.
The FHRI Fund is administered through the Department of Health's Office of Medical Research and Innovation; the full list of recipients is available on the FHRI Fund website.
Comments attributed to Medical Research Minister Stephen Dawson:
"The Cook Government is committed to supporting and developing important health and medical innovation and I will be interested to see what opportunities Generative AI can bring to the sector.
"This technology involves uncertainties and risks, but also has the potential to dramatically increase efficiency, improve the quality of care, and create value for health care organisations.
"It is important to explore and assess whether this cutting-edge technology will help us to anticipate public health needs and interventions and improve the way we execute our healthcare programs here in WA."
Henry Kissinger: The Path to AI Arms Control – Foreign Affairs Magazine
Posted: at 2:23 am
This year marks the 78th anniversary of the end of the deadliest war in history and the beginning of the longest period in modern times without great-power war. Because World War I had been followed just two decades later by World War II, the specter of World War III, fought with weapons that had become so destructive they could theoretically threaten all of humankind, hung over the decades of the Cold War that followed. When the United States' atomic destruction of Hiroshima and Nagasaki compelled Japan's immediate unconditional surrender, no one thought it conceivable that the world would see a de facto moratorium on the use of nuclear weapons for the next seven decades. It seemed even more improbable that almost eight decades later, there would be just nine nuclear weapons states. The leadership demonstrated by the United States over these decades in avoiding nuclear war, slowing nuclear proliferation, and shaping an international order that provided decades of great-power peace will go down in history as one of America's most significant achievements.
Today, as the world confronts the unique challenges posed by another unprecedented and in some ways even more terrifying technology, artificial intelligence, it is not surprising that many have been looking to history for instruction. Will machines with superhuman capabilities threaten humanity's status as master of the universe? Will AI undermine nations' monopoly on the means of mass violence? Will AI enable individuals or small groups to produce viruses capable of killing on a scale that was previously the preserve of great powers? Could AI erode the nuclear deterrents that have been a pillar of today's world order?
At this point, no one can answer these questions with confidence. But as we have explored these issues for the last two years with a group of technology leaders at the forefront of the AI revolution, we have concluded that the prospects that the unconstrained advance of AI will create catastrophic consequences for the United States and the world are so compelling that leaders in governments must act now. Even though neither they nor anyone else can know what the future holds, enough is understood to begin making hard choices and taking actions today, recognizing that these will be subject to repeated revision as more is discovered.
As leaders make these choices, lessons learned in the nuclear era can inform their decisions. Even adversaries racing to develop and deploy an unprecedented technology that could kill hundreds of millions of people nonetheless discovered islands of shared interests. As duopolists, both the United States and the Soviet Union had an interest in preventing the rapid spread of this technology to other states that could threaten them. Both Washington and Moscow recognized that if nuclear technology fell into the hands of rogue actors or terrorists within their own borders, it could be used to threaten them, and so each developed robust security systems for their own arsenals. But since each could also be threatened if rogue actors in their adversary's society acquired nuclear weapons, both found it in their interests to discuss this risk with each other and describe the practices and technologies they developed to ensure this did not happen. Once the arsenals of their nuclear weapons reached a level at which neither could attack the other without triggering a response that would destroy itself, they discovered the paradoxical stability of mutual assured destruction (MAD). As this ugly reality was internalized, each power learned to limit itself and found ways to persuade its adversary to constrain its initiatives in order to avoid confrontations that could lead to a war. Indeed, leaders of both the U.S. and Soviet governments came to realize that avoiding a nuclear war of which their nation would be the first victim was a cardinal responsibility.
The challenges presented by AI today are not simply a second chapter of the nuclear age. History is not a cookbook with recipes that can be followed to produce a soufflé. The differences between AI and nuclear weapons are at least as significant as the similarities. Properly understood and adapted, however, lessons learned in shaping an international order that has produced nearly eight decades without great-power war offer the best guidance available for leaders confronting AI today.
At this moment, there are just two AI superpowers: the United States and China are the only countries with the talent, research institutes, and mass computing capacity required to train the most sophisticated AI models. This offers them a narrow window of opportunity to create guidelines to prevent the most dangerous advances and applications of AI. U.S. President Joe Biden and Chinese President Xi Jinping should seize this opportunity by holding a summit, perhaps immediately after the Asia-Pacific Economic Cooperation meeting in San Francisco in November, where they could hold extended, direct, face-to-face discussions on what they should see as one of the most consequential issues confronting them today.
After atomic bombs devastated Japanese cities in 1945, the scientists who had opened Pandora's atomic box saw what they had created and recoiled in horror. Robert Oppenheimer, the principal scientist of the Manhattan Project, recalled a line from the Bhagavad Gita: "Now I am become Death, the destroyer of worlds." Oppenheimer became such an ardent advocate of radical measures to control the bomb that he was stripped of his security clearance. The Russell-Einstein Manifesto, signed in 1955 by 11 leading scientists including not just Bertrand Russell and Albert Einstein but also Linus Pauling and Max Born, warned of the frightening powers of nuclear weapons and implored world leaders never to use them.
Although U.S. President Harry Truman never expressed second thoughts about his decision, neither he nor members of his national security team had a viable view of how this awesome technology could be integrated into the postwar international order. Should the United States attempt to maintain its monopoly position as the sole atomic power? Was that even feasible? In pursuit of that objective, could the United States share its technology with the Soviet Union? Did survival in a world with this weapon require leaders to invent some authority superior to national governments? Henry Stimson, Truman's secretary of war (who had just helped achieve victory over both Germany and Japan), proposed that the United States share its monopoly of the atomic bomb with Soviet leader Joseph Stalin and British Prime Minister Winston Churchill to create a great-power condominium that would prevent the spread of nuclear weapons. Truman created a committee, chaired by U.S. Undersecretary of State Dean Acheson, to develop a strategy for pursuing Stimson's proposal.
Acheson essentially agreed with Stimson: the only way to prevent a nuclear arms race ending in catastrophic war would be to create an international authority that would be the sole possessor of atomic weapons. This would require the United States to share its nuclear secrets with the Soviet Union and other members of the UN Security Council, transfer its nuclear weapons to a new UN atomic development authority, and forbid all nations from developing weapons or constructing their own capability to produce weapons-grade nuclear material. In 1946, Truman sent the financier and presidential adviser Bernard Baruch to the UN to negotiate an agreement to implement Acheson's plan. But the proposal was categorically rejected by Andrei Gromyko, the Soviet representative to the UN.
Three years later, when the Soviet Union succeeded in its crash effort to build its own bomb, the United States and the Soviet Union entered what people were starting to call the Cold War: a competition by all means short of bombs and bullets. A central feature of this competition was the drive for nuclear superiority. At their heights, the two superpowers' nuclear arsenals included more than 60,000 weapons, some of them warheads with more explosive power than all the weapons that had been used in all the wars in recorded history. Experts debated whether an all-out nuclear war would mean the end of every living soul on earth.
Over the decades, Washington and Moscow have spent trillions of dollars on their nuclear arsenals. The current annual budget for the U.S. nuclear enterprise exceeds $50 billion. In the early decades of this race, both the United States and the Soviet Union made previously unimaginable leaps forward in the hope of obtaining a decisive advantage. Increases in weapons' explosive power required the creation of new metrics: from kilotons (equivalent to the energy released by 1,000 tons of TNT) for the original fission weapons to megatons (equivalent to that released by one million tons) for hydrogen fusion bombs. The two sides invented intercontinental missiles capable of delivering warheads to targets on the other side of the planet in 30 minutes, satellites circling the globe at a height of hundreds of miles with cameras that could identify the coordinates of targets within inches, and defenses that could in essence hit a bullet with a bullet. Some observers seriously imagined defenses that would render nuclear weapons, as President Ronald Reagan put it, "impotent and obsolete."
In attempting to shape these developments, strategists developed a conceptual arsenal that distinguished between first and second strikes. They clarified the essential requirements for a reliable retaliatory response. And they developed the nuclear triad (submarines, bombers, and land-based missiles) to ensure that if an adversary were to discover one vulnerability, other components of the arsenal would remain available for a devastating response. Perception of the risks of accidental or unauthorized launches of weapons spurred the invention of permissive action links: electronic locks embedded in nuclear weapons that prevented them from being activated without the right nuclear launch codes. Redundancies were designed to protect against technological breakthroughs that might jeopardize command-and-control systems, which motivated the invention of a computer network that evolved into the Internet. As the strategist Herman Kahn famously put it, they were "thinking about the unthinkable."
At the core of nuclear strategy was the concept of deterrence: preventing an adversary from attacking by threatening costs out of proportion to any conceivable benefit. Successful deterrence, it came to be understood, required not just capability but also credibility. The potential victims needed not only the means to respond decisively but also the will. Strategists refined this basic idea further with concepts such as extended deterrence, which sought to employ a political mechanism (a pledge of protection via alliance) to persuade key states not to build their own arsenals.
By 1962, when U.S. President John F. Kennedy confronted Soviet leader Nikita Khrushchev over nuclear-tipped missiles that the Soviets had placed in Cuba, the U.S. intelligence community estimated that even if Kennedy launched a successful first strike, the Soviet retaliatory response with their existing capabilities might kill 62 million Americans. By 1969, when Richard Nixon became president, the United States needed to rethink its approach. One of us, Kissinger, later described the challenge: "Our defense strategies formed in the period of our superiority had to be reexamined in the harsh light of the new realities. . . . No bellicose rhetoric could obscure the fact that existing nuclear stockpiles were enough to destroy mankind. . . . There could be no higher duty than to prevent the catastrophe of nuclear war."
To make this condition vivid, strategists had created the ironic acronym MAD, the essence of which was summarized by Reagan's oft-repeated one-liner: "A nuclear war cannot be won and must therefore never be fought." Operationally, MAD meant mutual assured vulnerability. While both the United States and the Soviet Union sought to escape this condition, they eventually recognized that they were unable to do so and had to fundamentally reconceptualize their relationship. In 1955, Churchill had noted the supreme irony in which "safety will be the sturdy child of terror, and survival the twin brother of annihilation." Without denying differences in values or compromising vital national interests, deadly rivals had to develop strategies to defeat their adversary by every means possible except all-out war.
One pillar of these strategies was a series of both tacit and explicit constraints now known as arms control. Even before MAD, when each superpower was doing everything it could to achieve superiority, they discovered areas of shared interests. To reduce the risk of mistakes, the United States and the Soviet Union agreed in informal discussions not to interfere with the other's surveillance of their territory. To protect their citizens from radioactive fallout, they banned atmospheric testing. To avoid crisis instability (when one side feels the need to strike first in the belief that the other side is about to), they agreed in the 1972 Anti-Ballistic Missile Treaty to limit missile defenses. In the Intermediate-Range Nuclear Forces Treaty, signed in 1987, Reagan and Soviet leader Mikhail Gorbachev agreed to eliminate intermediate-range nuclear forces. The Strategic Arms Limitation Talks, which resulted in treaties signed in 1972 and 1979, limited increases in missile launchers, and later, the Strategic Arms Reduction Treaty (START), signed in 1991, and New START, signed in 2010, reduced their numbers. Perhaps most consequentially, the United States and the Soviet Union concluded that the spread of nuclear weapons to other states posed a threat to both of them and ultimately risked nuclear anarchy. They brought about what is now known as the nonproliferation regime, the centerpiece of which is the 1968 Nuclear Nonproliferation Treaty, through which 186 countries today have pledged to refrain from developing their own nuclear arsenals.
In current proposals about ways to contain AI, one can hear many echoes of this past. The billionaire Elon Musk's demand for a six-month pause on AI development, the AI researcher Eliezer Yudkowsky's proposal to abolish AI, and the psychologist Gary Marcus's demand that AI be controlled by a global governmental body essentially repeat proposals from the nuclear era that failed. The reason is that each would require leading states to subordinate their own sovereignty. Never in history has one great power, fearing that a competitor might apply a new technology to threaten its survival and security, forgone developing that technology for itself. Even close U.S. allies such as the United Kingdom and France opted to develop their own national nuclear capabilities in addition to relying on the U.S. nuclear umbrella.
To adapt lessons from nuclear history to address the current challenge, it is essential to recognize the salient differences between AI and nuclear weapons. First, whereas governments led the development of nuclear technology, private entrepreneurs, technologists, and companies are driving advances in AI. Scientists working for Microsoft, Google, Amazon, Meta, OpenAI, and a handful of smaller startups are far ahead of any analogous effort in government. Furthermore, these companies are now locked in a gladiatorial struggle among themselves that is unquestionably driving innovation, but at a cost. As these private actors make tradeoffs between risks and rewards, national interests are certain to be underweighted.
Second, AI is digital. Nuclear weapons were difficult to produce, requiring a complex infrastructure to accomplish everything from enriching uranium to designing nuclear weapons. The products were physical objects and thus countable. Where it was feasible to verify what the adversary was doing, constraints emerged. AI represents a distinctly different challenge. Its major evolutions occur in the minds of human beings. Its applicability evolves in laboratories, and its deployment is difficult to observe. Nuclear weapons are tangible; the essence of artificial intelligence is conceptual.
A screen showing Chinese and U.S. flags, Beijing, July 2023
Third, AI is advancing and spreading at a speed that makes lengthy negotiations impossible. Arms control developed over decades. Restraints for AI need to occur before AI is built into the security structure of each society, that is, before machines begin to set their own objectives, which some experts now say is likely to occur in the next five years. The timing demands first a national, then an international, discussion and analysis, as well as a new dynamic in the relationship between government and the private sector.
Fortunately, the major companies that have developed generative AI and made the United States the leading AI superpower recognize that they have responsibilities not just to their shareholders but also to the country and humanity at large. Many have already developed their own guidelines for assessing risk before deployment, reducing bias in training data, and restricting dangerous uses of their models. Others are exploring ways to circumscribe training and impose "know your customer" requirements for cloud computing providers. A significant step in the right direction was the initiative the Biden administration announced in July that brought leaders of seven major AI companies to the White House for a joint pledge to establish guidelines to ensure safety, security, and trust.
As one of us, Kissinger, has pointed out in The Age of AI, it is an urgent imperative to create a systematic study of the long-range implications of AI's evolving, often spectacular inventions and applications. Even while the United States is more divided than it has been since the Civil War, the magnitude of the risks posed by the unconstrained advance of AI demands that leaders in both government and business act now. Each of the companies with the mass computing capability to train new AI models and each company or research group developing new models should create a group to analyze the human and geopolitical implications of its commercial AI operations.
The challenge is bipartisan and requires a unified response. The president and Congress should in that spirit establish a national commission consisting of distinguished nonpartisan former leaders in the private sector, Congress, the military, and the intelligence community. The commission should propose more specific mandatory safeguards. These should include requirements that companies continuously assess the mass computing capabilities needed to train AI models such as GPT-4 and that, before releasing a new model, they stress-test it for extreme risks. Although the task of developing rules will be demanding, the commission would have a model in the National Security Commission on Artificial Intelligence. Its recommendations, released in 2021, provided impetus and direction for the initiatives that the U.S. military and U.S. intelligence agencies are undertaking in the AI rivalry with China.
Even at this early stage, while the United States is still creating its own framework for governing AI at home, it is not too early to begin serious conversations with the world's only other AI superpower. China's national champions in the tech sector, Baidu (the country's top search engine), ByteDance (the creator of TikTok), Tencent (the maker of WeChat), and Alibaba (the leader in e-commerce), are building proprietary Chinese-language analogues of ChatGPT, although the Chinese political system has posed particular difficulties for AI. While China still lags in the technology to make advanced semiconductors, it possesses the essentials to power ahead in the immediate future.
Biden and Xi should thus meet in the near future for a private conversation about AI arms control. November's Asia-Pacific Economic Cooperation meeting in San Francisco offers that opportunity. Each leader should discuss how he personally assesses the risks posed by AI, what his country is doing to prevent applications that pose catastrophic risks, and how his country is ensuring that domestic companies are not exporting risks. To inform the next round of their discussions, they should create an advisory group consisting of U.S. and Chinese AI scientists and others who have reflected on the implications of these developments. This approach would be modeled on existing Track II diplomacy in other fields, where groups are composed of individuals chosen for their judgment and fairness, although not formally endorsed by their government. From our discussions with key scientists in both governments, we are confident that this can be a highly productive discussion.
U.S. and Chinese discussions and actions on this agenda will form only part of the emerging global conversation on AI, including the AI Safety Summit, which the United Kingdom will host in November, and the ongoing dialogue at the UN. Since every country will be seeking to employ AI to enhance the lives of its citizens while ensuring the safety of its own society, in the longer run, a global AI order will be required. Work on it should begin with national efforts to prevent the most dangerous and potentially catastrophic consequences of AI. These initiatives should be complemented by dialogue between scientists of various countries engaged in developing large AI models and members of the national commissions such as the one proposed here. Formal governmental negotiations, initially among countries with advanced AI programs, should seek to establish an international framework, along with an international agency comparable to the International Atomic Energy Agency.
If Biden, Xi, and other world leaders act now to face the challenges posed by AI as squarely as their predecessors did in addressing nuclear threats in earlier decades, will they be as successful? Looking at the larger canvas of history and growing polarization today, it is difficult to be optimistic. Nonetheless, the incandescent fact that we have now marked 78 years of peace among the nuclear powers should serve to inspire everyone to master the revolutionary, inescapable challenges of our AI future.