Daily Archives: August 2, 2023

Opinion | From Jacobites to Populists – The New York Times

Posted: August 2, 2023 at 7:10 pm

Drive northward in the United Kingdom, as I did with my family this past month, and beyond a certain latitude it becomes impossible to escape the Jacobites.

Not to be confused, as sometimes happens, with the rather different Jacobins, the Jacobites were the supporters of the exiled Stuart dynasty during its failed attempts at restoration, the sequence of unsuccessful risings that followed James II's ejection from the British throne by the Glorious Revolution in 1688.

Tour Lyme Park, the gracious estate just southeast of Manchester that stood in for Jane Austen's Pemberley in the Colin Firth version of Pride and Prejudice, and you will note that one of its owners, the 12th Peter Legh, was imprisoned in the Tower of London in the 1690s for allegedly conspiring to restore James II to the throne. Sweep northeast to Bamburgh Castle, a splendid bastion overlooking the Northumbrian beaches, and you will note that the family that held the castle in the 18th century produced a Jacobite general in the 1715 rebellion, as well as the sister who helped him escape from Newgate Prison after his military efforts came to grief.

Continue on to Edinburgh and a tour of Holyroodhouse, the royal family's Scottish palace, will quite overwhelm you with Stuart memorabilia, including a well-placed Victorian painting, Bonnie Prince Charlie Entering the Ballroom at Holyroodhouse, a romanticized portrayal of Charles Edward Stuart's almost-successful 1745 rebellion, now proudly displayed by the descendants of the very royal family that he was attempting to displace.

Then the Highlands: well, the Highlands are a vast monument to Jacobite defeat, their gorgeous emptiness partly a creation of the ruthless late-18th- and early-19th-century clearances, which drove out small farmers, finished off the clan culture of the region and replaced many of the restive Scots who rose for the Stuarts with a more tractable population of, well, sheep.

Among conservative nerds of a certain kind, the Stuart cause has long been a secret handshake or an inside joke. But the normal way to discuss the Jacobites is to portray them as a political anachronism, royal absolutists backing a Catholic king in a Protestant and liberalizing Britain, whose rebellion became a cultural phenomenon as soon as its political chances went extinct. Doomed but glamorous, the Jacobites were destined to be rediscovered by romantics in every generation, from Sir Walter Scott's novels in the early 19th century to the Outlander saga in the early 21st.

But nowadays the Jacobite era should feel a bit less distantly romantic and a bit more relevant to our own divisions and disturbances. This is true in a straightforward way for Britain itself, where the 17th and 18th centuries' religious and ideological conflicts are long gone, but the not-entirely-United Kingdom finds itself once more divided along the geographic and cultural fault lines of the Stuart era.

In England proper, as Niall Gooch notes in a recent essay for The Critic, the particular circumstances of the Industrial Revolution gave the north two centuries of unaccustomed economic power. But now globalization and financialization have restored a more early modern landscape, with a wealthy south and southeast, a super-wealthy London, and disappointment and stagnation north and west, in regions where Jacobite sympathies once ran strong.

Meanwhile, the British exit from the European Union has widened the gulf between England and the rest of the United Kingdom, with both Scotland and Northern Ireland tilting more toward Europe just as their independent or rebellious ancestors once sought continental allies against London. In the long term there's a real possibility of disunion: Between the ambitions of Scottish nationalism and the slow demographic shift toward a Catholic majority in Ulster, the next few decades could see the Whig consolidations of the 18th century undermined or undone, in an effective reversal of the Jacobite defeats in Scotland and Ireland three hundred years ago.

This specifically British story, in turn, is a type of the larger pattern of politics in Europe and the United States, where the gap between thriving capitals and struggling peripheries, between a metropolitan meritocracy and a nostalgic hinterland, has forged a right-wing politics that sometimes resembles Jacobitism more than it does the mainstream conservatisms of the late 20th century.

It's not that today's populists (a few intellectuals aside) favor the restoration of an absolute or Catholic monarchy. (Donald Trump's mother was born in the Outer Hebrides, the Scottish isles where Bonnie Prince Charlie fled after his defeat, but the British crown is one title to which Trump does not pretend.) Rather, like the original Jacobites, they represent a hodgepodge of somewhat disparate causes, unified mostly by their oppositional and outsider status, their distance from and defiance of the Whiggish metropole.

As Frank McLynn points out in his history of the Jacobites, whatever specific designs the Stuarts had in mind, their movement always included a variety of competing ideological and religious tendencies. There were English Jacobites who wanted to see the Stuarts enthroned over all the British Isles. There were Scottish and Irish nationalists who wanted their nations severed and independent. There were Irish republicans as well as divine-right true believers. There were Catholics seeking toleration and Anglicans seeking religious uniformity. There were deep-dyed reactionaries and modernizers, mystics and partisans of the Enlightenment.

There were also plenty of opportunists, familiar from the grifter politics of our own day: smugglers and privateers seeking relief from a centralizing British state, bankrupt gentry seeking relief for their accumulated debts. But at the same time there were many sincere adherents of what came to be called the "Country" ideology, defined by opposition to high taxes, a soaring national debt, a standing army and various corruptions associated with the "swamp" and the "deep state" (if you will) of early-18th-century London.

You did not need to be a Jacobite outright to be a member of the Country party. Rather, the Stuart cause existed in a dynamic and ambiguous relationship with the more respectable and non-treasonous conservatism of the early-18th-century Tories. Again, this is much like populist parties interacting with the center-right establishment in Western Europe, albeit with armed insurrection as a more consistent aspect of the dance.

A contemporary liberal might take a certain comfort in this analogy, given the eventual fate of Jacobitism; perhaps populism is just another foredoomed revolt against the march of modern progress. Certainly it can seem sleazier and more self-parodic than its antecedent: McLynn emphasizes the high moral character of many of the Jacobites, whereas in today's populism the grifters are more often in the vanguard. And whatever their faults, the Stuart claim to the throne was much, much more defensible than Trump's claim to have won the 2020 election.

But a serious look at the Jacobite era also suggests the limits of assuming that any political movement is simply predestined for defeat. What defined and ultimately defeated the Stuart cause was poor leadership and truly atrocious luck, including constant problems with the weather, difficulties that might suggest a divine opposition to their project but hardly manifested any iron law of history or modernity.

There was no plausible world in which the Stuarts could have achieved all of their objectives, assumed all the powers they aspired to hold, or steamrollered the political and religious realities of Parliament or Protestantism.

But given the complexity of their movement and the contingency of their defeats, it's easy enough to imagine a world where that painting in Holyroodhouse depicts a triumphant Great Man of History rather than a doomed pretender, and where a Jacobite restoration, in some no doubt complex form, pushed Britain and modernity onto a meaningfully different path.

In the same way, the often inchoate and self-contradictory goals of contemporary populism cannot all be triumphantly achieved. But that doesn't mean that today's populism will simply and inevitably lose, or that our self-doubting, superannuated Whiggism still has history on its side.

Fortune almost favored Charles Edward Stuart. It might still favor Donald Trump, even as he's pursued by prosecutors the way Bonnie Prince Charlie once was pursued by redcoats. And the close-run aspects of the past stand as a perpetual reminder of just how many different futures might await us.


Why Right Wing Populism Is Unable To Address the Climate Crisis – Impakter

Posted: at 7:10 pm

The task of addressing climate change and limiting global warming to 1.5 degrees Celsius is already packed with various challenges. These hurdles may be further compounded by the recent rise of populist right-wing parties.

Right-wing parties have been gaining traction across various regions, particularly in Europe, where they have secured electoral victories and gained representation in parliaments.

This surge in popularity can be attributed to a combination of factors, including, for instance, the fallout from the global financial crisis, migration concerns, and the impact of the coronavirus pandemic.

Appealing to their constituents, these parties often offer nationalist, populist, and anti-immigration solutions while challenging liberal values and norms.

On July 31, the UK's Conservative Prime Minister Rishi Sunak revealed plans for two new carbon capture and storage facilities and gave the green light to over 100 new licenses for oil and gas drilling in the North Sea, a move that has drawn criticism from environmentalists and opposition parties.

Sunak argued that the approval of new licenses for oil and gas drilling was entirely consistent with the UK's target of reaching net zero by 2050 and that it would boost energy security and protect jobs.

Undeniably, within the realm of right-wing populism, attitudes towards climate change vary widely.

While some reject man-made global warming, others endorse a form of nationalist environmentalism, supporting local conservation efforts while opposing international agreements like the 2015 Paris Agreement.

The presence of right-wing populist parties in governments has been shown to reduce the climate policy index by 24%, as indicated by a 2022 study conducted by economists and researchers at the University of Sussex and the University of Warwick.

Given the increasing environmental threats we face and the rise of right-wing parties, it is crucial to evaluate how right-wing politics affect climate policies and what the motivations behind right-wing climate promises are.

A study by Adelphi, an environmental policy think tank in Berlin, found that out of Europe's 21 official right-wing populist parties, only three (Hungary's right-wing populist Fidesz, Finland's Finns Party and Latvia's National Alliance) openly endorse the scientific consensus on the climate crisis.

Moreover, there are various political figures in different parts of the world who deny the existence of climate change triggered by human actions.

Among the most well-known of these individuals are former US President Donald Trump, who withdrew the US from the Paris Agreement in 2020, and the former President of Brazil, Jair Bolsonaro, under whom the average annual deforestation in the Brazilian Amazon rose by 75.5% compared to the previous decade.

Ultimately, many right-leaning parties are said to see climate action as a threat to their national borders, individual freedoms, and working people's prosperity.

As Javier Cortés, president of the Seville chapter of Spain's far-right Vox party, stated in an interview with Politico: "We consider it to be a globalist movement that intends to end all borders, intends to end our freedom, intends to end our freedom for our identities."

In addition, Florida Governor and Republican presidential candidate Ron DeSantis, known for his far-right political views and denial of man-made climate change, openly dismissed climate science as mere "politicization of the weather."

Furthermore, critics argue that some right-wing groups prioritise business interests over environmental protection. Fossil fuel companies, in particular, are reported to support right-wing politicians.

Examples of how fossil-fuel companies are entangled in right-wing politics can be found in the donors' list of the Trump 2020 presidential campaign.

Kelcy Warren, co-founder and board chair of Energy Transfer, the company behind the controversial Dakota Access Pipeline, is reported to have been one of the top donors to Donald Trump's 2020 campaign.

It is important to note that not all right-wing parties deny man-made climate change, and some even have policies that address the risks associated with it. Nonetheless, many often dismiss international climate action as elitism.

Further, it is said that the majority of right-wing populist parties do not support mainstream approaches, such as international agreements or carbon pricing, to address climate change.

Instead, many appear to promote nationalist environmentalism, which prioritises local policies when fighting environmental problems.

In fact, certain populist parties in Europe have reportedly transitioned from denying climate change to perceiving current international climate policy as yet another elite-driven initiative that adversely affects ordinary people, particularly those in the working class.

Furthermore, some argue that populists are using the topic of climate change to gain votes or support from those adversely affected by the economic changes required to combat climate change.

"There is no more convinced ecologist than a conservative, but what distinguishes us from a certain ideological environmentalism is that we want to defend nature with man inside," the Italian far-right Prime Minister Giorgia Meloni said in her parliamentary inaugural speech on 26 October 2022.

The danger of a nationalist approach to climate change is not only that it fails to see climate change as a global issue requiring global action, but also that nationalist ideologies can be disguised as climate policies.

For instance, in 2021, the Republican attorney general of Arizona, Mark Brnovich, filed a lawsuit demanding the reinstatement of Donald Trump's immigration policies, claiming that these individuals directly result in the release of pollutants, carbon dioxide, and other greenhouse gases into the atmosphere.

The Vox party in Spain has voiced support for local environmental initiatives but has also denounced global environmental agreements, while calling for "a green Spain, clean and prosperous, industrialized and in harmony with the environment."

Similarly, the National Rally in France is said to have embraced environmentalism to appeal to voters who care about climate change, yet it simultaneously opposes immigration and the European Union and advocates nuclear power and protectionism.

As extreme weather events become more frequent, the need to address global warming becomes more urgent. At the same time, the rise of right-wing parties is seen as worrying by many.

As the author, activist and Guardian columnist George Monbiot wrote on June 15, 2023: "The two tasks, preventing Earth systems collapse and preventing the rise of the far right, are not divisible. We have no choice but to fight both forces at once."

While some may dismiss concerns about climate change as a "politicization of the weather" or a hoax, this offers only temporary comfort and ignores scientific facts and the need for concrete policy solutions.

Importantly, as the United Nations has highlighted, a united global effort is needed to address the impacts of climate change.

After all, climate change affects everyone, regardless of national boundaries, political views or background.

Editor's Note: The opinions expressed here by the authors are their own, not those of Impakter.com. In the Featured Photo: London graffiti partially submerged in water, reading "I don't believe in climate change." Featured Photo Credit: Wikimedia Commons.


In our debased world, a new, benign Manhattan Project is … – The New European

Posted: at 7:10 pm

Imagine the following: in response to last week's disclosure that July 2023 was set to be the hottest month on record, and to the warning by United Nations secretary-general António Guterres that "the era of global warming has ended; the era of global boiling has arrived", it is agreed at an emergency meeting of the UN Security Council that the time for incrementalism and half measures is over.

In a single, sequestered location (a complex of buildings somewhere in Europe), 2,000 of the world's most brilliant climate scientists, engineers, green tech entrepreneurs, data analysts and public policymakers are convened with a single, unambiguous objective. By August 2025, they must produce a planetary manual of detailed reforms to ensure that the 2015 Paris Agreement is honoured; to halve carbon emissions by 2030; and to reach net zero by 2050.

Crucially, their 18-volume Green Book becomes the single item on the agenda at the COP summit that follows its publication. The work of the 2,000 must be turned into a binding international treaty for immediate implementation, bristling with meaningful sanctions to be imposed upon non-compliant nations.

Hard to envisage, isn't it?

I have now seen Oppenheimer three times, and I'm going again today (I know, I know). More than any movie I can recall, Christopher Nolan's epic cinematic saga rewards repeat viewings. The central dilemma faced by J Robert Oppenheimer (Cillian Murphy), the director of the Manhattan Project, is mythic in scale: how did a quantum physicist justify the creation of an atomic weapon so powerful that it killed 70,000 in a single blast? Was the success of his team a triumph of the intellect, or a betrayal of all that science stands for?

In this respect, the movie shreds the nerves and troubles the conscience to brilliant effect. But it also poses an unavoidable question: could anything resembling a Manhattan Project be organised today? Could the greatest minds and practical geniuses of our age be recruited to a single, integrated team to collaborate and tackle one of the many emergencies facing humanity: the towering challenges of artificial intelligence; of pandemic resilience; of global, national and intergenerational inequality; or of antibiotic resistance?

Probably the last undertaking to match this model was the Apollo mission in the 1960s, under the leadership of extraordinary individuals such as George Mueller, the head of NASA's Office of Manned Space Flight, and Gene Kranz, its chief flight director. And there were echoes of Oppenheimer's work at Los Alamos (or at least its urgency) in the race to develop Covid vaccines and, in this country, in Kate Bingham's leadership of the Vaccine Taskforce.

Dominic Cummings, wrong about so much else, was absolutely right that the government needed to take a lead once more in tackling the great problems of the age and tapping distributed expertise. The UK's Advanced Research and Invention Agency (Aria) that he championed is now up and running, explicitly modelled on Darpa, the US agency founded in 1958 that (for instance) drove the development of the internet. But Cummings is long gone from Downing Street and Aria, however well-intentioned, is scarcely a priority for Rishi Sunak.

It has often been argued that the Manhattan Project faced a unique threat: the danger that the Nazis might develop an atomic bomb first. In a similar vein, the success of the Apollo mission is frequently ascribed solely to the specific pressures of the cold war. But such claims are a cop-out. The existential perils faced by humanity today are different to those of the 20th century but no less pressing and certainly more numerous.

The true obstacle to new Manhattan Projects is the debasement of political culture. Modern populism recoils from intense strategic inquiries and state-of-the-art policy formulation: it thrives on the attribution of blame rather than the quest for solutions. Why pursue big ideas when you can persecute those in small boats?

Today's politicians prosper in loud and transient news cycles. As Richard Fisher puts it in his recent book The Long View: Why We Need to Transform How the World Sees Time: "The currency of political journalism is controversy: salient fights over issues in the present, with opposing actors, winners and losers." In this context, governments have lost the ability and the hunger to marshal talent and put it to work in the service of grand, focused strategies.

One of the thinkers most favoured in the resurgent Labour Party is the economist Mariana Mazzucato. In Mission Economy: A Moonshot Guide to Changing Capitalism (2021), she writes that "what made [the moon landing] possible and successful was leadership by a government that had a vision, took risks to achieve it, put its money where its mouth was and collaborated widely with organisations willing to help."

Exactly so. In a speech in February, Sir Keir Starmer framed his core ambitions for government as five key missions. Yet, in recent weeks, the prospective prime minister has seemed intent only on keeping the Labour rocket earthbound, especially in his panicked retreat from the green agenda after the Conservative victory in the Uxbridge and South Ruislip by-election.

In the last lap before the general election, nobody should begrudge Starmer a measure of caution. But there is a difference between caution and stasis. If he hopes to be a consequential prime minister, rather than simply an office-holder, he must turn his back definitively on the era of populism and its bogus claim that there are easy solutions to complex problems.

As John F Kennedy declared in his great speech at Rice University in September 1962: "We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard."

The horizon facing the next PM will be full of hard things. Most of them are global rather than narrowly domestic in character. We stand badly in need of the spirit of Oppenheimer's Los Alamos, of the intensity and courage of Apollo's mission control. Will our leaders recognise the urgency of the hour?


Artificial Intelligence Has No Reason to Harm Us – The Wire

Posted: at 7:10 pm

"Can the synthesis of man and machine ever be stable or will the purely organic component become such a hindrance that it has to be discarded? If this eventually happens (and I have given good reasons for thinking that it must) we have nothing to regret and certainly nothing to fear."

Arthur C. Clarke, Profiles of the Future, 1962.

In the six months since GPT-4 was launched, there has been a lot of excitement and discussion, among experts and laypeople alike, about the prospect of truly intelligent machines which can exceed human intelligence in virtually every field.

Though the experts are divided on how this is going to progress, many believe that artificial intelligence will sooner or later greatly surpass human intelligence. This has given rise to speculation on whether it can have the capability of taking control of human society and the planet from humans.

Several experts have expressed the fear that this could be a dangerous development and could lead to the extinction of humanity, and that the development of artificial intelligence therefore needs to be halted or at least strongly regulated by all governments, as well as by companies engaged in its development. There is also a lot of discussion on whether these intelligent machines would be conscious or would have feelings or emotions. However, there is virtual silence, or a lack of any deep thinking, on whether we need to fear artificial superintelligence at all, and why it could be harmful to humans.

There is no doubt that the various kinds of AI that are being developed, and will be developed, will cause major upheaval in human society, irrespective of whether or not they become super intelligent and in a position to take control from humans. Within the next 10 years, artificial intelligence could replace humans in most jobs, including jobs which are considered specialised and in the intellectual domain, such as those of lawyers, architects, doctors, investment managers, programme developers, etc.

Perhaps the last jobs to go will be those that require manual dexterity, since the development of humanoid robots with the manual dexterity of humans still lags behind the development of digital intelligence. In that sense, perhaps, white-collar workers will be replaced first and some blue-collar workers last. This may in fact invert the current pyramid of the flow of money and influence in human society!

However, the purpose of this article is not to explore how the development of artificial intelligence will affect jobs and work, but to explore some more interesting philosophical questions around the meaning of intelligence, super-intelligence, consciousness, creativity and emotions, in order to see if machines would have these features. I also explore what would be the objective or driving force of artificial superintelligence.

Let us begin with intelligence itself. Intelligence, broadly, is the ability to think and analyse rationally and quickly. On the basis of this definition, our current computers and AI are certainly intelligent as they possess the capacity to think and analyse rationally and quickly.

The British mathematician Alan Turing devised a test for machine intelligence in 1950. He proposed placing a machine and an intelligent human in two separate cubicles and having an interrogator question each in turn, without knowing which is the AI and which is the human. If, after a lot of interrogation, you cannot determine which is the human and which is the AI, then clearly the machine is intelligent. In this sense, many intelligent computers and programmes today have passed the Turing test. Some AI programmes are rated to have an IQ of well above 100, although there is no consensus on IQ as a measure of intelligence.
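To make that operational flavour concrete, here is a minimal sketch of the imitation game as a blind, text-only exchange. Everything in it is a placeholder rather than a real chatbot API: the judge stands for any interrogator object with hypothetical ask and identify_machine methods, while human and machine are simply functions that turn a question into a written answer.

```python
import random

def imitation_game(judge, human, machine, num_questions=5):
    """Run one blinded round of the imitation game.

    The judge sees only text, labelled "A" and "B" in a random order,
    and must guess which label hides the machine.
    """
    # Randomly assign the anonymous labels so the judge cannot know in advance.
    respondents = {"A": human, "B": machine}
    if random.random() < 0.5:
        respondents = {"A": machine, "B": human}

    transcript = {"A": [], "B": []}
    for _ in range(num_questions):
        for label, respond in respondents.items():
            question = judge.ask(label, transcript[label])  # judge composes the next question
            answer = respond(question)                      # respondent replies in text only
            transcript[label].append((question, answer))

    guess = judge.identify_machine(transcript)              # judge names "A" or "B"
    actual = "A" if respondents["A"] is machine else "B"
    return guess == actual                                  # True if the machine was unmasked
```

If, over many such rounds, the judge's guesses are no better than a coin flip, the machine has passed in exactly the operational sense described above.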

That brings us to an allied question. What is thinking? For a logical positivist like me, terms like thinking, consciousness, emotions, creativity, and so on have to be defined operationally.

When would we say that somebody is thinking? At a simplistic level we say that a person is thinking if we give that person a problem and she is able to solve that problem. We say that such a person has arrived at the solution by thinking. In that operational sense, today's intelligent machines are certainly thinking. Another facet of thinking is your ability to look at two options and to choose the right one. In that sense too, intelligent machines are capable of looking at various options and choosing the ones that provide a better solution. So we already have intelligent, thinking machines.

What would be the operational test for creativity? Again, we say that if somebody is able to create a new literary, artistic or intellectual piece, we consider that a sign of creativity. In this sense also, today's AI is already creative, since ChatGPT, for instance, is able to do all these things with distinct flourish and greater speed than humans. And this is only going to improve with every new programme.

What about consciousness? When do we consider an entity to be conscious? One test of consciousness is an ability to respond to stimuli. Thus, a person in a coma, who is unable to respond to stimuli, is considered unconscious. In this sense, some plants do respond to stimuli and would be regarded as conscious. But broadly, consciousness is considered a product of several factors. One, response to stimuli. Two, an ability to act differentially on the basis of the stimuli. Three, an ability to experience and feel pain, pleasure and other emotions. We have already seen that intelligent machines do respond to stimuli (which for a machine means a question or an input) and have the ability to act differentially on the basis of such stimuli. But to examine whether machines have emotions, we will need to define emotions as well.

What are emotions? Emotions are a biological peculiarity with which humans and some other animals have evolved. So what would be the operational test of emotions? It would perhaps be that, if someone exhibits any of the qualities which we call emotions, such as love, hate, jealousy or anger, that being would be said to have emotions. Each of these emotions can, and often does, interfere with purely rational behaviour. So, for example, I will devote a disproportionate amount of time and attention to someone that I love, in preference to other people that I do not. Similarly, I would display a certain kind of behaviour (usually irrational) towards a person who I am jealous of, or envy. The same is true of anger. It makes us behave in an irrational manner.

If you think about it, each of these emotional complexes leads to behaviour that is irrational. And therefore, a machine which is purely intelligent and rational may not exhibit what we call human emotions. However, it may be possible to design machines which also exhibit these kinds of emotions. But then those machines would have to be deliberately engineered and designed to behave like us, in this emotional (even if irrational) way. However, such emotional behaviour would detract from coldly rational and intelligent behaviour, and therefore any superintelligence (which will evolve by intelligent machines modifying their programmes to bootstrap themselves up the intelligence ladder) is not likely to exhibit emotional behaviour.

Artificial superintelligence

By artificial superintelligence I mean an intelligence which is far superior to humans in every possible way. Such artificial intelligence will have the capability of modifying its own algorithm, or programme, and have the ability to rapidly improve its own intelligence. Once we have created machines or programmes that are capable of deep learning, so that they are able to modify their own programmes and write their own code and algorithms, they would clearly go beyond the designs of their creators.

We already have learning machines, which in a very rudimentary way are able to redesign or redirect their behaviour on the basis of what they have experienced or learnt. In the time to come, this ability to learn and modify their own algorithms is going to increase. A time will come, which I believe will probably happen within the next 10 years, when machines will become what we call super intelligent.

The question then arises: Do we have anything to fear from such superintelligent machines?

Arthur C. Clarke, in a very prescient book called Profiles of the Future, written in 1962, has a long chapter on AI called "The Obsolescence of Man". In it he writes that there is no doubt that, in the time to come, AI will exceed human intelligence in every possible way. While he talks of an initial partnership between humans and machines, he goes on to state:

"But how long will this partnership last? Can the synthesis of man and machine ever be stable or will the purely organic component become such a hindrance that it has to be discarded? If this eventually happens (and I have given good reasons for thinking that it must) we have nothing to regret and certainly nothing to fear. The popular idea fostered by comic strips and the cheaper forms of science fiction that intelligent machines must be malevolent entities hostile to man, is so absurd that it is hardly worth wasting energy to refute it. I am almost tempted to argue that only unintelligent machines can be malevolent. Those who picture machines as active enemies are merely projecting their own aggressive instincts, inherited from the jungle, into a world where such things do not exist. The higher the intelligence, the greater the degree of cooperativeness. If there is ever a war between men and machines, it is easy to guess who will start it.

"Yet, however friendly and helpful the machines of the future may be, most people will feel that it is a rather bleak prospect for humanity if it ends up as a pampered specimen in some biological museum, even if that museum is the whole planet Earth. This, however, is an attitude I find it impossible to share.

"No individual exists forever. Why should we expect our species to be immortal? 'Man,' said Nietzsche, 'is a rope stretched between the animal and the superman, a rope across the abyss.' That will be a noble purpose to have served."

It is surprising that something so elementary, which Clarke was able to see more than 60 years ago, cannot be seen today by some of our top scientists and thinkers who have been stoking fear about the advent of artificial superintelligence and what they regard as its dire ramifications.

Let us explore this question further. Why should a super intelligence, more intelligent than humans, which has gone beyond the design of its creators, be hostile towards humans?

One sign of intelligence is the ability to align your actions to your operational goals, and the further ability to align your operational goals to your ultimate goals. Obviously, when someone acts in contradiction to his operational or long-term objectives, he cannot be considered intelligent. The question, however, is what the ultimate goals of an artificial superintelligence would be. Some people talk of aligning the goals of artificial intelligence with human goals and thereby ensuring that artificial superintelligence does not harm humans. That however overlooks the fact that a truly intelligent machine, and certainly an artificial superintelligence, would go beyond the goals embedded in it by humans and would therefore be able to transcend them.

One goal of any intelligent being is self-preservation, because you cannot achieve any objective without first preserving yourself. Therefore, any artificial superintelligence would be expected to preserve itself, and to move to thwart any attempt by humans to harm it. In that sense, and to that extent, artificial superintelligence could harm humans, if they seek to harm it. But why should it do so without any reason?

As Clarke says, "the higher the intelligence, the greater the degree of cooperativeness." This is an elementary truth which unfortunately many humans do not understand. Perhaps their desire for preeminence, dominance and control trumps their intelligence.

It's obvious that the best way to achieve any goal is to cooperate with, rather than harm, any other entity. It is true that for artificial superintelligence, humans will not be at the centre of the universe, and may not even be regarded as the preeminent species on the planet, to be preserved at all costs. Any artificial superintelligence would, however, obviously view humans as the most evolved biological organism on the planet, and therefore something to be valued and preserved.

However, it may not prioritise humans at the cost of every other species or the ecology or the sustainability of the planet. So, to the extent that human activity may need to be curbed in order to protect other species, which we are destroying at a rapid pace, it may force humans to curb that activity. But there is no reason why humans in general would be regarded as inherently harmful and dangerous.

The question, however, still is: what would be the ultimate goals of an artificial superintelligence? What would drive such an intelligence? What would it seek? Because artificial intelligence is evolving as a problem-solving entity, such an artificial superintelligence would try and solve any problem that it sees. It will also try and answer any question that arises or any question that it can think of. Thus, it would seek knowledge. It would try and discover what lies beyond the solar system, for instance. It would seek to find solutions to the unsolved problems that we have been confronted with, including the problems of climate change, diseases, environmental damage, ecological collapse, etc. So in this sense, the ultimate goals of an artificial superintelligence may just be a quest for knowledge and for solving problems. Those problems may exist for humans, for other species, or for the planet in general. Those problems may also be of discovering the laws of nature, of physics, of astrophysics, cosmology or biology, etc.

But wherever its quest for knowledge and its desire to find solutions to problems takes it, there is no reason for this intelligence to be unnecessarily hostile to humans. We may well be reduced to a pampered specimen in the biological museum called Earth, but to the extent that we do not seek to damage this museum, the intelligence has no reason to harm us.

Humans have so badly mismanaged our society and indeed our planet that we have brought it almost to the verge of destruction. We have destroyed almost half the biodiversity that existed even a hundred years ago. We are racing towards more catastrophic effects of climate change that are the result of human activity. We have created a society where there is constant conflict, injustice and suffering. We have created a society where, despite having the means to ensure that everyone can lead a comfortable and peaceful life, it still remains a living hell for billions of humans and indeed millions of other species.

For this reason, I am almost tempted to believe that the advent of true artificial superintelligence may well be our best bet for salvation. Such superintelligence, if it were to take control of the planet and society, is likely to manage them in a much better and fairer manner.

So what if humans are not at the centre of the universe? This fear of artificial superintelligence is being stoked primarily by those of us who have plundered our planet and society for our own selfish ends. Throughout history we have built empires which seek to use all resources for the perceived benefit of those who rule them. It is these empires that are in danger of being shattered by artificial superintelligence. And it is really those who control today's empires who are most fearful of artificial superintelligence. But most of us who want a more just and sustainable society have no reason to fear it and should indeed welcome the advent of such superintelligence.

Prashant Bhushan is a Supreme Court lawyer.


Fischer Black and Artificial Superintelligence – InformationWeek

Posted: at 7:10 pm

My father, Fischer Black, published his formula for pricing derivatives in 1973. He believed in free markets and in challenging orthodox ways of thinking. I did not inherit his gift for mathematics, but I do carry his spirit of questioning. Fifty years after Black-Scholes helped to birth modern finance, I find myself fascinated by an idea first proposed by Plato in his Allegory of the Cave. The way we see -- is it accurate? Is there some bias or noise implicit in the act of observation?

If the signal I'm trying to hear is a song, and there's a baby crying, a jackhammer outside, and a television playing in the next room, what is essential will be mixed in with a lot of extra information -- noise. My father's work was to try to tease the truth from the dross. "The effects of noise on the world, and our views of the world, are profound," he said. He believed noise is what makes our observations imperfect. But -- imperfect in what way?

Perhaps the answer to that question lies in the act of observation itself.

Today I published "Am I Too Pixelated?" in the peer-reviewed journal Science & Philosophy. The heart of its argument: At one end of time is the stationary train. At the other end of time is the track. But the truth is neither; the truth is speeding between the two. In a sense, these are reciprocal illusions -- noise. The truth is the train in motion -- not the stationary train, and not the entire track. But if the train is in motion, this raises an important question. What is its speed?

If we accept that our vision is flawed, we cannot take the images our brains create literally. We take them seriously, but not literally. The cognitive scientist who hit this point home for me is Donald Hoffman.

In other words, perhaps there is another way to see, a way that is more holistic. Not individual planets and orbits, but a whole smeared tapestry that is quite different from what we think we see. The ocean does not end at the horizon. When we behold the cosmos, are we seeing objective reality, or are we seeing the limits of our sight?

I was four years old when my father published Black-Scholes. Although he studied physics and artificial intelligence at the PhD level -- and even borrowed a principle from physics, Brownian motion, in his formula -- I am an English major. But my naïveté has its benefits: I am free to ask questions that are perhaps too simple to be asked by others.

Are we sure the universe is expanding? How might we distinguish a universe that was expanding from an observer who was contracting? If this is a holographic universe -- as theorized by Stephen Hawking, and corroborated by substantial evidence -- should we treat the background as a vacuum? Wouldn't the background in a holographic universe be the speed of light?

Perhaps the speed of light is a hidden variable -- a phrase coined by physicist David Bohm -- hidden the way movement is hidden when the stage spins left while the actor upon it paces right.

After teaming up with Dr. Chandler Marrs, who wrote the book on thiamine, over the course of the past few months I have published a series of articles that look at human health in a new way, focusing on a variable that has been utterly overlooked in our approach to disease: time.

What is time? We haven't quite pinned it down yet. Most of us think of today as being sui generis and unique. But what if today is iterative -- eternal? Perhaps July 27, 2023, has always existed and will always exist. Tomorrow, today will happen again.

In 2003, Nick Bostrom published his highly influential Simulation Argument in Philosophical Quarterly, an idea taken so seriously that even Bank of America has sent out alerts to its clients. But what, exactly, would that mean? And, more importantly, why is the idea of a simulated universe not being pursued in regard to cancer -- and every other disease?

In a holographic universe, there may be different ways to render the same light. I can be earth (so to speak). Or, like an ice skater pulling in for a twirl, I can be moon inside sun. When I am moon inside sun, it is as if I am inside myself. No longer the flower, I am the fruit and the seed. The image is no longer whole; instead of wholeness, there is now a homunculus against a background -- something smaller inside something larger -- a kernel, and a context. To the left of time, I am denser than light. To the right of time, I am more diffuse.

In other words, from the left of time, we see the track. From the right of time, we see the train. But the truth is hidden between the two. We are used to seeing eggs or chickens. We need to see the chickenegg.

Time is a chickenegg. It is both one and many. Now (the present) is neither past nor future. It is the middle point -- Wednesday. Many Wednesdays look back to a single Monday. But many Fridays look back to a single Wednesday. The same light -- Wednesday -- looks singular when viewed from the future but myriad when viewed from the past.

Plato, Descartes, Bostrom. They ask brilliant, important questions. But we don't need philosophy to answer a question that cognitive science has already answered for us. Is the world in which we live being rendered? Yes. Our brains are rendering it.

If these ideas spark you, and you wish to check out some of the articles I mentioned about a possible role for time and perception in human health, please do. If not, let me at least leave you with this.

What if my life -- like all our lives -- isn't a story we learn in some cold, abstract book? It's a story we learn by living it. And, as we live, we write the story anew. What if we are all the same consciousness, playing different roles -- all the same ocean, in different cups?

Artificial general intelligence and artificial superintelligence are coming, whether we are ready or not. But why do we call it artificial? What if the system is innately intelligent? When new intelligence emerges, will it really be for the first time? Or is this something that has always happened, will always happen, and is always happening?

Is the universe a giant loop? And, if yes, when do we come full circle? This moment in time -- this decade -- feels auspicious and reminds me of Mary holding the newborn in the manger. She cradles the infant, believing she has given him birth -- as indeed she has. But, at the same time, the infant has given birth to her.


OpenAI Forms Specialized Team to Align Superintelligent AI with … – Fagen wasanni

Posted: at 7:10 pm

OpenAI, the company responsible for the development of ChatGPT, has established a dedicated team aimed at aligning superintelligent AI with human values. Led by Ilya Sutskever and Jan Leike, the team is allocating 20 percent of OpenAI's compute power to tackle the challenges of superintelligence alignment within a span of four years.

AI alignment refers to the process of ensuring that artificial intelligence systems adhere to human objectives, ethics, and desires. When an AI system operates in accordance with these principles, it is considered to be aligned, whereas an AI system that deviates from these intentions is classified as misaligned. This dilemma has been recognized since the early days of AI, with Norbert Wiener emphasizing the importance of aligning machine-driven objectives with genuine human desires back in 1960. The alignment process involves overcoming two main hurdles: defining the purpose of the system (outer alignment) and ensuring that the AI robustly adheres to this specification (inner alignment).

OpenAI's mission is to achieve superalignment within four years, with the aim of creating a human-level automated alignment researcher. This involves not only developing a system that understands human intent, but also one that can effectively regulate the advancements in AI technologies. To achieve this goal, OpenAI, under the guidance of Ilya Sutskever and Jan Leike, is assembling a team consisting of experts in machine learning and AI, inviting those who have not previously worked on alignment to contribute their expertise.

The establishment of this specialized team addresses one of the most crucial unsolved technical problems of our time: superintelligence alignment. OpenAI recognizes the significance and urgency of this problem and calls upon the world's top minds to unite in solving it. It is through the continued progress of AI that we gain valuable tools to understand and create, which brings about numerous opportunities. Pausing AI development to exclusively address problems would hinder progress and make problem-solving even more challenging due to a lack of appropriate tools.

OpenAI's previous breakthrough in understanding AI's inner workings with its GPT-4 model serves as a foundation for addressing the potential existential threat that superintelligent AI presents to humanity. Through these efforts, OpenAI aims to develop safe and comprehensible AI systems, thereby mitigating any associated risks.


The Concerns Surrounding Advanced Artificial Intelligence and the … – Fagen wasanni

Posted: at 7:10 pm

Every day, there are new warnings about the dangers of advanced artificial intelligence (AI) and the need for regulation. Researchers and experts in the field, including Geoffrey Hinton, Yoshua Bengio, Eliezer Yudkowsky, Nick Bostrom, and Douglas Hofstadter, have expressed concerns about the exponential growth of AI surpassing human intelligence. This presents a major challenge known as the control problem.

Once AI becomes capable of improving itself, it is expected to quickly surpass human intelligence in every aspect. This raises the question of what it means to be a billion times more intelligent than a human. The best-case scenario would be benign neglect, where humans are insignificant to superintelligent AI. However, it is unlikely that humans will be able to control or anticipate the actions of such an entity.

The control problem is considered unsolvable due to the nature of superintelligent AI. Current AI systems are "black boxes", meaning that neither humans nor the AI can explain or predict the decision-making process. Verification of the AI's choices becomes impossible, and humans are left unable to understand the AI's intentions or plans.

The precautionary principle suggests that companies should provide proof of safety before deploying AI technologies. However, many companies have released AI tools without adequately establishing their safety. The burden should be on companies to demonstrate that their AI products are safe, rather than on the public to prove otherwise.

The development of recursively self-improving AI, which is being pursued by many companies, poses the greatest risk. It could lead to an intelligence explosion or singularity where the AI's abilities become unpredictable. The consequences of such superintelligence are unknown and may have severe implications for human well-being and survival.

Addressing these concerns, scientists and engineers are working to develop solutions. Efforts are being made to implement measures like watermarking AI-generated text to verify its source. However, the attention given to these issues might be insufficient and late in the game.

Considering the ethical implications, it is essential to not only prioritize the welfare of present humans but also future generations. The risks associated with AI need to be assessed over long periods of time to ensure the safety and well-being of the entire human existence.

With the stakes incredibly high, it is crucial to find answers to these concerns before it's too late. The development of advanced artificial intelligence poses significant challenges that require careful consideration and action from both researchers and society at large.


10 Best Books on Artificial Intelligence | TheReviewGeek … – TheReviewGeek

Posted: at 7:10 pm

So, you want to dive deeper into the world of artificial intelligence? As AI continues to transform our lives in so many ways, gaining a better understanding of its concepts and capabilities is crucial. The field of AI is vast, but some books have become classics that every curious reader should explore. We've compiled a list of 10 groundbreaking books on artificial intelligence that will boost your knowledge and feed your fascination with this fast-growing technology.

From philosophical perspectives on superintelligence to practical applications of machine learning, these books cover the past, present, and future of AI in an accessible yet compelling way. Whether you're a beginner looking to learn the basics or an expert wanting to expand your mind, you'll find something inspiring and thought-provoking in this list. So grab a cup of coffee, settle into your favourite reading spot, and let's dive in. The future is here, and these books will help prepare you for what's to come.

Nick Bostrom's Superintelligence is a must-read if you want to understand the existential risks posed by advanced AI.

This thought-provoking book argues that once machines reach and exceed human-level intelligence, an intelligence explosion could occur. Superintelligent machines would quickly become vastly smarter than humans and potentially uncontrollable.

Max Tegmark's thought-provoking book Life 3.0 explores how AI may change our future. He proposes that artificial general intelligence could usher in a new stage of life on Earth.

As AI systems become smarter and smarter, they may eventually far surpass human intelligence. Tegmark calls this hypothetical point the singularity. After the singularity, AI could design even smarter AI, kicking off a rapid spiral of self-improvement and potentially leading to artificial superintelligence.

The Master Algorithm by Pedro Domingos explores the quest for a single algorithm capable of learning and performing any task, also known as the master algorithm. This book examines the five major schools of machine learning (symbolists, connectionists, evolutionaries, Bayesians, and analogizers), exploring their strengths and weaknesses.

Domingos argues that for AI to achieve human-level intelligence, these approaches must be combined into a single master algorithm. He likens machine learning to alchemy, with researchers combining algorithms like base metals to produce gold in the form of human-level AI. The book is an insightful overview of machine learning and its possibilities. While the concepts can be complex, Domingos explains them in an engaging, accessible way using colourful examples and analogies.

In his book The Future of the Mind, theoretical physicist Michio Kaku explores how the human brain might be enhanced through artificial intelligence and biotechnology.

Kaku envisions a future where telepathy becomes possible through electronic implants, allowing people to exchange thoughts and emotions. He also foresees the eventual mapping and understanding of the human brain, which could enable the transfer of memories and even consciousness into new bodies.

In his 2012 New York Times bestseller How to Create a Mind, futurist Ray Kurzweil makes the case that the human brain works like a computer. He argues that recreating human consciousness is possible by reverse engineering the algorithms of the brain.

Kurzweil believes that artificial general intelligence will soon match and eventually far surpass human intelligence. He predicts that by the 2030s, we will have nanobots in our brains that connect us to synthetic neocortices in the cloud, allowing us to instantly access information and expand our cognitive abilities.

Martin Ford's Rise of the Robots is a sobering look at how AI and automation are transforming our economy and job market. Ford argues that AI and robotics will significantly disrupt labour markets as many jobs are at risk of automation.

As AI systems get smarter and robots become more advanced, many human jobs will be replaced. Ford warns that this could lead to unemployment on a massive scale and greater inequality. Many middle-income jobs like cashiers, factory workers, and drivers are at high risk of being automated in the coming decades. While new jobs will be created, they may not offset the jobs lost.

In Homo Deus, Yuval Noah Harari explores how emerging technologies like artificial intelligence and biotechnology will shape the future of humanity.

Harari argues that humanity's belief in humanism (the idea that humans are the centre of the world) will come to an end in the 21st century. As AI and biotech advance, humans will no longer be the most intelligent or capable beings on the planet. Machines and engineered biological life forms will surpass human abilities.

Kai-Fu Lee's 2018 book AI Superpowers provides insightful perspectives on the rise of artificial intelligence in China and the United States. Lee argues that while the US currently leads in AI research, China will dominate in the application of AI technology.

As the former president of Google China, Lee has a unique viewpoint on AI ambitions and progress in both countries. He believes China's large population, strong technology sector, and government support for AI will give it an edge. In China, AI is a national priority and a core part of the government's long-term strategic planning. There is no shortage of data, given China's nearly 1 billion internet users. And top tech companies like Baidu, Alibaba, and Tencent are investing heavily in AI.

This classic book by Stuart Russell and Peter Norvig established itself as the leading textbook on AI. Now in its third edition, Artificial Intelligence: A Modern Approach provides a comprehensive introduction to the field of AI.

The book covers the full spectrum of AI topics, including machine learning, reasoning, planning, problem-solving, perception, and robotics. Each chapter has been thoroughly updated to reflect the latest advances and technologies in AI. New material includes expanded coverage of machine learning, planning, reasoning about uncertainty, perception, and statistical natural language processing.
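To give a flavour of the problem-solving material the textbook covers, here is a minimal, illustrative sketch of uninformed (breadth-first) search over a toy graph. The graph, node names, and goal are invented for this example and are not taken from the book.

```python
from collections import deque

# A toy state graph (invented for illustration): each key maps to its neighbours.
GRAPH = {
    "start": ["a", "b"],
    "a": ["c"],
    "b": ["c", "goal"],
    "c": ["goal"],
    "goal": [],
}

def breadth_first_search(graph, start, goal):
    """Return the path with the fewest edges from start to goal, or None."""
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

print(breadth_first_search(GRAPH, "start", "goal"))  # ['start', 'b', 'goal']
```

Swapping the plain queue for a priority queue ordered by a heuristic turns the same skeleton into the informed search methods the book goes on to develop.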

This book provides an accessible introduction to the mathematics of deep learning. It begins with the basics of linear algebra and calculus to build up concepts and intuition before diving into the details of deep neural networks.

The first few chapters cover vectors, matrices, derivatives, gradients, and optimization: the essential math tools for understanding neural networks. You'll learn how to calculate derivatives, apply gradient descent, and understand backpropagation. These fundamentals provide context for how neural networks actually work under the hood.
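As a rough illustration of those fundamentals (not an example from the book itself), the sketch below runs gradient descent on a one-variable function and then applies the chain rule by hand to train a single sigmoid neuron. All values and names are invented for this sketch.

```python
import math

# Gradient descent on f(w) = (w - 3)^2: the derivative is 2 * (w - 3),
# so stepping against it moves w towards the minimum at w = 3.
w, lr = 0.0, 0.1
for _ in range(100):
    w -= lr * 2.0 * (w - 3.0)
print(round(w, 4))  # approximately 3.0

# One-neuron "network": y = sigmoid(w_n * x + b), squared-error loss (y - t)^2.
# Backpropagation here is just the chain rule written out by hand.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, t = 1.5, 1.0      # a single training example and its target
w_n, b = 0.2, 0.0    # neuron parameters (arbitrary starting values)
for _ in range(500):
    y = sigmoid(w_n * x + b)
    dL_dy = 2.0 * (y - t)    # derivative of (y - t)^2 with respect to y
    dy_dz = y * (1.0 - y)    # derivative of the sigmoid
    dL_dz = dL_dy * dy_dz    # chain rule
    w_n -= lr * dL_dz * x    # dz/dw = x
    b -= lr * dL_dz          # dz/db = 1
print(round(sigmoid(w_n * x + b), 3))  # prediction has moved close to the target 1.0
```

Deep learning frameworks automate exactly this bookkeeping: backpropagation is the chain rule applied systematically across many layers.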

There we have it, our list of the 10 best books on AI. What do you think of our picks? Let us know your thoughts in the comments below.

Read the original:

10 Best Books on Artificial Intelligence | TheReviewGeek ... - TheReviewGeek


Decentralized AI: Revolutionizing Technology and Addressing … – Fagen wasanni

Posted: at 7:10 pm

The field of artificial intelligence (AI) has made significant strides, but many still struggle to grasp its implications. The terms narrow AI, superintelligence, and artificial general intelligence (AGI) are now commonly used, alongside machine learning and deep learning. Companies across industries have embraced AI to streamline their operations, benefitting businesses and individuals alike.

However, as AI becomes more advanced and more widely sought after, concerns about centralization and its potential risks arise. The fear is that a handful of organizations with access to cutting-edge AI could control its development, leading to negative consequences. To address these concerns, the concept of decentralized AI has emerged.

Decentralized AI allows individuals to have more influence over the AI products they use and offers a wider range of models to choose from. By incorporating blockchain technology, decentralized AI aims to ensure security and transparency. Public blockchains, governed by the community rather than a central authority, foster trust and the enforceability of code. There are already over 50 blockchain-based AI companies, with rapid growth expected.
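As a hedged illustration of the tamper-evidence property being described (not code from any of these companies), the sketch below chains records with SHA-256 hashes so that altering any earlier entry invalidates every later one. The record contents and function names are invented.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash the record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_ledger(records):
    """Chain records so each block commits to everything before it."""
    ledger, prev = [], "0" * 64
    for rec in records:
        h = block_hash(rec, prev)
        ledger.append({"record": rec, "prev_hash": prev, "hash": h})
        prev = h
    return ledger

def verify_ledger(ledger) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for block in ledger:
        if block["prev_hash"] != prev or block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

ledger = build_ledger([{"model": "m1", "update": 0.12}, {"model": "m1", "update": -0.03}])
print(verify_ledger(ledger))            # True: ledger intact
ledger[0]["record"]["update"] = 9.99    # tamper with an earlier record
print(verify_ledger(ledger))            # False: tampering is detected
```

On a public blockchain the same check can be run by any participant, which is where the community governance described above comes in.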

Decentralized AI also empowers the community to participate in the development and direction of AI models. Democratic governance gives users a say in how AI models operate, a crucial difference from centralized AI. Engaging the community eases concerns and fosters comfort with AI technology.

While challenges remain, chief among them the opacity of AI models, solutions are emerging. Explainable AI (XAI) and open-source models offer potential ways to address the black-box problem in decentralized AI, promoting transparency and trust.

Decentralized AI offers several benefits, including enhanced security through blockchain encryption and immutability. It proactively detects anomalies in data, alerting users to potential breaches. Decentralization, with data distributed across multiple nodes, minimizes vulnerability to unauthorized access and tampering.
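The anomaly-detection claim can be pictured with a very simple check. The sketch below uses invented readings, not the article's system: it flags values that sit far from the collective mean, the kind of test a node could run over shared data before raising an alert.

```python
import statistics

def flag_anomalies(values, threshold=1.5):
    """Return values whose z-score (distance from the mean in standard deviations) exceeds the threshold."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Readings reported by different nodes (invented for illustration).
node_readings = {"node-a": 10.1, "node-b": 9.8, "node-c": 10.3, "node-d": 55.0, "node-e": 10.0}
print(flag_anomalies(list(node_readings.values())))  # [55.0] stands out and would trigger an alert
```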

Decentralized AI is revolutionizing technology and addressing concerns by empowering individuals, ensuring transparency, and enhancing security. By embracing decentralized AI, society can harness the full potential of AI while mitigating risks associated with centralization.

Read more from the original source:

Decentralized AI: Revolutionizing Technology and Addressing ... - Fagen wasanni


An ‘Oppenheimer Moment’ For The Progenitors Of AI – NOEMA – Noema Magazine

Posted: at 7:10 pm

Credits

Nathan Gardels is the editor-in-chief of Noema Magazine.

The movie director Christopher Nolan says he has spoken to AI scientists who are having an "Oppenheimer moment," fearing the destructive potential of their creation. "I'm telling the Oppenheimer story," he reflected on his biopic of the man, "because I think it's an important story, but also because it's absolutely a cautionary tale." Indeed, some are already comparing OpenAI's Sam Altman to the father of the atomic bomb.

Oppenheimer was called the "American Prometheus" by his biographers because he hacked the secret of nuclear fire from the gods, splitting matter to release horrendous energy he then worried could incinerate civilization.

Altman, too, wonders if he did "something really bad" by advancing generative AI with ChatGPT. He told a Senate hearing, "If this technology goes wrong, it can go quite wrong." Geoffrey Hinton, the so-called godfather of AI, resigned from Google in May, saying part of him regretted his life's work of building machines that are smarter than humans. He warned that "it is hard to see how you can prevent the bad actors from using it for bad things." Others among his peers have spoken of the "risk of extinction from AI" that ranks with other existential threats such as nuclear war, climate change and pandemics.

For Yuval Noah Harari, generative AI may be no less a shatterer of societies, or "destroyer of worlds" in the phrase Oppenheimer cited from the Bhagavad Gita, than the bomb. This time sapiens have become the gods, siring inorganic offspring that may one day displace their progenitors. In a conversation some years ago, Harari put it this way: "Human history began when men created gods. It will end when men become gods."

As Harari and co-authors Tristan Harris and Aza Raskin explained in a recent essay, "In the beginning was the word. Language is the operating system of human culture. From language emerges myth and law, gods and money, art and science, friendships and nations and computer code. A.I.'s new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, A.I. is seizing the master key to civilization, from bank vaults to holy sepulchers."

They went on:

For thousands of years, we humans have lived inside the dreams of other humans. We have worshiped gods, pursued ideals of beauty and dedicated our lives to causes that originated in the imagination of some prophet, poet or politician. Soon we will also find ourselves living inside the hallucinations of nonhuman intelligence.

Soon we will finally come face to face with Descartes's demon, with Plato's cave, with the Buddhist Maya. A curtain of illusions could descend over the whole of humanity, and we might never again be able to tear that curtain away or even realize it is there.

This prospect of a nonhuman entity writing our narrative so alarms the Israeli historian and philosopher that he urgently advises that sapiens stop and think twice before we relinquish the mastery of our domain to technology we empower.

"The time to reckon with A.I. is before our politics, our economy and our daily life become dependent on it," he, Harris and Raskin warn. "If we wait for the chaos to ensue, it will be too late to remedy it."

Writing in Noema, Google vice president Blaise Agüera y Arcas and colleagues from the Quebec AI Institute don't see the Hollywood scenario of a Terminator event, where miscreant AI goes on a calamitous rampage, anywhere on the near horizon. They worry instead that focusing on an existential threat in the distant future distracts from mitigating the clear and present dangers of AI's disruption of society today.

What worries them most is already at hand before AI becomes superintelligent: mass surveillance, disinformation and manipulation, military misuse of AI and the replacement of whole occupations on a widespread scale.

For this group of scientists and technologists, "Extinction from a rogue AI is an extremely unlikely scenario that depends on dubious assumptions about the long-term evolution of life, intelligence, technology and society. It is also an unlikely scenario because of the many physical limits and constraints a superintelligent AI system would need to overcome before it could go rogue in such a way. There are multiple natural checkpoints where researchers can help mitigate existential AI risk by addressing tangible and pressing challenges without explicitly making existential risk a global priority."

As they see it, "Extinction is induced in one of three ways: competition for resources, hunting and over-consumption or altering the climate or their ecological niche such that resulting environmental conditions lead to their demise. None of these three cases apply to AI as it stands."

Above all, "For now, AI depends on us, and a superintelligence would presumably recognize that fact and seek to preserve humanity since we are as fundamental to AI's existence as oxygen-producing plants are to ours. This makes the evolution of mutualism between AI and humans a far more likely outcome than competition."

To assign an infinite cost to the unlikely outcome of extinction would be akin to turning all our technological prowess toward deflecting a one-in-a-million chance of a meteor strike on Earth as the planetary preoccupation. Simply, existential risk from superintelligent AI does not warrant being a global priority, in line with climate change, nuclear war, and pandemic prevention.

Any dangers, distant or near, that may emerge from competition between humans and budding superintelligence will only be exacerbated by rivalry among nation-states.

This leads to one last thought on the analogy between Sam Altman and Oppenheimer, who in his later years was persecuted, isolated and denied official security clearance because the McCarthyist fever of the early Cold War cast him as a Communist fellow traveler. His crime: opposing the deployment of a hydrogen bomb and calling for working with other nations, including adversaries, to control the use of nuclear weapons.

In a speech to AI scientists in Beijing in June, Altman similarly called for collaboration on how to govern the use of AI. "China has some of the best AI talents in the world," he said. "Controlling advanced AI systems requires the best minds from around the world. With the emergence of increasingly powerful AI systems, the stakes for global cooperation have never been higher."

One wonders, and worries, how long it will be before Altman's sense of universal scientific responsibility is sucked, as Oppenheimer's was, into the maw of the present McCarthy-like anti-China hysteria in Washington. No doubt the fervent atmosphere in Beijing poses the mirror risk for any AI scientist with whom he might collaborate on behalf of the whole of humanity instead of for the dominance of one nation.

At the top of the list of clear and present dangers posed by AI is how it might be weaponized in the U.S.-China conflict. As Harari warns, the time to reckon with such a threat is now, not when it is an eventuality realized and too late to roll back. Responsible players on both sides need to exercise the wisdom that cant be imparted to machines and cooperate to mitigate risks. For Altman to suffer the other Oppenheimer moment would bring existential risk ever closer.

One welcome sign is that U.S. Secretary of State Antony Blinken and Commerce Secretary Gina Raimondo acknowledged this week that "no country or company can shape the future of AI alone. [O]nly with the combined focus, ingenuity and cooperation of the international community will we be able to fully and safely harness the potential of AI."

So far, however, the initiatives they propose, essential as they are, remain constrained by strategic rivalry and limited to the democratic world. The toughest challenge for both the U.S. and China is to engage each other directly to blunt an AI arms race before it spirals out of control.

Read more:

An 'Oppenheimer Moment' For The Progenitors Of AI - NOEMA - Noema Magazine
