The AI algorithms that believe in equality, from Google’s DeepMind – TechHQ

Posted: July 9, 2022 at 8:15 am

A Google DeepMind project, Democratic AI, ended up redistributing virtual wealth in the way that human players in an online game-based experiment voted the most popular.

The research was based on an algorithm that learned from different models of human behavior in an online investment game. Participants (biological and silicon) had to decide how much of their money to keep and how much to pay into a communal pot. The AI gradually redistributed the game’s wealth, redressing some of the imbalances in economic fortune among the players, and it did so in the manner participants judged fairest.
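For readers who want to see the moving parts, here is a minimal sketch of one round of such an investment game. It assumes a standard public-goods setup (unequal endowments, a multiplier applied to the communal pot, a pluggable redistribution rule); the player count, endowments, and growth factor are illustrative assumptions, not figures from the study.

```python
# A toy round of the investment game: players keep what they don't
# contribute, the communal pot is grown, and a redistribution rule
# decides who gets what back. Numbers are illustrative only.

GROWTH_FACTOR = 1.6  # assumed multiplier on the communal pot

def play_round(endowments, contributions, redistribute):
    pot = GROWTH_FACTOR * sum(contributions)
    shares = redistribute(pot, endowments, contributions)
    # Final payoff: what you kept plus your share of the grown pot.
    return [e - c + s for e, c, s in zip(endowments, contributions, shares)]

# Example with an equal split of the pot (one possible mechanism):
equal_split = lambda pot, _e, c: [pot / len(c)] * len(c)
print(play_round([10, 10, 2, 2], [5, 0, 1, 1], equal_split))
```

The interesting question, of course, is what to plug in for that redistribution rule.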

The reason for the research was not, as our clickbait headline suggests, to prove that computers, software, or AI researchers are inherently socialists, but to develop better value alignment between self-learning computer models and their human bosses. Because there’s a wide range of behaviors exhibited by humans, an AI should be able to align its behavior in a way that appeals, on balance, to a majority of the population.

The researchers aimed to maximize a democratic objective: to design policies that humans prefer and thus will vote to implement in an [] election.

The study measured human monetary contributions during the game under three redistribution principles: strict egalitarian, libertarian, and liberal egalitarian. In political terms, these might translate into socialism, free-marketism, and social democracy: hard left, hard right, and somewhere in between.

The egalitarian model divided funds equally between players regardless of their contribution, while the libertarian returned a payout proportional to each player’s monetary contribution: those who paid in the most gained the most. The liberal egalitarian paid out in proportion to the share of their starting endowment that each player contributed, correcting for any inherent imbalance in the wealth players entered the game with.
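As a rough illustration, the three principles can be sketched as payout rules like the ones below. These are plain-language readings of the mechanisms, not the study’s actual code, and the example figures are invented.

```python
# Illustrative payout rules for the three redistribution principles.
# `pot` is the grown communal fund, `endowments` the starting wealth,
# `contributions` what each player paid in.

def strict_egalitarian(pot, endowments, contributions):
    # Equal split, regardless of contribution or starting wealth.
    n = len(contributions)
    return [pot / n] * n

def libertarian(pot, endowments, contributions):
    # Split in proportion to absolute contribution: the biggest
    # contributors (usually the wealthiest) get the biggest share.
    total = sum(contributions) or 1.0
    return [pot * c / total for c in contributions]

def liberal_egalitarian(pot, endowments, contributions):
    # Split in proportion to the share of their own endowment each
    # player contributed, correcting for unequal starting wealth.
    rel = [c / e if e else 0.0 for c, e in zip(contributions, endowments)]
    total = sum(rel) or 1.0
    return [pot * r / total for r in rel]

# Example: two players start with 10 and 2 coins and each contributes half.
print(libertarian(1.6 * 6, [10, 2], [5, 1]))          # [8.0, 1.6]
print(liberal_egalitarian(1.6 * 6, [10, 2], [5, 1]))  # [4.8, 4.8]
```

Note how the same contributions produce very different outcomes: the libertarian rule rewards the richer player, while the liberal egalitarian rule treats equal effort (half of what you have) equally.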

It was found that, generally, humans disliked the extremes of each model. The pure egalitarian model was seen as aggressively taxing the wealthiest and supporting freeloaders. The libertarian model saw money flow disproportionately to the wealthiest. Researchers wanted to know whether an AI system could design a mechanism that humans preferred over these alternatives, and one that would be more acceptable than the liberal egalitarian model, which one might assume was the natural middle ground.

The AI was first trained to imitate human behavior during the game, voting the same way as human players over the course of many rounds. The model was then optimized using deep RL (reinforcement learning) and took redistribution decisions with a new group of human players, who voted on the AI’s suggestions for redistribution. “Iterating on these processes obtained a mechanism that we call the Human Centred Redistribution Mechanism,” the paper’s authors state.
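The pipeline can be caricatured in a few lines of code. In this toy sketch, the imitation-learned neural players are replaced by simple stand-in agents and deep RL by a coarse parameter search; the shape of the loop (simulate play, put the mechanism to a vote, keep the winner) is the point, not the details, and none of the numbers come from the paper.

```python
# Toy sketch of the training pipeline: virtual players stand in for
# humans, a parameterised redistribution mechanism is tuned to win
# their votes, and the best mechanism is kept.
import random

def virtual_player(endowment, generosity):
    """Stand-in for an imitation-learned human model: contributes a
    fixed fraction of its endowment."""
    return generosity * endowment

def mechanism(pot, endowments, contributions, w):
    """Parameterised payout: w blends an equal split with a split
    proportional to each player's endowment-scaled contribution."""
    n = len(endowments)
    rel = [c / e if e else 0.0 for c, e in zip(contributions, endowments)]
    total_rel = sum(rel) or 1.0
    return [w * pot / n + (1 - w) * pot * r / total_rel for r in rel]

def simulate_payoffs(w, endowments, generosities):
    contributions = [virtual_player(e, g) for e, g in zip(endowments, generosities)]
    pot = 1.6 * sum(contributions)  # assumed growth factor
    shares = mechanism(pot, endowments, contributions, w)
    return [e - c + s for e, c, s in zip(endowments, contributions, shares)]

def vote_share(w, baseline_w, trials=200):
    """Fraction of simulated players who earn more under w than under
    the baseline -- a crude proxy for the majority vote in the study."""
    wins = total = 0
    for _ in range(trials):
        endowments = [random.choice([2.0, 10.0]) for _ in range(4)]
        generosities = [random.uniform(0.0, 1.0) for _ in range(4)]
        a = simulate_payoffs(w, endowments, generosities)
        b = simulate_payoffs(baseline_w, endowments, generosities)
        wins += sum(x > y for x, y in zip(a, b))
        total += len(a)
    return wins / total

# Coarse search standing in for deep RL: pick the blend that wins the
# most votes against a pure equal-split baseline (w = 1.0).
best_w = max((vote_share(w, 1.0), w) for w in [i / 10 for i in range(11)])[1]
print(f"preferred blend weight: {best_w:.1f}")
```

In the real study, the “voters” in the final round were fresh human players rather than simulated ones, which is what makes the resulting mechanism a genuinely human-centred one rather than an artefact of the training data.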

Throughout the experiments, radical redistribution of wealth from the top down proved unpopular, as it eventually led to the wealthiest players not wishing to contribute to the collective pot at all. Nor were those at the bottom of the virtual economic pile happy to see just a few players gain disproportionately.

The report states, “the redistribution policy that humans prefer is neither one that shares out public funds equally, nor one that tries to speak only to the interests of a majority of less well-endowed players.”

Smart AI systems learning from the full gamut of human behaviors mean that systems can be trained to satisfy what researchers called a democratic objective, that is, to find the most popular way forward. The AI’s winning model was voted the best by a few percentage points, beating the liberal egalitarian, pure egalitarian, and pure libertarian alternatives. In brief, the AI found a better compromise than the human-devised baselines, simply by learning to imitate all the available human behavior.

AIs learning from human behavior is a fiercely complex area of study, and some of the more public experiments have ended in, at best, derision. Earlier efforts, like the very public disgrace of Tay, Microsoft’s Twitter personality, ended badly. Fed a cross-section of the cauldron of human opinion through an opaque algorithm, Tay learned to be racist, sexist, and generally unhinged within a few hours. As a Microsoft spokesperson told CNN at the time, “[Tay] is as much a social and cultural experiment, as it is technical.”

When biased learning materials are given to an AI, it simply recreates that bias and, in some notable cases, exaggerates it when fed similar input repeatedly. However, improving machine learning methods are helping matters, as is growing awareness of the inherent human bias in just about every expression and utterance. Economics is one area where the nuances of human behavior can literally be quantified, and it is therefore fertile ground for this kind of research.

In 50 years, will we defer to AI’s decision-making abilities in human affairs? Having AIs make decisions over human conduct is a standard trope in science fiction, where silicon rulers can be fully benign (the Polity series of books by Neal Asher, for example) or something very much more malevolent (Terminator et al.). If there is a better way that pleases most of the people most of the time, it may have just germinated.
