Sébastien Bubeck, a machine learning researcher at Microsoft, woke up one night last September thinking about artificial intelligence and unicorns.
Bubeck had recently gotten early access to GPT-4, a powerful text generation algorithm from OpenAI and an upgrade to the machine learning model at the heart of the wildly popular chatbot ChatGPT. Bubeck was part of a team working to integrate the new AI system into Microsoft's Bing search engine. But he and his colleagues kept marveling at how different GPT-4 seemed from anything they'd seen before.
GPT-4, like its predecessors, had been fed massive amounts of text and code and trained to use the statistical patterns in that corpus to predict the words that should be generated in reply to a piece of text input. But to Bubeck, the system's output seemed to do so much more than just make statistically plausible guesses.
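To make "statistically plausible guesses" concrete: stripped to its bare essentials, next-word prediction can be demonstrated with a toy bigram model that simply counts which word most often follows another. The sketch below is a drastic simplification, not GPT-4's architecture; it only illustrates what predicting text from statistical patterns means.

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word tends to follow which in a corpus,
# then predict the statistically most likely continuation.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat", the most frequent successor
```

GPT-4 replaces these raw counts with a neural network spanning billions of parameters and conditions on the entire preceding context, but the training objective is the same in spirit: given the text so far, predict what comes next.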
That night, Bubeck got up, went to his computer, and asked GPT-4 to draw a unicorn using TikZ, a relatively obscure programming language for generating scientific diagrams. Bubeck was using a version of GPT-4 that only worked with text, not images. But the code the model presented him with, when fed into TikZ rendering software, produced a crude yet distinctly unicorny image cobbled together from ovals, rectangles, and a triangle. To Bubeck, such a feat surely required some abstract grasp of the elements of such a creature. "Something new is happening here," he says. "Maybe for the first time we have something that we could call intelligence."
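For readers curious what "cobbled together from ovals, rectangles, and a triangle" looks like in code, here is a hypothetical sketch of the same idea, written in Python with matplotlib rather than TikZ. It is not Bubeck's prompt or GPT-4's actual output, just an illustration of how a crude figure can be composed from geometric primitives.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse, Rectangle, Polygon

# Compose a crude animal-like figure from geometric primitives, echoing
# the ovals, rectangles, and triangle of the TikZ unicorn anecdote.
fig, ax = plt.subplots()
ax.add_patch(Ellipse((0.5, 0.4), 0.5, 0.25))    # body
ax.add_patch(Ellipse((0.85, 0.65), 0.2, 0.15))  # head
ax.add_patch(Rectangle((0.3, 0.1), 0.05, 0.2))  # front leg
ax.add_patch(Rectangle((0.6, 0.1), 0.05, 0.2))  # hind leg
ax.add_patch(Polygon([(0.9, 0.72), (0.95, 0.72), (0.925, 0.9)]))  # horn
ax.set_xlim(0, 1.2)
ax.set_ylim(0, 1)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```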
How intelligent AI is becoming, and how much to trust the increasingly common feeling that a piece of software is intelligent, has become a pressing, almost panic-inducing question.
After OpenAI released ChatGPT, then powered by GPT-3, last November, it stunned the world with its ability to write poetry and prose on a vast array of subjects, solve coding problems, and synthesize knowledge from the web. But awe has been coupled with shock and concern about the potential for academic fraud, misinformation, and mass unemployment, and fears that companies like Microsoft are rushing to develop technology that could prove dangerous.
Understanding the potential and risks of AI's new abilities means having a clear grasp of what those abilities are, and are not. But while there's broad agreement that ChatGPT and similar systems give computers significant new skills, researchers are only just beginning to study these behaviors and determine what's going on behind the prompt.
While OpenAI has promoted GPT-4 by touting its performance on bar and med school exams, scientists who study aspects of human intelligence say its remarkable capabilities differ from our own in crucial ways. The model's tendency to make things up is well known, but the divergence goes deeper. And with millions of people using the technology every day and companies betting their future on it, this is a mystery of huge importance.
Bubeck and other AI researchers at Microsoft were inspired to wade into the debate by their experiences with GPT-4. A few weeks after the system was plugged into Bing and its new chat feature was launched, the company released a paper claiming that, in early experiments, GPT-4 showed "sparks of artificial general intelligence."
The authors presented a scattering of examples in which the system performed tasks that appear to reflect more general intelligence, significantly beyond previous systems such as GPT-3. The examples show that unlike most previous AI programs, GPT-4 is not limited to a specific task but can turn its hand to all sorts of problems, a necessary quality of general intelligence.
The authors also suggest that these systems demonstrate an ability to reason, plan, learn from experience, and transfer concepts from one modality to another, such as from text to imagery. "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system," the paper states.
Bubeck's paper, written with 14 others, including Microsoft's chief scientific officer, was met with pushback from AI researchers and experts on social media. Use of the term AGI, a vague descriptor sometimes used to allude to the idea of super-intelligent or godlike machines, irked some researchers, who saw it as a symptom of the current hype.
The fact that Microsoft has invested more than $10 billion in OpenAI suggested to some researchers that the company's AI experts had an incentive to hype GPT-4's potential while downplaying its limitations. Others griped that the experiments are impossible to replicate because GPT-4 rarely responds in the same way when a prompt is repeated, and because OpenAI has not shared details of its design. Of course, people also asked why GPT-4 still makes ridiculous mistakes if it is really so smart.
Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, says Microsoft's paper "shows some interesting phenomena and then makes some really over-the-top claims." Touting systems that are highly intelligent encourages users to trust them even when they're deeply flawed, she says. Ringer also points out that while it may be tempting to borrow ideas from systems developed to measure human intelligence, many have proven unreliable and even rooted in racism.
Bubeck admits that his study has its limits, including the reproducibility issue, and that GPT-4 also has big blind spots. He says use of the term AGI was meant to provoke debate. "Intelligence is by definition general," he says. "We wanted to get at the intelligence of the model and how broad it is, that it covers many, many domains."
But for all of the examples cited in Bubeck's paper, there are many that show GPT-4 getting things blatantly wrong, often on the very tasks Microsoft's team used to tout its success. For example, GPT-4's ability to suggest a stable way to stack a challenging collection of objects (a book, four tennis balls, a nail, a wine glass, a wad of gum, and some uncooked spaghetti) seems to point to a grasp of the physical properties of the world that is second nature to humans, including infants. However, changing the items and the request can result in bizarre failures that suggest GPT-4's grasp of physics is not complete or consistent.
Bubeck notes that GPT-4 lacks a working memory and is hopeless at planning ahead. "GPT-4 is not good at this, and maybe large language models in general will never be good at it," he says, referring to the large-scale machine learning algorithms at the heart of systems like GPT-4. "If you want to say that intelligence is planning, then GPT-4 is not intelligent."
One thing beyond debate is that the workings of GPT-4 and other powerful AI language models do not resemble the biology of brains or the processes of the human mind. The algorithms must be fed an absurd amount of training data (a significant portion of all the text on the internet), far more than a human needs to learn language skills. The experience that imbues GPT-4, and things built with it, with smarts is shoveled in wholesale rather than gained through interaction with the world and didactic dialog. And with no working memory, ChatGPT can maintain the thread of a conversation only by feeding itself the history of the conversation over again at each turn. Yet despite these differences, GPT-4 is clearly a leap forward, and scientists who research intelligence say its abilities need further interrogation.
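That last point, re-feeding the conversation history at each turn, is worth making concrete. The sketch below shows the basic loop; `generate_reply` is a hypothetical stand-in for a call to the underlying model, which statelessly consumes whatever transcript it is handed.

```python
# Sketch of how a chatbot with no working memory keeps a conversation
# coherent: the full transcript is resent to the model on every turn.
def generate_reply(transcript: list[dict]) -> str:
    # Hypothetical stand-in: a real system would send the entire
    # transcript to the language model and return its generated reply.
    return f"(model reply, conditioned on {len(transcript)} prior messages)"

history = []
while True:
    user_msg = input("You: ")
    history.append({"role": "user", "content": user_msg})
    reply = generate_reply(history)  # the model sees the whole history
    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```

Because the model itself retains nothing between calls, everything it appears to "remember" must fit inside the transcript it is shown.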
A team of cognitive scientists, linguists, neuroscientists, and computer scientists from MIT, UCLA, and the University of Texas at Austin posted a research paper in January that explores how the abilities of large language models differ from those of humans.
The group concluded that while large language models demonstrate impressive linguistic skill, including the ability to coherently generate a complex essay on a given theme, that is not the same as understanding language and how to use it in the world. That disconnect may be why language models have begun to imitate the kind of commonsense reasoning needed to stack objects or solve riddles. But the systems still make strange mistakes when it comes to understanding social relationships, how the physical world works, and how people think.
The way these models use language, by predicting the words most likely to come after a given string, is very different from how humans speak or write to convey concepts or intentions. The statistical approach can cause chatbots to follow and reflect back the language of users' prompts to the point of absurdity.
When a chatbot tells someone to leave their spouse, for example, it only comes up with the answer that seems most plausible given the conversational thread. ChatGPT and similar bots will use the first person because they are trained on human writing. But they have no consistent sense of self and can change their claimed beliefs or experiences in an instant. OpenAI also uses feedback from humans to guide a model toward producing answers that people judge as more coherent and correct, which may make the model provide answers deemed more satisfying regardless of how accurate they are.
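That human-feedback step is typically implemented by training a reward model on pairs of answers that people have ranked. The sketch below shows the standard pairwise preference loss often used for this in the research literature; it is a common formulation, not OpenAI's actual training code.

```python
import math

# Pairwise preference loss for a reward model trained on human rankings.
# The reward model assigns a scalar score to each candidate answer; the
# loss is small when the human-preferred answer scores above the rejected
# one, and large otherwise.
def preference_loss(score_chosen: float, score_rejected: float) -> float:
    # -log sigmoid(r_chosen - r_rejected)
    return -math.log(1 / (1 + math.exp(-(score_chosen - score_rejected))))

print(preference_loss(2.0, 0.5))  # low loss: model agrees with the human
print(preference_loss(0.5, 2.0))  # high loss: model disagrees
```

A language model tuned to maximize this learned reward is pushed toward answers raters prefer, which is precisely why it can end up optimizing for satisfying-sounding answers rather than accurate ones.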
Josh Tenenbaum, a contributor to the January paper and a professor at MIT who studies human cognition and how to explore it using machines, says GPT-4 is remarkable but quite different from human intelligence in a number of ways. For instance, it lacks the kind of motivation that is crucial to the human mind. "It doesn't care if it's turned off," Tenenbaum says. And he says humans do not simply follow their programming but invent new goals for themselves based on their wants and needs.
Tenenbaum says some key engineering shifts happened between GPT-3 and GPT-4 and ChatGPT that made them more capable. For one, the model was trained on large amounts of computer code. He and others have argued that the human brain may use something akin to a computer program to handle some cognitive tasks, so perhaps GPT-4 learned some useful things from the patterns found in code. He also points to the feedback ChatGPT received from humans as a key factor.
But he says the resulting abilities aren't the same as the general intelligence that characterizes human intelligence. "I'm interested in the cognitive capacities that led humans individually and collectively to where we are now, and that's more than just an ability to perform a whole bunch of tasks," he says. "We make the tasks, and we make the machines that solve them."
Tenenbaum also says it isn't clear that future generations of GPT would gain these sorts of capabilities, unless some different techniques are employed. This might mean drawing from areas of AI research that go beyond machine learning. And he says it's important to think carefully about whether we want to engineer systems that way, as doing so could have unforeseen consequences.
Another author of the January paper, Kyle Mahowald, an assistant professor of linguistics at the University of Texas at Austin, says it's a mistake to base any judgments on single examples of GPT-4's abilities. He says tools from cognitive psychology could be useful for gauging the intelligence of such models. But he adds that the challenge is complicated by the opacity of GPT-4. "It matters what is in the training data, and we don't know. If GPT-4 succeeds on some commonsense reasoning tasks for which it was explicitly trained and fails on others for which it wasn't, it's hard to draw conclusions based on that."
Whether GPT-4 can be considered a step toward AGI, then, depends entirely on your perspective. Redefining the term altogether may provide the most satisfying answer. "These days my viewpoint is that this is AGI, in that it is a kind of intelligence and it is general, but we have to be a little bit less, you know, hysterical about what AGI means," says Noah Goodman, an associate professor of psychology, computer science, and linguistics at Stanford University.
Unfortunately, GPT-4 and ChatGPT are designed to resist such easy reframing. They are smart but offer little insight into how or why. What's more, the way humans use language relies on having a mental model of an intelligent entity on the other side of the conversation to interpret the words and ideas being expressed. We can't help but see flickers of intelligence in something that uses language so effortlessly. "If the pattern of words is meaning-carrying, then humans are designed to interpret them as intentional, and accommodate that," Goodman says.
The fact that AI is not like us, and yet seems so intelligent, is still something to marvel at. "We're getting this tremendous amount of raw intelligence without it necessarily coming with an ego-viewpoint, goals, or a sense of coherent self," Goodman says. "That, to me, is just fascinating."