While AI is still a developing technology and not without its limitations, robotic world domination is far from something we need to fear, writes Bappa Sinha.
THE UNPRECEDENTED popularity of ChatGPT has turbocharged the artificial intelligence (AI) hype machine. We are bombarded daily by news articles announcing AI as humankind's greatest invention. AI is qualitatively different, transformational, revolutionary and will change everything, they say.
OpenAI, the company behind ChatGPT, has announced GPT-4, a major upgrade of the technology behind ChatGPT. Already, Microsoft researchers are claiming that GPT-4 shows 'sparks of artificial general intelligence', or human-like intelligence: the holy grail of AI research. Fantastic claims are made about reaching the point of AI 'singularity', of machines equalling and surpassing human intelligence.
The business press talks about hundreds of millions of job losses as AI replaces humans in a whole host of professions. Others worry about a sci-fi-like near future in which super-intelligent AI goes rogue and destroys or enslaves humankind. Are these predictions grounded in reality, or are they just the over-the-top hype that the tech industry and the venture capitalist hype machine are so good at selling?
The current breed of AI models is based on so-called neural networks. While the term 'neural' conjures up images of an artificial brain simulated using computer chips, these networks bear no real similarity to the network of neurons in the human brain. The terminology was, however, a major reason artificial neural networks became popular and widely adopted despite their serious limitations and flaws.
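The point can be made concrete with a minimal sketch (purely illustrative, not any production architecture): an artificial 'neuron' is nothing more than a weighted sum of its inputs passed through a simple nonlinearity.

```python
import random

random.seed(0)

# A minimal two-layer "neural network". Despite the name, each "neuron"
# is nothing like a biological one: it just computes a weighted sum of
# its inputs and passes it through a simple nonlinearity (here, ReLU).
W1 = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]  # hidden layer weights
W2 = [random.gauss(0, 1) for _ in range(4)]                      # output layer weights

def relu(z):
    return max(0.0, z)  # the "activation": clip negatives to zero

def forward(x):
    hidden = [relu(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))  # a single output number

print(forward([1.0, 2.0, 3.0]))
```

Everything a large model does is built from billions of such weighted sums; the 'neural' in the name is metaphor, not biology.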
The machine learning algorithms currently in use are an extension of statistical methods, an extension that lacks theoretical justification. Traditional statistical methods have the virtue of simplicity: it is easy to understand what they do, and when and why they work. They come with mathematical assurances that the results of their analysis are meaningful, assuming very specific conditions hold.
Since the real world is complicated, those conditions never hold. As a result, statistical predictions are seldom accurate. Economists, epidemiologists and statisticians acknowledge this, then use intuition to apply statistics to get approximate guidance for specific purposes in specific contexts.
These caveats are often overlooked, leading to the misuse of traditional statistical methods, sometimes with catastrophic consequences, as in the 2008 Global Financial Crisis or the 1998 Long-Term Capital Management blowup, which almost brought down the global financial system. Remember Mark Twain's famous quip: 'Lies, damned lies and statistics.'
Machine learning relies on the complete abandonment of the caution that should accompany the judicious use of statistical methods. The real world is messy and chaotic, hence impossible to model using traditional statistical methods. So the answer from the world of AI is to drop any pretence of theoretical justification for why and how these AI models, which are many orders of magnitude more complicated than traditional statistical methods, should work.
Freedom from these principled constraints makes the AI models more powerful. They are, in effect, elaborate and complicated curve-fitting exercises that empirically fit observed data without our understanding the underlying relationships.
But it's also true that these AI models can sometimes do things that no other technology can do at all. Some outputs are astonishing, such as the passages ChatGPT can generate or the images that DALL-E can create. This is fantastic for wowing people and creating hype. The reason they work so well is the mind-boggling quantity of training data: enough to cover almost all text and images created by humans.
Even with this scale of training data and billions of parameters, the AI models don't work spontaneously; they require kludgy, ad hoc workarounds to produce desirable results.
Even with all the hacks, the models often develop spurious correlations: in other words, they work for the wrong reasons. For example, it has been reported that many vision models work by exploiting correlations pertaining to image texture, background, angle of the photograph and other incidental features. These vision models then give bad results in uncontrolled situations.
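A caricature of such 'shortcut learning' is sketched below. This is purely illustrative; real vision models fail in analogous but far subtler ways.

```python
# Toy "shortcut": suppose every leopard photo in the training data had
# a spotted texture, so the learned rule keys on the texture rather
# than the animal. (A caricature, not a real model.)
def classify(image):
    return "leopard" if image["texture_spots"] > 0.5 else "not leopard"

leopard_photo = {"texture_spots": 0.9, "has_leopard": True}
sofa_photo = {"texture_spots": 0.9, "has_leopard": False}  # leopard-print sofa

print(classify(leopard_photo))  # right answer, for the wrong reason
print(classify(sofa_photo))     # wrong answer: the shortcut betrays itself
```

The rule scores perfectly on data resembling its training set and fails the moment the spurious cue and the true label come apart.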
For example, a leopard-print sofa gets identified as a leopard. The models also fail when a tiny amount of fixed-pattern noise, undetectable by humans, is added to the images, or when the images are rotated, as with a post-accident upside-down car. And ChatGPT, for all its impressive prose, poetry and essays, is unable to do simple multiplication of two large numbers, which a calculator from the 1970s can do easily.
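The contrast with deterministic computation is stark. Exact multiplication of arbitrarily large integers is trivial for a conventional algorithm (Python's built-in integers have arbitrary precision), whereas a statistical text predictor can only guess at the digits.

```python
# Exact arithmetic is a solved problem: deterministic digit-by-digit
# algorithms, of the kind calculators have used since the 1970s,
# never have to guess.
a = 123456789123456789
b = 987654321987654321
product = a * b
print(product)  # 121932631356500531347203169112635269
```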
The AI models have no human-like understanding but are great at mimicry, fooling people into believing they are intelligent by parroting the vast trove of text they have ingested. For this reason, computational linguist Emily Bender called large language models such as ChatGPT and Google's Bard and BERT 'stochastic parrots' in a 2021 paper. Her Google co-authors, Timnit Gebru and Margaret Mitchell, were asked to take their names off the paper. When they refused, they were fired by Google.
This criticism applies not just to the current large language models but to the entire paradigm of trying to develop artificial intelligence this way. We don't get good at things just by reading about them; that comes from practice, from seeing what works and what doesn't. This is true even for purely intellectual tasks such as reading and writing. Even in formal disciplines such as maths, one can't get good without practising.
These AI models have no purpose of their own. They, therefore, can't understand meaning or produce meaningful text or images. Many AI critics have argued that real intelligence requires social situatedness.
Doing physical things in the real world requires dealing with complexity, non-linearity and chaos. It also involves practice in actually doing those things. It is for this reason that progress in robotics has been exceedingly slow. Current robots can only handle fixed, repetitive tasks involving identical rigid objects, as on an assembly line. Even after years of hype about driverless cars and vast amounts of funding for research into them, fully automated driving still doesn't appear feasible in the near future.
Current AI development, based on detecting statistical correlations using neural networks treated as black boxes, promotes a pseudoscientific myth of creating intelligence at the cost of a scientific understanding of how and why these networks work. Instead, the emphasis is on spectacle: impressive demos and high scores on standardised tests based on memorised data.
The only significant commercial use case of the current versions of AI is advertising: targeting buyers on social media and video streaming platforms. This does not require the high degree of reliability demanded of other engineering solutions; it just needs to be good enough. Bad outputs, such as the propagation of fake news and the creation of hate-filled filter bubbles, largely go unpunished.
Perhaps the silver lining in all this is that, given the bleak prospects of AI singularity, the fear of super-intelligent malicious AIs destroying humankind is overblown. However, that is little comfort for those at the receiving end of AI decision systems. We already have numerous examples, the world over, of AI decision systems denying people legitimate insurance claims, medical and hospitalisation benefits, and state welfare benefits.
AI systems in the United States have been implicated in sentencing minorities to longer prison terms. There have even been reports of parental rights being withdrawn from minority parents based on spurious statistical correlations, which often boil down to their not having enough money to properly feed and take care of their children. And, of course, AI systems have been implicated in fostering hate speech on social media.
As noted linguist Noam Chomsky wrote in a recent article:
'ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation.'
Bappa Sinha is a veteran technologist interested in the impact of technology on society and politics.
This article was produced by Globetrotter.
Read the original: 'Fears of artificial intelligence overblown', Independent Australia.