Ever since the 20th century's earliest theories of artificial intelligence set the world on an apparently irreversible track toward the technology, the great promise of AI (one that's been used to justify that march forward) is that it can help usher in social transformation and lead to human betterment.
With the arrival of so-called generative AI, such as OpenAI's endlessly amusing and problem-riddled ChatGPT, the decades-long slow roll of AI advancement has felt more like a quantum leap forward. That perceived jump has some experts worried about the consequences of moving too quickly toward a world in which machine intelligence, they say, could become an all-powerful, humanity-destroying force à la The Terminator.
But Northeastern experts, including Usama Fayyad, executive director of the Institute for Experiential Artificial Intelligence, maintain that those concerns don't reflect reality; in fact, AI is being integrated in ways that promote and necessitate human involvement, an approach experts have coined "human-in-the-loop."
On Tuesday, April 25, Northeastern will host a symposium of AI experts to discuss a range of topics related to the pace of AI development and how progress is reshaping the workplace, education, health care and many other sectors. Northeastern Global News sat down with Fayyad to learn more about what next week's conference will take up, the upside of generative AI, and broader developments in the space. The conversation has been edited for brevity and clarity.
Generative AI refers to the kind of AI that can, quite simply, generate outputs. Those outputs could be in the form of text like you see in what we call the large language models, such as ChatGPT (a chatbot on top of a large language model), or images, etc. If you are training [the AI] on text, text is what you will get out of it. If you are training it on images, you get images, or modifications of images, out of it. If you are training it on sounds or music, you get music out of it. If you train it on programming code, you get programs out, and so on.
It's also called generative AI because the algorithms have the ability to generate examples on their own; it's part of their training. Researchers would do things like have the algorithm challenge itself through generative adversarial networks, or algorithms that generate adversarial examples that could confuse the system, to help strengthen its training. But since their development, researchers quickly realized that they needed human intervention. So most of these systems, including ChatGPT, actually use and require human intervention. Human beings facilitate a lot of these challenges as part of the training through something called reinforcement learning, a machine learning technique designed to basically improve the system's performance.
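The human-feedback idea can be sketched in a few lines. This is a toy illustration only, not the actual training code behind ChatGPT: the candidate replies, the numeric "human" reward values, and the multiplicative-weights update are all invented for the example, standing in for real annotator feedback and real reinforcement-learning machinery.

```python
import random

# Toy human-in-the-loop reinforcement sketch: a "model" holds weights over
# candidate replies, a simulated human scores each sampled reply, and the
# update shifts probability mass toward well-rated replies.

CANDIDATES = ["helpful answer", "rude answer", "made-up answer"]
# Hypothetical ratings standing in for real human feedback.
HUMAN_REWARD = {"helpful answer": 1.0, "rude answer": -1.0, "made-up answer": -0.5}

def train(steps=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    weights = {c: 1.0 for c in CANDIDATES}
    for _ in range(steps):
        # Sample a reply in proportion to the current weights.
        r = rng.uniform(0, sum(weights.values()))
        for reply in CANDIDATES:
            r -= weights[reply]
            if r <= 0:
                break
        # Reward-weighted update: good replies become more likely.
        weights[reply] = max(weights[reply] * (1.0 + lr * HUMAN_REWARD[reply]), 1e-6)
    return weights

if __name__ == "__main__":
    final = train()
    print(max(final, key=final.get))  # the reply humans rated highest
```

After enough rounds of feedback, the reply the simulated human prefers dominates the distribution, which is the basic intuition behind using human ratings to steer a generative system.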
We are seeing it applied in education, and in higher education in particular. Higher education has taken note, including Northeastern in a very big way, of the fact that these technologies have challenged the way we conduct, for example, standardized testing. Educators have realized that this is just another tool. At Northeastern we have many examples, which we will cover in this upcoming workshop, of people using it in the classroom, be it in the College of Arts, Media and Design for things like [Salvador] Dalí and Lensa AI for images, or in writing classes, English classes, or in engineering.
Just like we transitioned from the slide rule to the calculator, to the computer and then the whole web on your mobile phone, this is another tool, and the proper way to train our students to be ready for the new world is to figure out ways to utilize this technology as a tool.
It's too early to see real-world applications at large scale; the technology is too new. But there are estimates that anywhere from 50-80% (I'm more in the 80% camp) of the tasks done by a knowledge worker can be accelerated by this technology. Not automated, accelerated. If you're a lawyer drafting an agreement, you can have a first draft customized very quickly, but then you have to go in and edit or make changes. If you're a programmer, you can turn out an initial program. But it typically won't work well; it will have errors; it's not customized to the target. Again, a human being, provided they understand what they're doing, can go in and modify it, and save themselves 50-80% of the effort.
It's acceleration, not automation, because we know the technology can hallucinate, in horrible ways, in fact. It can make up stuff; it can try to defend points of view that you ask it to defend; you can make it lie, and you can lie to it and have it believe you.
They call this specific class of technology stochastic parrots, meaning parrots that have, let's say, random variation. And I like the term "parrots" because it correctly describes the fact that they don't understand what they're saying. So they say stuff, and the stuff may sound eloquent, or fluid. That's one of the big points that we try to make: somehow we have learned in society to associate intelligence with eloquence and fluidity, basically someone who says things nicely. But in reality these algorithms are far from intelligent; they are basically doing autocomplete; they are repeating things they've seen before, and often repeating them incorrectly.
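The "autocomplete" point can be made concrete with a toy bigram model. This is a deliberately tiny sketch (real large language models use neural networks over subword tokens, not word counts; the training text here is made up): it can only ever replay word transitions it has seen, with randomness but no understanding.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": count which word follows which in a tiny
# made-up corpus, then autocomplete by sampling from those counts.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)  # every observed word-to-word transition

def autocomplete(word, n=5, seed=1):
    rng = random.Random(seed)
    out = [word]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break  # the parrot has never seen this word lead anywhere
        out.append(rng.choice(nxt))
    return " ".join(out)

print(autocomplete("the"))
```

The output is always locally fluent (every adjacent word pair occurred in the training text) yet the model has no idea what a cat or a mat is, which is exactly the fluency-without-understanding distinction being made here.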
Why do I say all of this? Because it means you need a human-in-the-loop to do this work, because you need to check all of this work. You remove a lot of the repetitive, monotonous work; that's great. You can accelerate it; that's productive. You now can spend your time adding value instead of repeating the boring tasks. All of that I consider positive.
I like to use accounting as a good analogy. What did accounting look like 60-70 years ago? Well, you had to deal with these big ledgers; you had to have nice handwriting; you had to have good addition skills in your head; you had to manually verify numbers and go over sums and apply ratios. Guess what? None of those tasks, none, zero, are relevant today. Now, have we replaced accountants because we've now replaced everything they used to do with something that is faster, better, cheaper, repeatable? No. We actually have more accountants today than in the history of humanity.
What we're doing with this workshop is we're trying to cover the three areas that matter. What is the impact of ChatGPT and generative AI in the classroom, and how should we use it? We bring in folks who are doing this work at Northeastern to provide examples in one panel.
Second, how is the nature of work changing because of these technologies? That will be addressed during another panel where we think about different business applications. We will use the law and health care as the two running examples here.
The third panel is all about responsible use. How does one look out for the ethical traps, and how does one use this technology properly? We start the whole workshop by having one of our faculty members give an overview of what this technology is to help demystify the black box, if you will.
The idea, basically, is to show that not only are we (Northeastern) aware of the technological developments taking place, but that we have some of the top experts in the world leading the way. And we are already using this stuff in the classroom as of last semester. Additionally, we want to communicate that we're here and ready to work with companies and organizations to learn ways to best utilize this technology, and to do so properly and responsibly.
There's plenty of evidence now that ChatGPT has a human-in-the-loop component. Sometimes humans are answering questions, especially when the algorithm gets in trouble. They review the answers and intervene. By the way, this is run-of-the-mill stuff even for the Google search engine. Many people don't know that when they use Google search, the MLR, or machine learning relevance, algorithm that decides which page is relevant to which query gets retrained three or four times a day based primarily on human editorial input. There's a lot of stuff that an algorithm cannot capture, that the stochastic parrot will never understand.
Those concerns are focusing on the wrong things. Let me say a few things. We did go through a bit of a phase transition around 2015 or 2016 with these kinds of technologies. Take handwriting recognition, for example. It had jumps over the years, but it took about 15 years to get there, with many revisions along the way. Speech recognition: the same thing. It took a long time, then it started accelerating; but it still took some time.
With these large language models, in areas like reading comprehension and language completion, we see major jumps that happened with the development of models trained on these large bodies of literature or text. And by the way, what is not talked about a lot is that OpenAI had to spend a lot of money curating that text, making sure it's balanced. If you train a large language model on two documents that cover the same content but reach two different outcomes, how does the algorithm know which one is right? It doesn't. Either a human has to tell it, or it basically defaults to saying, "Whatever I see more frequently must be right." That creates fertile ground for misinformation.
Now, to answer your question about this proposed moratorium. In my mind, it's a little bit silly in its motivations. Many of its proponents come from a camp where they believe we're at risk of an artificial general intelligence; that is very far from true. We're very, very far from even getting close to that. Again, these algorithms don't know what they are doing. Now, we are in a risky zone of misusing it. There was a recent example from Belgium where someone committed suicide after six months of talking to a chatbot that, in the end, was encouraging him to do it. So there are a lot of dangers that we need to contend with. We know there are issues. However, stopping isn't going to make any difference. In fact, if people agreed to stop, only the good actors would; the bad actors would continue on. What we need to do, again, is emphasize the fact that fluency and eloquence are not intelligence. This technology has limitations; let's demystify them. Let's put it to good use so we can realize what the bad uses are. That way we can learn how they should be controlled.
Tanner Stening is a Northeastern Global News reporter. Email him at t.stening@northeastern.edu. Follow him on Twitter @tstening90.
Source: Is 'Generative' AI the Way of the Future? - Northeastern University