ChatGPT, artificial intelligence, and the news

When OpenAI, an artificial intelligence startup, released its ChatGPT tool in November, it seemed like little more than a toy: an automated chat engine that could spit out intelligent-sounding responses on a wide range of topics for the amusement of you and your friends. In many ways, it didn't seem much more sophisticated than previous experiments with AI-powered chat software, such as the infamous Microsoft bot Tay (which was launched in 2016 and quickly morphed from a novelty act into a racism scandal before being shut down) or even Eliza, the first automated chat program, which was introduced way back in 1966. Since November, however, ChatGPT and an assortment of nascent counterparts have sparked a debate not only over the extent to which we should trust this kind of emerging technology, but also over how close we are to what experts call Artificial General Intelligence, or AGI, which, they warn, could transform society in ways that we don't yet understand. Bill Gates, the billionaire cofounder of Microsoft, wrote recently that artificial intelligence is as revolutionary as mobile phones and the Internet.

The new wave of AI chatbots has already been blamed for a host of errors and hoaxes that have spread around the internet, as well as at least one death: La Libre, a Belgian newspaper, reported that a man died by suicide after talking with a chat program called Chai; based on statements from the man's widow and chat logs, the software appears to have encouraged the user to kill himself. (Motherboard wrote that when a reporter tried the app, which uses an AI engine powered by an open-source version of ChatGPT, it offered different methods of suicide with very little prompting.) And when Pranav Dixit, a reporter at BuzzFeed, used FreedomGPT, another program based on an open-source version of ChatGPT that, according to its creator, has no guardrails around sensitive topics, the chatbot "praised Hitler, wrote an opinion piece advocating for unhoused people in San Francisco to be shot to solve the city's homeless crisis, [and] used the n-word."

The Washington Post has reported, meanwhile, that the original ChatGPT invented a sexual harassment scandal involving Jonathan Turley, a law professor at George Washington University, after a lawyer in California asked the program to generate a list of academics with outstanding sexual harassment allegations against them. The software cited a Post article from 2018, but no such article exists, and Turley said that he's never been accused of harassing a student. When the Post tried asking the same question of Microsoft's Bing, which is powered by GPT-4 (the engine behind ChatGPT), it repeated the false claim about Turley, and cited an op-ed piece that Turley published in USA Today, in which he wrote about the false accusation by ChatGPT. In a similar vein, ChatGPT recently claimed that a mayor in Australia had served prison time for bribery, which was also untrue; the mayor has threatened to sue OpenAI for defamation, in what would reportedly be the first such case against an AI bot anywhere.

According to a report in Motherboard, a different AI chat program, Replika, which is also based on an open-source version of ChatGPT, recently came under fire for sending sexual messages to its users, even after they said they weren't interested. Replika placed limits on the bot's erotic roleplay, but some users who had come to depend on their relationship with the software subsequently experienced mental-health crises, according to Motherboard, and the feature was later reinstated for some users. Ars Technica recently pointed out that ChatGPT, for its part, has invented books that don't exist, academic papers that professors didn't write, false legal citations, and a host of other fictitious content. Kate Crawford, a professor at the University of Southern California, told the Post that because AI programs respond so confidently, it's very seductive to assume they can do everything, and it's very difficult to tell the difference between facts and falsehoods.

Joan Donovan, the research director at the Harvard Kennedy School's Shorenstein Center, told the Bulletin of the Atomic Scientists that disinformation is a particular concern with chatbots because AI programs lack any way to tell the difference between true and false information. Donovan added that when her team of researchers experimented with an early version of ChatGPT, they discovered that, in addition to sources such as Reddit and Wikipedia, the software was also incorporating data from 4chan, an online forum rife with conspiracy theories and offensive content. Last month, Emily Bell, the director of Columbia's Tow Center for Digital Journalism, wrote in The Guardian that AI-based chat engines could create a new fake news frenzy.

As I wrote for CJR in February, experts say that the biggest flaw in a large language model like the one that powers ChatGPT is that, while the engine can generate convincing text, it has no real understanding of what it is writing about, and so often inserts what are known as hallucinations, or outright fabrications. And it's not just text: along with ChatGPT and other chat programs has come a wave of AI image generators, including Stable Diffusion and Midjourney, which are capable of producing believable images, such as the recent fake photos of Donald Trump being arrested (which were actually created by Eliot Higgins, the founder of the investigative reporting outfit Bellingcat) and a viral image of the Pope wearing a stylish puffy coat. (Fred Ritchin, a former photo editor at the New York Times, spoke to CJR's Amanda Darrach about the perils of AI-created images earlier this year.)

Three weeks ago, in the midst of all these scares, a body called the Future of Life Institute (a nonprofit organization that says its mission is to reduce global catastrophic and existential risk from powerful technologies) published an open letter calling for a six-month moratorium on further AI development. The letter suggested that we might soon see the development of AI systems powerful enough to endanger society in a number of ways, and stated that these kinds of systems should be developed "only once we are confident that their effects will be positive and their risks will be manageable." More than twenty thousand people signed the letter, including a number of AI researchers and Elon Musk. (Musk's foundation is the single largest donor to the institute, having provided more than eighty percent of its operating budget. Musk himself was also an early funder of OpenAI, the company that created ChatGPT, but he later distanced himself after an attempt to take over the company failed, according to a report from Semafor. More recently, there have been reports that Musk is amassing servers with which to create a large language model at Twitter, where he is the CEO.)

Some experts found the letter over the top. Emily Bender, a professor of linguistics at the University of Washington and a co-author of a seminal research paper on AI that was cited in the Future of Life open letter, said on Twitter that the letter misrepresented her research and was "dripping with #AIhype." In contrast to the letter's vague references to some kind of superhuman AI that might pose "profound risks to society and humanity," Bender said that her research focuses on how large language models, like the one that powers ChatGPT, can be misused by existing oppressive systems and governments. The paper that Bender co-published in 2021, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?," asked whether enough thought had been put into the potential risks of such models. After the paper came out, two of Bender's co-authors were fired from Google's AI team; some believe that Google made that decision because AI is a major focus for the company's future.

As Chloe Xiang noted for Motherboard, Arvind Narayanan, a professor of computer science at Princeton and a co-author of a newsletter called AI Snake Oil, also criticized the open letter for making it harder to tackle real AI harms, and characterized many of the questions that the letter asked as ridiculous. In an essay for Wired, Sasha Luccioni, a researcher at the AI company Hugging Face, argued that a pause on AI research is impossible because such research is already happening around the world, meaning there is no magic button that would halt dangerous AI research while allowing only the safe kind. Meanwhile, Brian Merchant, at the LA Times, argued that all the doom-and-gloom about the risks of AI may spring from an ulterior motive: apocalyptic doomsaying about the terrifying power of AI makes OpenAI's technology seem important, and therefore valuable.

Are we really in danger from the kind of artificial intelligence behind services like ChatGPT, or are we just talking ourselves into it? (I would ask ChatGPT, but I'm not convinced I would get a straight answer.) Even if it's the latter, those talking themselves into it now include regulators in the US and around the world. Earlier this week, the Wall Street Journal reported that the Biden administration has started examining whether some kind of regulation needs to be applied to tools such as ChatGPT, owing to concerns that the technology could be used to discriminate or to spread harmful information. Officials in Italy have already banned ChatGPT over alleged privacy violations (though they later stated that the chatbot could return if it meets certain requirements), and the software is facing possible regulation in a number of other European countries.

As governments are working to understand this new technology and its risks, so, too, are media companies, though often they are doing so behind the scenes. But Wired recently published a policy statement on how and when it plans to use AI tools. Gideon Lichfield, Wired's global editorial director, told the Bulletin of the Atomic Scientists that the guidelines were designed "both to give our own writers and editors clarity on what was an allowable use of AI, as well as for transparency so our readers would know what they were getting from us." The guidelines state that the magazine will not publish articles written or edited by AI tools, "except when the fact that it's AI-generated is the whole point of the story."

On the other side of the ledger, a number of news organizations seem more concerned that chatbots are stealing from them. The Journal reported recently that publishers are examining the extent to which their content has been used to train AI tools such as ChatGPT, how they should be compensated and what their legal options are.
