OpenAI’s GPT-3 Algorithm Is Now Producing Billions of Words a Day – Singularity Hub

When OpenAI released its huge natural-language algorithm GPT-3 last summer, jaws dropped. Coders and developers with special access to an early API rapidly discovered new (and unexpected) things GPT-3 could do with naught but a prompt. It wrote passable poetry, produced decent code, calculated simple sums, and with some edits, penned news articles.

All this, it turns out, was just the beginning. In a recent blog post update, OpenAI said that tens of thousands of developers are now making apps on the GPT-3 platform.

Over 300 apps (and counting) use GPT-3, and the algorithm is generating 4.5 billion words a day for them.

Obviously, that's a lot of words. But to get a handle on how many, let's try a little back-of-the-napkin math.

Each month, users publish about 70 million posts on WordPress, which is, hands down, the dominant content management system online.

Assuming an average article is 800 words long (which is speculation on my part, but not super long or short), people are churning out some 56 billion words a month, or roughly 1.8 billion words a day, on WordPress.

If our average word count assumption is in the ballpark, then GPT-3 is producing over twice the daily word count of WordPress posts. Even if you make the average more like 2,000 words per article (which seems high to me), the two are roughly equivalent.
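For anyone who wants to check the napkin math, here's a minimal sketch in Python. The 70-million-posts, 800-words-per-post, and 4.5-billion-words figures come from the article; the 30-day month is my own rounding assumption.

```python
# Back-of-the-napkin comparison: GPT-3's daily output vs. WordPress publishing volume.
# Post and word-count figures are from the article; the 30-day month is an assumption.

wordpress_posts_per_month = 70_000_000
words_per_post = 800                      # speculative average, per the article
gpt3_words_per_day = 4_500_000_000

wordpress_words_per_month = wordpress_posts_per_month * words_per_post   # ~56 billion
wordpress_words_per_day = wordpress_words_per_month / 30                 # ~1.87 billion

print(f"WordPress: ~{wordpress_words_per_day / 1e9:.2f} billion words/day")
print(f"GPT-3 output is ~{gpt3_words_per_day / wordpress_words_per_day:.1f}x WordPress's daily total")
```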

Now, not every word GPT-3 produces is a word worth reading, and it's not necessarily producing blog posts (more on applications below). But in either case, just nine months in, GPT-3's output seems to foreshadow a looming torrent of algorithmic content.

So, how exactly are all those words being used? Just as the initial burst of activity suggested, developers are building a range of apps around GPT-3.

Viable, for example, surfaces themes in customer feedback (surveys, reviews, and help desk tickets, for instance) and provides short summaries for companies aiming to improve their services. Fable Studio is bringing virtual characters in interactive stories to life with GPT-3-generated dialogue. And Algolia uses GPT-3 to power an advanced search tool.

In lieu of code, developers use "prompt programming" by providing GPT-3 a few examples of the kind of output they're hoping to generate. More advanced users can fine-tune things by giving the algorithm data sets of examples or even human feedback.
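As a rough illustration of what prompt programming looks like in practice, here's a minimal sketch using the OpenAI Python client as it existed at the time. The engine name, the toy sentiment task, and the example reviews are my own illustrative assumptions, not anything drawn from the article.

```python
# Minimal few-shot "prompt programming" sketch against the GPT-3 API
# (openai Python library, pre-1.0 interface). The engine name and the
# toy sentiment-classification task are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"

prompt = """Classify the sentiment of each review.

Review: The checkout process was painless and fast.
Sentiment: positive

Review: My order arrived broken and support never replied.
Sentiment: negative

Review: The new dashboard is confusing, but the mobile app works well.
Sentiment:"""

response = openai.Completion.create(
    engine="davinci",      # base GPT-3 engine name at the time (assumption)
    prompt=prompt,
    max_tokens=5,
    temperature=0.0,       # keep the classification output deterministic-ish
    stop="\n",
)

print(response.choices[0].text.strip())
```

The key idea is that the few labeled examples in the prompt stand in for training code: change the examples and you change the "program."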

In this respect, GPT-3 (and other similar algorithms) may hasten the adoption of machine learning in natural language processing (NLP). Whereas working with machine learning algorithms has previously involved a steep learning curve, OpenAI says many in the GPT-3 developer community have no background in AI or programming.

"It's almost this new interface for working with computers," Greg Brockman, OpenAI's chief technology officer and co-founder, told Nature in an article earlier this month.

OpenAI licensed GPT-3 to Microsoft (which invested a billion dollars in OpenAI in return for such partnerships) but hasn't released the code publicly.

The company argues monetizing their machine learning products helps fund their greater mission. In addition, they say they're able to control how the technology is being used by strictly gating access to it with an API.

One worry, for example, is that advanced natural-language algorithms like GPT-3 could supercharge online disinformation. Another is that large-scale algorithms also contain built-in biases, and it takes a lot of care and attention to limit their effects.

At the peak of the initial frenzy, OpenAI CEO Sam Altman tweeted, "The GPT-3 hype is way too much. It's impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes."

Deep learning algorithms lack common sense or contextual awareness. So, of course, with the right prompt, GPT-3 has readily parroted the online ugliness that was part of its training data set.

To tackle these issues, OpenAI vets developers and applications prior to granting access to GPT-3. They've also created guidelines for developers, are working on tools to identify and mitigate bias, and require that processes and people are in place to monitor apps for bad behavior.

Whether these safeguards will be enough as access to GPT-3 scales remains to be seen.

Researchers would love to give algorithms a degree of common sense, an understanding of cause and effect, and moral judgement. "What we have today is essentially a mouth without a brain," Yejin Choi, a computer scientist at the University of Washington and the Allen Institute for AI, told Nature.

As long as these qualities remain out of reach, researchers and GPT-3's human handlers will have to work hard to ensure benefits outweigh risks.

Not everyone agrees with the walled garden approach.

Eleuther, a project aiming to build an open source competitor to GPT-3, released its latest model, GPT-Neo, last week. The project uses OpenAI's papers on GPT-3 as a starting point for its algorithms and is training them on distributed computing resources donated by cloud computing company CoreWeave and Google.

They've also created a meticulously curated training data set called the Pile. Eleuther cofounder Connor Leahy told Wired the project has gone to great lengths over the months to curate this data set, make sure it was both well filtered and diverse, and document its shortcomings and biases.

GPT-Neo's performance can't yet match GPT-3's, but it is on par with GPT-3's least advanced version, according to Wired. Meanwhile, other open source projects are also in the works.
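Because GPT-Neo's weights are openly available, anyone can try the model locally. Here's a minimal sketch using the Hugging Face transformers library; the specific checkpoint name, prompt, and generation settings are assumptions on my part rather than anything specified in the article.

```python
# Minimal sketch: generating text with EleutherAI's open-source GPT-Neo
# via Hugging Face transformers. Checkpoint name, prompt, and generation
# settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-1.3B"    # other checkpoint sizes also exist
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Open source language models matter because"
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # GPT-Neo has no dedicated pad token
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```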

"There is tremendous excitement right now for open source NLP and for producing useful models outside of big tech companies," said Alexander Rush, a Cornell University computer science professor. "There is something akin to an NLP space race going on."

The risks of open source remain: Once the code is in the wild, there's no going back, no controlling how it's used.

But Rush argues developing algorithms in the open allows researchers outside big companies to study them, warts and all, and solve problems.

Open source or not, GPT-3 wont be alone for long. Google Brain, for example, recently announced their own huge natural-language model, weighing in at 1.6 trillion parameters.

In a recent TechCrunch article, Oren Etzioni, CEO of the Allen Institute for AI, and venture investor Matt McIlwain wrote that they expect GPT-3 and the addition of other large-scale natural-language algorithms to bring about more accessibility and lower costs.

And in particular, they see prompt programming as a significant shift.

Text, Etzioni and McIlwain wrote, may increasingly become the new command line, a universal translator of sorts that allows the codeless to tap into machine learning and bring new ideas to life: "We think this will empower a whole new generation of creators, with trillions of parameters at their fingertips, in an entirely low-code/no-code way."

Machines, it would seem, are about to get an awful lot chattier. And we've got our work cut out for us to make sure the conversation's meaningful.

Image Credit: Emil Widlund / Unsplash
