Meet GPT-3. It Has Learned to Code (and Blog and Argue).

Before asking GPT-3 to generate new text, you can focus it on particular patterns it may have learned during its training, priming the system for certain tasks. You can feed it descriptions of smartphone apps and the matching Figma code. Or you can show it reams of human dialogue. Then, when you start typing, it will complete the sequence in a more specific way. If you prime it with dialogue, for instance, it will start chatting with you.
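
As a rough illustration of what that priming looks like in practice, here is a minimal sketch using the legacy Python client for OpenAI's completions API (openai<1.0, the version in use around GPT-3's launch). The engine name, API key and prompt are placeholders for illustration, not code from the article:

```python
import openai  # legacy client, openai<1.0

openai.api_key = "YOUR_API_KEY"  # placeholder

# A few turns of dialogue prime the model; it continues the pattern.
prompt = (
    "Human: Hello, who are you?\n"
    "AI: I'm an AI built to chat. What would you like to talk about?\n"
    "Human: What's the tallest mountain on Earth?\n"
    "AI:"
)

response = openai.Completion.create(
    engine="davinci",    # the original GPT-3 engine name; illustrative
    prompt=prompt,
    max_tokens=60,
    temperature=0.7,
    stop=["Human:"],     # cut off before the model writes the next human turn
)
print(response["choices"][0]["text"].strip())
```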

"It has this emergent quality," said Dario Amodei, vice president for research at OpenAI. "It has some ability to recognize the pattern that you gave it and complete the story, give another example."

Previous language models worked in similar ways. But GPT-3 can do things that previous models could not, like write its own computer code. And, perhaps more important, you can prime it for specific tasks using just a few examples, as opposed to the thousands of examples and several hours of additional training required by its predecessors. Researchers call this "few-shot learning," and they believe GPT-3 is the first real example of what could be a powerful phenomenon.
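
To make the distinction concrete: few-shot priming is nothing more than placing a handful of worked examples in the prompt itself, with no gradient updates or retraining. A hedged sketch, reusing the same illustrative legacy-client call (the task and examples are invented here):

```python
import openai  # legacy client, openai<1.0

openai.api_key = "YOUR_API_KEY"  # placeholder

# Three worked examples in the prompt itself -- no retraining, no fine-tuning.
prompt = (
    "Correct the grammar of each sentence.\n"
    "Original: she no went to the market.\n"
    "Corrected: She didn't go to the market.\n"
    "Original: him and me was late.\n"
    "Corrected: He and I were late.\n"
    "Original: the cats is hungry.\n"
    "Corrected:"
)

response = openai.Completion.create(
    engine="davinci",   # original GPT-3 engine name; illustrative
    prompt=prompt,
    max_tokens=30,
    temperature=0.0,    # low randomness suits a correction task
    stop=["\n"],        # one corrected sentence, then stop
)
print(response["choices"][0]["text"].strip())
# expected: something like "The cats are hungry."
```

The point is that the "training" lives entirely in the prompt: swap in different worked examples and the same model becomes a translator, a summarizer or a chatbot.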

"It exhibits a capability that no one thought possible," said Ilya Sutskever, OpenAI's chief scientist and a key figure in the rise of artificial intelligence technologies over the past decade. "Any layperson can take this model and provide these examples in about five minutes and get useful behavior out of it."

This is both a blessing and a curse.

OpenAI plans to sell access to GPT-3 via the internet, turning it into a widely used commercial product, and this year it made the system available to a limited number of beta testers through their web browsers. Not long after, Jerome Pesenti, who leads the Facebook A.I. lab, called GPT-3 "unsafe," pointing to sexist, racist and otherwise toxic language the system generated when asked to discuss women, Black people, Jews and the Holocaust.

With systems like GPT-3, the problem is endemic. Everyday language is inherently biased and often hateful, particularly on the internet. Because GPT-3 learns from such language, it, too, can show bias and hate. And because it learns from internet text that associates atheism with the words "cool" and "correct" and that pairs Islam with "terrorism," GPT-3 does the same thing.

This may be one reason that OpenAI has shared GPT-3 with only a small number of testers. The lab has built filters that warn that toxic language might be coming, but they are merely Band-Aids placed over a problem that no one quite knows how to solve.
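
The article does not describe how those filters work. Purely as a sketch of the general idea, a post-hoc screen might score a completion before displaying it. In the illustration below, score_toxicity is a stand-in for whatever trained classifier a real system would use; the word list and threshold are invented:

```python
# Hypothetical post-hoc screen: score a completion before showing it.
# score_toxicity stands in for a trained classifier; the word list and
# threshold are invented for illustration only.
BLOCKLIST = {"exampleslur", "exampleinsult"}

def score_toxicity(text: str) -> float:
    """Fraction of words that appear on the blocklist."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(word in BLOCKLIST for word in words) / len(words)

def screen(completion: str, threshold: float = 0.02) -> str:
    """Prepend a warning if the completion looks toxic."""
    if score_toxicity(completion) > threshold:
        return "[Warning: this output may contain toxic language.]\n" + completion
    return completion

print(screen("This is a harmless sentence."))
```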
