Where Did ChatGPT Come From?
ChatGPT has been no overnight success for OpenAI. (I get that some of you would say there’s been no success at all.)
Will Douglas Heaven, a senior editor for AI at MIT Technology Review, has an informative (if your science skills are sharper than mine) and entertaining read about where ChatGPT came from.
ChatGPT’s foundations go back to the ’80s and ’90s, with the development of neural networks (software inspired by the way neurons in animal brains signal one another) and the early language models that let computers make sense of text.
The breakthrough came in 2017, when Google researchers invented the transformer, a kind of neural network that can track where each word or phrase appears in a sequence.
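To make “tracking where each word appears” concrete, here’s a toy sketch in Python (using NumPy) of the two mechanisms a transformer combines: positional encodings, which stamp each token with its place in the sequence, and self-attention, which lets every token weigh every other token. The dimensions, weights, and function names below are mine for illustration, a minimal sketch of the idea rather than anything like GPT’s real code.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal position stamps from the 2017 transformer paper."""
    pos = np.arange(seq_len)[:, None]            # (seq_len, 1) token positions
    i = np.arange(d_model)[None, :]              # (1, d_model) embedding dims
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    enc = np.zeros((seq_len, d_model))
    enc[:, 0::2] = np.sin(angles[:, 0::2])       # even dimensions get sine
    enc[:, 1::2] = np.cos(angles[:, 1::2])       # odd dimensions get cosine
    return enc

def self_attention(x, rng):
    """Single-head scaled dot-product attention over the whole sequence."""
    d = x.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(d)                # every token scores every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: attention weights
    return weights @ V                           # each output mixes the whole sequence

rng = np.random.default_rng(0)
seq_len, d_model = 6, 16                         # a six-token "sentence", 16-dim embeddings
tokens = rng.standard_normal((seq_len, d_model)) # stand-ins for word embeddings
x = tokens + positional_encoding(seq_len, d_model)   # inject word order
print(self_attention(x, rng).shape)              # -> (6, 16)
```

The takeaway: each output row is built from the entire sequence, with word order baked into the input, which is what lets these models handle long-range context.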
OpenAI released GPT-2 in 2019, only months after the original GPT. But it was GPT-2 that created the big buzz.
So much so that OpenAI was apparently concerned people would use GPT-2 “to generate deceptive, biased, or abusive language” and decided not to release the full model.
Times changed with GPT-3, released in 2020. People were blown away by its ability to generate human-like text, answer questions, and tell stories.
Those gains came from supersizing existing techniques rather than developing new technology: train a bigger model on ever more text scraped from the internet. The problem is that a model trained that way picks up a lot of misinformation from the net, and what it delivers can be toxic.
In early 2022, OpenAI trained GPT-3 to cut down on misinformation and toxic text, releasing the result as InstructGPT. The new model was better at following people’s instructions and produced less misinformation and fewer mistakes.
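The recipe behind InstructGPT is reinforcement learning from human feedback: labelers rank the model’s answers, a small reward model learns those preferences, and the language model is then tuned to score well on it. Here’s a toy NumPy sketch of just the reward-modeling step; the linear model, eight-dimensional features, and synthetic “preferences” are all my own stand-ins, not OpenAI’s actual setup.

```python
import numpy as np

def reward(features, w):
    """Toy linear reward model: score one candidate response."""
    return features @ w

def preference_loss(w, chosen, rejected):
    """-log sigmoid(r_chosen - r_rejected): small when the preferred response scores higher."""
    margin = reward(chosen, w) - reward(rejected, w)
    return np.log1p(np.exp(-margin))

rng = np.random.default_rng(0)
w = np.zeros(8)                                  # reward-model weights to learn
lr = 0.1

for _ in range(500):
    # Synthetic "human preference" pair: the chosen response's features are
    # shifted so it really is better on average (a stand-in for labeler rankings).
    chosen = rng.standard_normal(8) + 1.0
    rejected = rng.standard_normal(8)
    margin = reward(chosen, w) - reward(rejected, w)
    grad = -(1.0 / (1.0 + np.exp(margin))) * (chosen - rejected)  # d(loss)/dw
    w -= lr * grad                               # gradient step on the preference loss

# The trained reward model now tends to score "better" responses higher.
test_chosen, test_rejected = rng.standard_normal(8) + 1.0, rng.standard_normal(8)
print(preference_loss(w, test_chosen, test_rejected))  # small loss = preference learned
```

In the full pipeline, that learned score becomes the reinforcement signal for tuning the language model itself, which is where the instruction-following behavior comes from.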
OpenAI’s been as surprised as anyone by how ChatGPT has been received. On the day before it launched in November 2022, ChatGPT was pitched as an “incremental update” to InstructGPT: essentially the same recipe, fine-tuned for dialogue.
As Heaven writes, “OpenAI trained GPT-3 to master the game of conversation and invited everyone to come and play. Millions of us have been playing ever since.”