What I came away with was the sense that OpenAI is still bemused by the success of its research preview, but has grabbed the opportunity to push this technology forward, watching how millions of people are using it and trying to fix the worst problems as they come up.
ChatGPT was viewed in-house as a “research preview,” says Sandhini Agarwal, who works on policy at OpenAI: a tease of a more polished version of a two-year-old technology and, more important, an attempt to iron out some of its flaws by collecting feedback from the public. “We didn’t want to oversell it as a big fundamental advance,” says Liam Fedus, a scientist at OpenAI who worked on ChatGPT.
ChatGPT is a fine-tuned version of GPT-3.5, a large language model also developed by OpenAI. Language models are a type of neural network that has been trained on lots and lots of text. (Neural networks are software inspired by the way neurons in animal brains signal one another.) Because text is made up of sequences of letters and words of varying lengths, language models require a type of neural network that can make sense of that kind of data. Recurrent neural networks, invented in the 1980s, can handle sequences of words, but they are slow to train and can forget previous words in a sequence.
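To make the recurrent idea concrete, here is a minimal sketch of a character-level recurrent language model, forward pass only. Everything in it is assumed for illustration: the tiny vocabulary, the dimensions, and the random (untrained) weights are invented, and this is not how GPT-3.5 or ChatGPT actually works. It only shows the mechanism the article describes: the network reads text one token at a time, carrying all of its memory in a single hidden state, which is why long-ago words can fade.

```python
import numpy as np

# Toy character-level recurrent language model (forward pass only).
# Vocabulary, sizes, and weights are illustrative assumptions, not a real model.

rng = np.random.default_rng(0)

vocab = list("helo ")          # tiny made-up vocabulary
vocab_size = len(vocab)
hidden_size = 8

# Parameters would normally be learned from lots of text; here they are random.
W_xh = rng.normal(scale=0.1, size=(hidden_size, vocab_size))   # input  -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden
W_hy = rng.normal(scale=0.1, size=(vocab_size, hidden_size))   # hidden -> output

def one_hot(ch):
    v = np.zeros(vocab_size)
    v[vocab.index(ch)] = 1.0
    return v

def step(h, ch):
    """Consume one character, update the hidden state, and predict the next one."""
    x = one_hot(ch)
    h = np.tanh(W_xh @ x + W_hh @ h)               # new state mixes input with memory
    logits = W_hy @ h
    probs = np.exp(logits) / np.exp(logits).sum()  # distribution over the next character
    return h, probs

# Process a sequence one token at a time. The hidden state is the model's only
# memory of everything seen so far, so early tokens can effectively be forgotten.
h = np.zeros(hidden_size)
for ch in "hello":
    h, probs = step(h, ch)

print("predicted next-character probabilities:", dict(zip(vocab, probs.round(3))))
```

Transformers, which GPT-style models are built on, replace this step-by-step loop with attention over the whole sequence at once, addressing the slowness and forgetting the article mentions.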
‒ https://www.technologyreview.com/2023/02/08/1068068/chatgpt-is-everywhere-heres-where-it-came-from/