In the sixth episode of his Unconfuse Me podcast, Bill Gates spoke with Sam Altman.
The two, naturally, focused on artificial intelligence, which Gates called a fascinating technology whose rapid development concerns many people. Gates admitted that he had watched OpenAI's work with skepticism: "I didn't expect ChatGPT to be this good; it amazes me."
At the very beginning of the podcast, while introducing his guest, Gates said he was very surprised when Altman was briefly ousted from OpenAI shortly after the episode was recorded. For that reason, the main conversation is preceded by an insert: a later call between Gates and Altman about the events at OpenAI. Remarking that "it was so crazy," Altman found positives in his firing and return to the company: "The team has never felt more productive… it was a real coming-of-age moment for us in a way."
Gates drew attention to how difficult it is to understand how ChatGPT works. In response, Altman drew a parallel with the difficulty of understanding the human brain: in his view, our current understanding of how neural networks work internally is still at a very low level.
"Could you say where in your brain Shakespeare is encoded?" Altman countered. "We don't know," Gates admitted, but he assured his interlocutor that an understanding of how AI works would come within five years.
The conversation also turned to the future of artificial intelligence. Sam Altman outlined the key advances he expects over the next two years: multimodality, including speech input and output, as well as images and eventually video: "Obviously, people want it." But perhaps the most important advance will come in what Altman calls reasoning ability: "Right now, GPT-4 can reason in only extremely limited ways."
Altman also emphasized the importance of improving reliability: "Right now, if you ask GPT-4 most questions 10,000 times, one of the answers will be pretty good, but the model does not always know which one." In addition, Altman sees improved customizability and personalization as important goals for future AI models.
Speaking about possible regulation of AI, Altman pointed to the International Atomic Energy Agency as an example: according to the OpenAI CEO, the IAEA's regulatory work could serve as a model to build on. Altman also took the opportunity to recall his world tour last year, during which he met with many heads of state and received "almost universal support" in discussions of AI regulation.