[Summary of Podcast] Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367
https://www.youtube.com/watch?v=L_Guz73e6fw&t=2676s
Part 1:
The Lex Fridman Podcast recently featured a conversation with Sam Altman, CEO of OpenAI. The conversation revolved around the history, possibilities, and challenges of AI. Altman discussed how OpenAI was mocked at its inception in 2015 when it announced it was working on AGI, but the company is now doing important work. He emphasized the critical nature of AI and the danger it poses to society. Altman referred to the "collective intelligence of the human species" as paling in comparison to the "general superintelligence" that AI systems may eventually possess. While the capabilities of AI are exciting, they can also be terrifying: the power AGI holds could lead to the intentional or unintentional destruction of human civilization. Altman believes it is important to have conversations about the power AI holds and to have checks and balances in place.
The conversation also covered GPT-4, ChatGPT, and other AI technologies, which Altman described as among the greatest breakthroughs in the history of AI. The podcast discussed ChatGPT, which uses reinforcement learning from human feedback (RLHF) to align the model with what humans want it to do. The conversation also touched on the training data set for GPT-4, which was assembled from a wide range of sources. Altman discussed how crucial it is to understand how to incorporate human feedback into AI, and how there is an ongoing process of scientific discovery as we continue to better understand the capabilities and limitations of these models.
Part 2:
The conversation in Part 2 continues with a discussion of the wisdom ChatGPT seems to display in interactions with humans, especially across continuous multi-prompt exchanges. They discuss the model's struggles to generate text of a consistent, requested length in response to different prompts. They talk about the model's bias and the importance of building AI in public to get feedback and shape the technology's development. They also talk about the process of ensuring AI safety and the importance of finding better ways to align AI models with human values. They discuss RLHF, a process applied across the entire system in which humans vote on the better of two responses, which helps make a better and more usable system. The conversation then shifts to how GPT-4 can be made more steerable based on the interaction users have with it. They discuss the process of writing and designing a great prompt to steer GPT-4, an art form that is still evolving. They also talk about how GPT-4 is changing the nature of programming.
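The RLHF step described above trains a reward model from those human votes. A minimal sketch of the pairwise preference loss commonly used for this (the specific function name and reward values are illustrative assumptions, not details from the podcast):

```python
import numpy as np

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise (Bradley-Terry style) loss used in RLHF reward modeling.

    The loss is small when the reward model scores the human-preferred
    response above the rejected one, and large when it ranks them the
    wrong way around -- training on many such votes teaches the model
    to predict which response a human would prefer.
    """
    margin = r_chosen - r_rejected
    return -np.log(1.0 / (1.0 + np.exp(-margin)))

# A reward model that agrees with the human vote incurs a low loss...
agrees = preference_loss(r_chosen=2.0, r_rejected=-1.0)
# ...while one that disagrees incurs a high loss.
disagrees = preference_loss(r_chosen=-1.0, r_rejected=2.0)
print(agrees < disagrees)  # True
```

The trained reward model then stands in for the human voter when the language model itself is fine-tuned with reinforcement learning.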
Part 3:
In part 3 of the Lex Fridman podcast with Sam Altman, they discuss the challenges of developing GPT and the need to align AI systems with human preferences and values. They also touch on the difficulty of regulating hate speech and defining what constitutes harmful output. Sam Altman explains that the ultimate goal is for every person on earth to have a thoughtful, deliberative conversation about where to draw the boundaries of the system, and to build a technology that will have a huge impact while still drawing the lines we all agree have to be drawn somewhere. They also talk about the GPT-4 iteration process, in which users can have a back-and-forth dialogue with the system to refine code output, which also helps catch mistakes as they are made. Altman notes that many technical leaps in the base model of GPT-4 make it vastly different from its predecessors. Additionally, they discuss the challenge of treating users like adults and not scolding them while also preventing the system from being used to spread dangerous conspiracy theories.
Part 4:
In part 4 of the podcast, Lex and Sam talk about the incredible complexity of the GPT models and the amount of data and knowledge that goes into creating them. Sam notes that the models compress all of humanity's text output and that it is surprising how much of humanity's knowledge they can reconstruct. He argues that the number of parameters in a model does not necessarily matter; what matters is getting the best performance. They then discuss the potential of large language models to achieve general intelligence and the components a system would need to be considered a superintelligence. Sam believes that a system that cannot add significantly to the sum total of scientific knowledge is not a superintelligence, and that achieving this will require expanding on the GPT paradigm. He notes that he cannot say anything with certainty, as they are deep in unknown territory. Lex and Sam then discuss the potential for AI to take over human jobs; Sam states that it will automate a lot of programming, but that great programmers will always be necessary. Sam believes that AI can deliver extraordinary improvements in quality of life and help people be happier, more fulfilled, and achieve great things. However, he acknowledges that there is some chance superintelligent AI systems could pose a danger to humans, and that it is crucial to discover new techniques to solve this problem. Finally, Sam notes that the exponential improvement of technology makes it challenging to reason about an AI takeoff, and that a tight feedback loop is needed to adjust the philosophy of AI safety.
Part 5:
In this section of the podcast, Sam Altman and Lex Fridman discuss the potential risks associated with AGI and the challenges of prioritizing safety in a market-driven industry. Altman expresses concern that open-source language models deployed without adequate safety controls carry real risks: disinformation problems or economic shocks could result from large-scale deployment of these models, regardless of whether they possess superintelligence. Altman suggests there are many things we can do to try to prevent these risks, including regulatory approaches, using more powerful AI to detect potential problems, and trying many different approaches. Altman also talks about OpenAI's unusual structure and how it enables the organization to stick to its mission and resist pressures from other companies. He discusses OpenAI's transition from a nonprofit to a capped for-profit organization and the benefits of the latter structure. Finally, Altman expresses concern about uncapped companies that are playing with AGI and their potential to pose significant risks.
Part 6:
In this section of the interview, Lex asks Sam about the fear that a few powerful people could control AGI and whether he thinks he could be corrupted by that kind of power. Sam responds that he does worry about it and that he thinks decisions about AGI should be increasingly democratic over time. Sam also acknowledges that AGI has a lot of power and that there are only a few teams working on it currently. Lex asks Sam for his honest opinion about how they are doing so far, and Sam responds that he thinks OpenAI is doing well, but there is always room for improvement.
Lex brings up Elon Musk and asks Sam what he admires about him. Sam acknowledges that Elon has driven the world forward in important ways, such as with electric vehicles and space exploration. However, he also wishes that Elon would do more to look at the hard work OpenAI is doing to get AGI right.
Finally, Lex asks Sam about bias in AI models and whether the bias of employees can affect the bias of the system. Sam responds that the bias he is most nervous about is the bias of human feedback raters and that OpenAI is still figuring out how to select those people in a way that is representative and not biased. Sam also emphasizes the importance of empathizing with the experiences of different groups of people when designing these rating tasks.
Part 7:
In this part of the podcast, Sam Altman discusses his concerns about the impact AGI will have on society and his efforts to understand the perspective of users. Altman mentions that he believes charisma is a dangerous thing, and that flaws in communication style are, in general, a feature rather than a bug. He also notes that he is disconnected from the reality of life for most people and wants to empathize with them. Altman shares his thoughts on the impact AI will have on jobs and believes customer service is a category that could be replaced. He also believes that universal basic income (UBI) is a component of something we should pursue, but not a full solution. Altman argues that economic transformation will drive much of the political transformation, and predicts that the cost of intelligence and energy will fall dramatically over the next couple of decades. He also thinks there will be systems resembling something like democratic socialism that will reallocate resources to support people who are struggling. Altman then discusses his views on centralized planning and the control problem of AGI. He believes that hard uncertainty and humility must be engineered into AGI, and he is a fan of the off switch.
Part 8:
In this part of the podcast, Lex and Sam discuss truth and the challenges of creating a GPT-like model that can contend with it. They explore different types of truth, such as the truth found in math and physics, historical facts, and sticky narratives. They also discuss the harm that can be caused by some scientific truths, such as group differences in IQ, and the responsibility of companies like OpenAI to minimize the harm caused by their tools.
They then discuss the process of going from idea to deployment that allows OpenAI to be so successful at shipping AI-based products. They also discuss the hiring process and the thinking that went into Microsoft's multi-billion-dollar investment in OpenAI. They explore the pros and cons of working with a company like Microsoft and discuss how Microsoft CEO Satya Nadella was able to transform the company into a fresh, innovative, developer-friendly one.
Part 9:
In the last part of the podcast, Sam Altman discusses the recent run on Silicon Valley Bank (SVB), the fragility of the economic system, and the potential impact of AGI on society. He suggests that a statutory change is needed to guarantee deposits, so depositors don't have to doubt the security of their money. He further discusses the impact of the SVB bank run on startups, which revealed the fragility of the economic system. Altman expresses concern about the speed of change and the ability of institutions to adapt, highlighting the importance of deploying AGI systems early, while they are still weak. Altman expresses his excitement at the possibility of AGI solving mysteries of physics, providing a better estimate of the Drake equation, and exploring the existence of other intelligent alien civilizations. He advises young people to be cautious about taking advice from others, and to determine for themselves what will bring them joy and fulfillment. Altman believes that the meaning of life is a complicated question that AGI might help to answer, as it feels like humanity was always moving toward creating such a system.
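The Drake equation Altman mentions estimates the number of detectable civilizations in our galaxy as a product of seven factors. A minimal sketch of the calculation; the factor values below are purely illustrative placeholders, not estimates from the podcast (the later factors are exactly the deeply uncertain ones he hopes AGI could help pin down):

```python
def drake_estimate(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: N = R* . fp . ne . fl . fi . fc . L, where
    R*  = rate of star formation in the galaxy (stars/year),
    fp  = fraction of stars with planets,
    ne  = habitable planets per star with planets,
    fl  = fraction of those where life arises,
    fi  = fraction of those developing intelligence,
    fc  = fraction of those producing detectable signals,
    L   = years such a civilization remains detectable."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative placeholder values only.
N = drake_estimate(R_star=1.0, f_p=0.5, n_e=2.0,
                   f_l=0.1, f_i=0.01, f_c=0.1, L=1000)
print(N)  # 0.1
```

Because the estimate is a straight product, tightening any single factor (say, the fraction of planets where life arises) scales the final answer linearly, which is why better data on even one term would sharpen the whole estimate.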