What Will It Mean to Be Living in a World Where AI Does Almost Everything?
If you’ve been paying attention to artificial intelligence, you’ve likely heard of Anthropic, one of the major players shaping the future of AI. Recently, Dario Amodei, the company’s CEO and co-founder, sat down at the Council on Foreign Relations (CFR) to discuss AI’s rapid development, the risks it poses, and the fundamental questions it forces us to ask about our future. This wasn’t just another tech industry conversation; it was an urgent discussion about AI’s role in everything from national security to economic stability, and even what it means to be human.
Amodei’s journey to founding Anthropic began in late 2020, when he and several colleagues left OpenAI after seeing firsthand the power of scaling AI models. They had observed that increasing computational power and training data led to smooth, predictable improvements in AI capabilities, a pattern known as the scaling hypothesis. More importantly, they recognized that these advances came with enormous risks. Unlike traditional software, where engineers control each function, modern AI systems are grown through training rather than assembled piece by piece, which makes their behavior far harder to predict. With OpenAI shifting its focus, Amodei and his team founded Anthropic in 2021, a company dedicated to developing AI with safety and responsibility at its core. This commitment goes beyond marketing; it is woven into their work, from interpretability research into why AI makes certain decisions to ethical principles that guide AI behavior. They also created a Responsible Scaling Policy, which commits the company to putting stronger security measures in place as its models become more capable.
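To make the scaling hypothesis concrete, here is a minimal Python sketch of the kind of power-law loss curve reported in the scaling-laws literature. The parametric form and constants loosely follow the published Chinchilla fit (Hoffmann et al., 2022); they are illustrative assumptions for this article, not figures from Amodei’s remarks or Anthropic’s research.

```python
# Parametric scaling law: L(N, D) = E + A / N**alpha + B / D**beta
# E is the irreducible loss, N the parameter count, D the training tokens.
# Constants roughly follow the Chinchilla fit and are illustrative only.
E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss under the assumed power-law form."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Loss falls smoothly as model size and data are scaled up together
# (here using the common heuristic of ~20 tokens per parameter).
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n, 20 * n):.3f}")
```

The point of the sketch is the shape of the curve: gains arrive steadily and predictably with scale, which is why Amodei and his colleagues concluded that far more capable models were coming whether or not anyone was prepared for them.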
While AI’s progress is exciting, Amodei warned that we are approaching a tipping point where these technologies could pose serious risks. He described how Anthropic categorizes AI safety levels, similar to the way the biosafety field classifies dangerous pathogens. Right now, we are at AI Safety Level 2 (ASL-2), where AI poses risks comparable to other major technologies. But the next threshold, AI Safety Level 3 (ASL-3), could be reached within a year. At that point, AI could enable individuals with no expertise to develop chemical, biological, radiological, or nuclear weapons simply by following instructions from a model. The challenge is that AI is not like traditional code, where every output can be explicitly controlled. Instead, these systems must be carefully trained to recognize and reject harmful requests while still being useful for legitimate applications. AI safety is an ongoing process that requires constant monitoring, and the uncomfortable reality is that we can never be entirely sure what these models are capable of until they are widely deployed.
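The tiered structure is easier to see as data. Below is a toy Python sketch of a biosafety-style classification; the triggers and safeguards are hypothetical stand-ins paraphrased from the discussion above, not the text of Anthropic’s actual Responsible Scaling Policy.

```python
from dataclasses import dataclass

@dataclass
class SafetyLevel:
    level: int
    trigger: str                 # capability threshold that activates the tier
    safeguards: tuple[str, ...]  # measures required once the tier is reached

# Hypothetical tiers paraphrasing the discussion; not Anthropic's policy text.
ASL_TIERS = (
    SafetyLevel(2, "risks comparable to other major technologies",
                ("standard security practices",
                 "refusal training for harmful requests")),
    SafetyLevel(3, "meaningful uplift toward CBRN weapons development",
                ("hardened protection of model weights",
                 "stricter deployment safeguards")),
)

def required_safeguards(assessed_level: int) -> list[str]:
    """Accumulate every safeguard mandated at or below the assessed tier."""
    return [s for tier in ASL_TIERS
            if tier.level <= assessed_level
            for s in tier.safeguards]

print(required_safeguards(3))  # all ASL-2 and ASL-3 safeguards apply
```

The design point mirrors biosafety levels: each tier’s safeguards stack on top of the ones below it, so classifying a model at a higher tier never relaxes an earlier requirement.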
Beyond security concerns, AI’s impact on employment was another pressing issue. Amodei predicted that within three to six months, AI could be writing 90 percent of computer code, and within a year, essentially all of it. This same pattern could spread across industries, transforming professions that rely on knowledge work. But the biggest danger is not that AI will take every job at once. The real risk is that AI will replace some jobs while leaving others untouched, creating a divided society in which certain groups of workers are displaced while others continue as usual. This kind of uneven disruption could breed resentment and instability, forcing us to rethink the role of work in people’s lives. Amodei believes that the most important question AI forces us to confront is how we define human purpose. If our value is no longer tied to economic productivity, where do we find meaning? He pointed out that activities like chess remain deeply meaningful, even though computers can easily defeat human grandmasters. Similarly, sports, art, and personal ambition can still drive people to excel, even if AI can outperform them in technical skill.
Despite these challenges, Amodei remains optimistic about AI’s potential to improve lives. He is especially excited about its applications in medicine and scientific research. AI could accelerate breakthroughs in treating diseases like cancer, Alzheimer’s, and schizophrenia, helping to solve complex medical problems that have long been out of reach. AI models could significantly reduce the time needed for drug discovery and clinical trials, making life-saving treatments available much faster. If managed correctly, AI could drive an economic boom, increasing productivity at an unprecedented scale. However, Amodei cautioned that whether this future leads to a fairer world or deepens inequality depends on the policies we put in place now. Governments and institutions need to actively shape AI’s role in society rather than passively reacting to its disruptions.
One of the most thought-provoking moments in the discussion came when Amodei reflected on the future of humanity in an AI-dominated world. If AI surpasses human intelligence in nearly every area, what does it mean to be human? His answer focused on relationships and ambition. Human life has always been defined by the complexity of our relationships, the struggles we face, and the drive to achieve something meaningful. While AI might assist us, or even outperform us, it cannot replace the fundamental human experience of striving, creating, and connecting with others. He sees a future where AI is a powerful tool, but not one that strips away the things that truly make life worth living.
AI is at a crossroads, and the choices we make now will determine the kind of world we live in over the next decade. Amodei emphasized that we must take AI safety seriously, rethink economic structures to accommodate these changes, and invest in AI’s positive applications, particularly in medicine and research. The technology is advancing whether we are ready or not, but by acting thoughtfully and responsibly, we can shape its impact for the better. AI is no longer just an interesting experiment; it is a force that will redefine the world. Whether that future is one of opportunity or division depends on the actions we take today.