The Global AI Rush and the Neglect of Safety
A few months ago, I joined the Canadian government's artificial intelligence strategy task force. Of its thirty members, I was one of only four focused on safety. The rest concentrated almost exclusively on promoting growth, mirroring a worldwide trend in which nations race to advance AI development while treating regulation as a potential hindrance.
A Rapid Shift in Priorities
It is hard to overstate how quickly priorities have shifted. Only a few years ago, prominent figures like Elon Musk were calling for an industry-wide pause in AI development. At the same time, the Biden administration was crafting an AI Bill of Rights, a framework widely regarded as one of the most thorough and thoughtful approaches to AI regulation ever proposed.
Insights from a Leading Expert
The architect of that initiative was Dr. Alondra Nelson. She now directs the Science, Technology, and Social Values Lab at the Institute for Advanced Study and recently served on Zohran Mamdani's mayoral transition team in New York. I invited her to discuss a pressing question: how do we keep a technology safe when regulatory interest is minimal, and what happens if we fail to act?
Key References:
- Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, published by the White House Office of Science and Technology Policy
- The Mirage of AI Deregulation, an article by Alondra Nelson in Science
- International AI Safety Report 2026, authored by Yoshua Bengio and colleagues
