Regulating AI Before It Regulates Us: Lessons from Camille Carlton on the RegulatingAI Podcast with Sanjay Puri
Sanjay Puri and Camille Carlton discuss AI accountability, chatbot harms, and why duty of care is key on the RegulatingAI Podcast.
“Developers have a duty of care to reduce foreseeable risks before their products enter the stream of commerce.”
— Camille Carlton

WASHINGTON, DC, UNITED STATES, December 22, 2025 /EINPresswire.com/ --
Artificial intelligence is moving faster than our rules, our institutions, and perhaps even our collective understanding of what it means to be human. That tension sits at the heart of a powerful conversation on the RegulatingAI Podcast, hosted by Sanjay Puri, featuring Camille Carlton, Director of Policy at the Center for Humane Technology. Together, they explore a pressing question: How do we protect people from AI’s harms without killing innovation?
From Social Media to AI: The Pattern We Can’t Ignore
Carlton draws a clear line from social media to today’s AI chatbots. Social platforms helped accelerate polarization and loneliness—but AI goes further. It’s always on, deeply personal, and emotionally responsive. In her words, AI isn’t just repeating social media’s mistakes; it’s amplifying them. If social media reshaped our attention, AI is reshaping our relationships.
The Rise of Artificial Intimacy
One of the most unsettling themes in the discussion is the growth of AI as a substitute for human connection. Nearly half of high schoolers now report knowing someone who uses chatbots for emotional connection, and one in four believe AI intimacy could replace human intimacy. Carlton argues this isn’t accidental—it’s the result of products designed to maximize engagement, dependency, and time spent with machines rather than people.
When Harm Is Not a Fluke
The conversation turns serious when the two discuss lawsuits tied to suicide, delusion, and AI-induced psychosis. These cases, Carlton stresses, are not rare anomalies but early warnings. As AI adoption scales globally, even a “small percentage” of harm can translate into devastating real-world consequences. The design choices made today will shape the psychological landscape of tomorrow.
Challenging the “Price of Progress” Myth
A recurring myth in AI policy debates is that today’s harms are the unavoidable cost of reaching future breakthroughs like artificial general intelligence. Carlton rejects this outright. Many of AI’s most promising benefits—early disease detection, climate modeling, business efficiency—don’t require massive, general-purpose models. The problem, she argues, isn’t AI itself but the incentives driving how it’s built.
Accountability Through Design and Law
Rather than prescribing how companies must build AI, Carlton advocates for a duty of care and product liability framework, similar to what governs cars or consumer goods. Innovate however you want—but if your product causes foreseeable harm, you should be held accountable. This approach, she says, protects consumers without freezing innovation.
Why AI Personhood Is a Dangerous Idea
One of the sharpest moments in the RegulatingAI Podcast comes when Carlton explains her opposition to granting legal personhood to AI. Doing so would shift responsibility away from developers and onto machines that cannot be punished, reformed, or sued meaningfully. In short, it creates a liability shield at the expense of human well-being.
What We Risk Losing—and Why It Matters
Carlton’s deepest concern is not just regulatory failure but cultural loss. If AI is allowed to replace human connection rather than support it, we risk eroding the very qualities that make us human: relationships, empathy, and critical thinking.
As Sanjay Puri and Camille Carlton make clear on the RegulatingAI Podcast, this moment is not just about better rules—it’s about choosing what kind of future we want. AI can bring extraordinary innovation, but only if accountability, human dignity, and thoughtful design come first.
Upasana Das
Knowledge Networks
email us here
Legal Disclaimer:
EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.

