AI Is Making It Easier to Harass Women Online

Being a woman online is increasingly dangerous. It means living with the constant possibility that a simple AI prompt can turn your personal image into something disturbing, offensive, and humiliating.
In this episode, I’m joined by Kat Tenbarge, an award-winning journalist who has been covering online harassment of women since the early days of deepfakes. In recent years, thanks to AI, Kat has witnessed a disturbing trend: deepfakes are becoming more pervasive, they are impacting a wide range of women and girls (not just celebrities), and platforms and police are ill-equipped to fight them.
But as much as AI is changing the scale and speed of sexual harassment online, this isn’t a story about being powerless. It’s a story about possibility. And as Kat shares, when women organize, when we demand accountability, we can change the culture, shape policies, and build a safer and more tolerant internet.
Topics Covered:
- What does sexual harassment look like in the age of artificial intelligence?
- How can we regulate the rapid creation of non-consensual, synthetic sexual content online?
- Will President Trump’s ‘Take It Down Act’ actually protect women online?
- Should tech companies be held responsible for regulating the spread of deepfakes on their platforms?
About Kat Tenbarge:
Kat Tenbarge is an award-winning feminist journalist who writes the newsletter Spitfire News. Her work has been published in WIRED, NBC News, Business Insider, and more. She has reported on high-profile cases of gender-based violence against influencers and celebrities.
Follow Kat Tenbarge on Bluesky @kattenbarge.bsky.social and on Instagram @kattenbarge.
Follow The Intersect:
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.