AI Is Making It Easier to Harass Women Online

Being a woman online is increasingly dangerous. It means living with the constant possibility that a simple AI prompt can turn your personal image into something disturbing, offensive and humiliating.
In this episode, I'm joined by Kat Tenbarge, an award-winning journalist who has been covering online harassment of women since the early days of deepfakes. In the last several years, thanks to AI, Kat has witnessed a disturbing trend: deepfakes are becoming more pervasive, they are impacting a wide range of women and girls (not just celebrities), and platforms and police are ill-equipped to fight them.
But as much as AI is changing the scale and speed of sexual harassment online, this isn’t a story about being powerless. It’s a story about possibility. And as Kat shares, when women organize, when we demand accountability, we can change the culture, shape policies, and build a safer and more tolerant internet.
Topics Covered:
- What does sexual harassment look like in the age of artificial intelligence?
- How can we regulate the rapid creation of non-consensual, synthetic sexual content online?
- Will the Take It Down Act, signed by President Trump, actually protect women online?
- Should tech companies be held responsible for regulating the spread of deepfakes on their platforms?
About Kat Tenbarge:
Kat Tenbarge is an award-winning feminist journalist who writes the newsletter Spitfire News. Her work has been published in WIRED, NBC News, Business Insider, and more. She has reported on high-profile cases of gender-based violence against influencers and celebrities.
Follow Kat Tenbarge on Bluesky @kattenbarge.bsky.social and on Instagram @kattenbarge.
Cory Corrine:
Welcome to The Intersect. I'm Cory Corrine.
To be a woman and exist online right now means living with a constant possibility that your image can be taken, twisted, and used against you. That a picture of you, maybe even one made by an AI tool, can be spread widely, making you feel violated, exposed, and afraid.
This week, I'm joined by Kat Tenbarge, a journalist who's been covering online harassment of women since the early days of deepfakes in the 2010s. Her reporting is about what happens when tech is weaponized against women, and what it reveals about power, belief, and control in the digital age. What we're seeing with non-consensual AI content isn't trolling. It isn't fringe. It's actually widespread targeted harassment, and it's getting easier to do. With AI, anyone can make synthetic sexual content, and the scale of the issue is hard to comprehend, as is the emotional toll on women. But what stuck with me most in this conversation wasn't just the horror, it was the hope. Kat reminds us that culture moves policy, and when women and girls band together to demand accountability, real change happens.
Let's get into it.
Kat Tenbarge, welcome to The Intersect.
Kat Tenbarge:
Thank you so much for having me.
Cory Corrine:
I am so glad you're here. We became fast friends, we're friends now.
Kat Tenbarge:
Mm-hmm.
Cory Corrine:
I developed a parasocial relationship with you very instantly, when I saw you on a podcast with Taylor Lorenz-
Kat Tenbarge:
Yes. Love Taylor Lorenz.
Cory Corrine:
... and you were talking about ChatGPT, and cheating, and it just resonated. And so, I'm relatively new to you. I've been following your work on Spitfire News, but for those that do not know Kat Tenbarge, tell us about you, and tell us about Spitfire and what you do.
Kat Tenbarge:
Well, first of all, thank you so much. So Spitfire News is my newsletter. I went independent earlier this year, after working at mainstream publications. I worked at Business Insider for a while, and then NBC News. My niche has been how women experience the internet, and how you see gender-based violence and misogyny play out in this new territory.
Cory Corrine:
Yeah, this is why I emailed you, because I was reading with interest. And I saw this piece that you wrote about Grok, which is X's AI, and this very disturbing story about BrookeAB, who is a streamer, a female gamer, who famously has tons and tons, millions of fans? She's huge, maybe one of the biggest gamers, and... Well, you tell us the story, but it was very disturbing, dystopic. Dystopic? Is that a word? Dystopian. But it was real. So for the listeners that have not read your story, tell us what happened.
Kat Tenbarge:
So I opened the X app one day, and at the very top of my feed, I saw BrookeAB had posted. And her tweet was like, "Can I do anything legal about what's happening to me on this platform right now?" And what was happening to her is, she would post a selfie and someone would reply to it, and they would tag Grok. So like you said, Grok is X's AI chatbot, built by xAI, and users can tag it and ask it questions, or they can ask it to explain a tweet. But one thing they can also do with it is, they can get it to edit photos that people have posted on the platform.
So what they had done to this selfie that Brooke posted was, they were like, "Can you make it look like there's glue all over her face?" And the effect of this is that it becomes what appears to be a very sexually suggestive photo. And so Brooke tweeted asking, "Do I have any recourse here?" And she went so far as to say, "I know this is the internet, and that things like this happen to women on the internet, but it's concerning that you have this official X technology being used to create this sexually suggestive imagery of me."
And what I found was that it wasn't just Brooke who this was happening to. This had become a trend on the platform, where all kinds of women were having people respond to their photos and tag Grok into it, and make these images.
Cory Corrine:
Do you think that these women were even aware that this had been done to them? I mean, was X aware that this was happening? What was their response? BrookeAB is a very famous... She has a big following, I would imagine, on X.
Kat Tenbarge:
Yes. So this practice has kind of existed for a while, but specifically in the context of X, it had gone viral that week. You saw multiple really viral examples of this happening, and the mechanism was that people were responding to these women directly, so it was sort of like they were forcing them to see it. And I reached out to X for comment; they didn't get back to me. But later in the week, as this kept going super, super viral, with tens of millions of views on these tweets, X made a tweak where if you asked Grok to put glue on someone's face, it would say, "I'm sorry. I can't help you with that."
So they clearly saw what was happening, they didn't anticipate it, they didn't put any protections in place to stop this from happening. But once it started to gain a lot of traction, they very quietly adjusted the settings for the AI so that it couldn't happen anymore.
Cory Corrine:
They're just blocking keywords at this point, which is very hand-to-hand combat. But you wrote in your piece that the scale of synthetic... I'm going to go slow. Synthetic, non-consensual sexual content is moving faster than platforms or laws can respond. Can you break down, what is synthetic content?
Kat Tenbarge:
Yes. So there's a bunch of different terms that people use for this type of material. One of the other really common words, a sort of catch-all term, is deepfakes. And deepfakes don't have to be either non-consensual or sexually explicit. But what I've found in my reporting is that the sort of technology we now think of as the very common generative AI platforms, like OpenAI, Midjourney, Grok, and Meta, they all have these chatbots and these image generators that they've created, that millions of people are using.
But if you back up a few years, this practice of non-consensual, sexually explicit deepfakes originated in more shadowy corners of the internet. Around 2018, you started seeing it on platforms like Reddit. From what I first saw when I started reporting on this, a lot of the time these would be sexually explicit videos pulled from pornographic websites. And they would take the female performer; usually, almost always, it was the woman who they were targeting.
Cory Corrine:
Sure.
Kat Tenbarge:
And they would take her face, and they would swap it with a celebrity woman's face, or an influencer.
Cory Corrine:
Okay. The old, "We're going to swap the head out with a different body."
Kat Tenbarge:
Yes.
Cory Corrine:
Okay.
Kat Tenbarge:
And even before this was done with artificial intelligence technology, people have been doing variations of this for literally decades. Back in the '90s, people would cut out pictures of female celebrities' faces, literally glue them onto pages of magazines like Playboy, and mail them to those celebrities as a form of harassment. So this starts in a pre-digital technology era, but over the past, let's see, five, six, seven, eight years, it's become a mechanism that is really popular, it can be really profitable, and it's become increasingly easy to do.
So go back to 2018 again, when this was happening on Reddit. The people who were making this type of content were often targeting big female celebrities; Scarlett Johansson was a really early person to speak out about this.
Cory Corrine:
And she's been very outspoken about it.
Kat Tenbarge:
Yes.
Cory Corrine:
She made the film Her. It's just an interesting journey with it.
Kat Tenbarge:
Yes.
Cory Corrine:
But, yeah. Okay.
Kat Tenbarge:
Yes. And at the time, when you go back to 2018, at that point, Scarlett Johansson was the highest paid actress in the world, from her Marvel movies. And that is no coincidence, because a lot of times the women who they're targeting are women who are succeeding in what are thought of as male-dominated spaces. So women who appear in superhero films are heavily targeted. Women like BrookeAB, who are in the gaming space, are heavily targeted.
And it evolved from these sort of crude, sexually explicit videos posted onto niche websites and forums into something that you will encounter on mainstream social media platforms when you just open the app. And the way that evolution transpired is that as generative AI became more sophisticated, it became easier for people to make this type of content. It went from, you need to have sophisticated computer skills and be able to do all of this yourself, to now you have all of these sort of prompt generators.
And so, we've seen this explosion since 2023 in particular; the beginning of 2023 is when I started reporting on this. And I started reporting on it because a different woman in the Twitch space, QTCinderella, was targeted with this. And it kind of happened like it was an accident, because there was this male streamer who was one of her friends. And he was streaming live, and he had his computer screen visible, and people could see he had a tab open. He wasn't advertising it, he didn't mean to show it, but he was looking up this material of QTCinderella. And people saw that he had done that, and it turned into this massive controversy. And he apologized, he was very apologetic, but the damage was instantly done. Because from that moment, the Google search interest in this type of material skyrocketed, and these niche websites exponentially multiplied in traffic.
So all of a sudden you went from having 10,000 people watching this video of QTCinderella or Scarlett Johansson to, all of a sudden, millions of people watching this content. And you saw the format shift as well, because also that year, these apps started coming out onto the market where the apps were like, "You can undress anyone." You upload a photo of a woman or a girl, and the app will undress it, and it will create this sexually explicit image. And some of those images are very crude, they're very obviously fake. But as we've seen over the past few years, AI has developed so rapidly that a lot of times people don't know that they're fake. They look very realistic.
And so, this is something that originally targeted mainly high profile women, especially high profile women on the internet. But what we've seen is now there's been this massive trickle-down effect, and it's affected girls in elementary schools around the world, it's struck communities all over the United States. There have been investigations, and school suspensions, and it's become a bigger and bigger topic that's affecting more and more women and girls.
Cory Corrine:
You're like, "I've been reporting on this from the beginning, 2023." This is just here. I mean, 2018 is really what, the sort of beginning of deepfakes in these different corners of the internet. And now, you've talked to a lot of women that are victims of this. We know you're not a psychologist or what... But maybe you are. But what is their experience, just in your reporting, I mean, what are you getting from that? What's it doing to women?
Kat Tenbarge:
It's incredibly destabilizing for a lot of women and girls who are victimized by this, because when you see this type of imagery, your brain doesn't know that it's not real. You're processing that imagery as if it is real. And so there is a traumatic effect from seeing it, from having it disseminated. It can be traumatic, it can feel abusive, and a lot of women have been victimized and suffered emotional consequences of that, mental consequences of that.
The other toll that it takes is there's reputational harm associated with this. And you see this at the celebrity level, but you also see this at the non-public level. Because a lot of times when perpetrators make this content, their goal is not just like, arousal. They're not just making it for their own individual use. I'm sure a lot of people are, but what we see in the public space is that they're using it to humiliate women at scale. They're sending this material to their boss, they're sending it to their family, they're posting it on social media. They're sending it to these women directly. So this is a form of sexual harassment, in addition to the material itself being non-consensual, and triggering, and traumatizing. And that can do really significant damage to your career, to your sense of self-worth, to how other people think of you.
One thing that I think about a lot is, even though the material's not real, if your boss sees it, it doesn't matter.
Cory Corrine:
It doesn't matter.
Kat Tenbarge:
The image is now in their head. They now, even subconsciously, might think of you differently. They might devalue you, because they're associating you with this sexually explicit nude content.
Cory Corrine:
Yeah, just you talking about it that way, and how women can be ostracized for something that isn't real, but that isn't the point. That actually isn't the point at all.
Let's talk a little bit about the legal side of things. Platform accountability. BrookeAB's original question, "Is there something I can do?" I mean, just a cry for help publicly, and you immediately swung into action because you're like, "This is my wheelhouse."
Kat Tenbarge:
Yeah.
Cory Corrine:
But does she have any recourse?
Kat Tenbarge:
So, in theory, yes. First of all, this type of material is against X's platform rules of conduct, but they didn't spring to take it down, even when it went viral. And we see this under Elon Musk's leadership: moderation is extremely slow to happen, if it happens at all. For as long as Elon Musk has been in charge of Twitter, I've sent them so many requests for comment about this type of material, where I'll send them a list of 50 examples of these tweets. And sometimes they do take the offending material down, but it gets repopulated almost instantly, and they are not devoting the amount of people or the amount of resources that it would take to actually combat this issue. So they're not following through on the promise that they make to users when they create accounts on this platform. So in theory, that should be step one, but it's not happening.
Then you go to sort of exterior measures of recourse. You look at the legal system, you look at the legal territory and the questions around this. And as of very recently, there was federal legislation passed. It was signed by Donald Trump, just a few months ago, called the Take It Down Act. And this law was intended to provide recourse for victims of non-consensual deepfakes. But in practice, again, you have all of these prohibitive barriers that make it almost impossible for women to follow through and take action.
And I spoke to one of my favorite experts on this topic, a woman named Mary Anne Franks, who is a legal expert. She's a professor, a scholar. And she, back in the 2010s, wrote a lot of the legal basis for what we called back then, revenge porn laws.
Cory Corrine:
She was the revenge porn, the mother of revenge porn accountability, or something.
Kat Tenbarge:
Yes. So she is extremely well-spoken on this issue; this is her wheelhouse. And what she told me, in regards to the options in front of BrookeAB, is that there are civil torts she could use, around reputational damages, infliction of emotional distress, her copyrighted image. But all of those legal methods require that you know who made this image, because you would have to sue the person who inputted this prompt into Grok. And that's much easier said than done, trying to unmask an anonymous person on the internet. So that alone is a really prohibitive barrier.
Another thing we talked about is, could she sue Grok? Is Grok considered a person under the law, since Grok is the one creating and disseminating the image? That's a really interesting legal question, and we don't necessarily have the precedent for it yet. But even if Brooke wanted to pursue that course of action, most victims of deepfakes don't. The reason is that, a few years ago, I spoke to a different influencer who's been a really high-profile advocate against deepfakes. And what she told me is that her lawyer advised her not to do this, not to try to sue the people making deepfakes of her, not to try to take legal action. And the reason he told her not to do it is because it would be so expensive, it would be so time-consuming, it might not work, and it might have the unintended effect of actually putting a spotlight on the material, making sure that more people see it and you get more of the consequences.
Cory Corrine:
We are not believed, as women, you sort of go back to that root of, "But they don't believe me." And now, with this scale of evidence that is fake-
Kat Tenbarge:
Yep.
Cory Corrine:
... but feels very real.
Kat Tenbarge:
Yes.
Cory Corrine:
The idea that you could somehow combat that, you're sort of resigned. So allegedly, Grok's terms of service, or X's terms of service, say, "We don't allow for this," but for all of the reasons, it's there.
Kat Tenbarge:
Yeah.
Cory Corrine:
It's there. I mean, ultimately, the incentives are not aligned for the platforms to change. The platforms, ultimately, they have all of the data. They're distributing this, they have the audience capture. So the regulation still has to come from the platforms themselves, right? And their incentive, well, we live in, this is capitalism.
Kat Tenbarge:
Yeah.
Cory Corrine:
Right? It's driven by money. So never mind the fact that it's a very complicated technology, it's moving so quickly, regulation is so far behind. But, okay. It seems like an executive order would be the right way to start, because you've got to top-down this one, because this is so problematic and so pervasive. And we don't know how much is out there, we don't know how to stop it. What do you actually think should be done, or could be done, to help in this situation?
Kat Tenbarge:
I look at this in two different ways. I look at the political avenues that we have available, and then I think about the cultural situation. So when you look at the political avenues, we have this legislation that was signed into law by President Trump. You look at sort of the motivations behind the Trump administration's embrace of this issue, and you look at what the law actually does. So when Trump signed this into law, there was a big press briefing. He was doing his classic sitting at the desk, on the White House lawn, lots of cameras. And he made a comment about how he's kind of the victim of the most defamatory information online, and he's thinking about how something like this could actually help him.
Also, Linda Yaccarino, the CEO of X, was in the audience. And he actually said to her, he made a comment like, "Linda's doing a great job." That doesn't give you a good sign of who's going to be targeted with this legislation, when those are the circumstances around the actual signing of the law. And in addition to that, the Take It Down Act makes this a federal criminal offense, so you can pursue perpetrators under this law.
What it also does is set a deadline within the next year, in 2026, by which the major platforms are all supposed to create and implement a take-down process. And what that would look like is, if you were BrookeAB or anyone else who was affected by this, you could copy links to the posts and you could submit them to X's take-down system, and within 48 hours, they have to take action on that. And the intention behind this is, it takes platforms so long to remove this offending material or to do anything about it, that we need a stricter kind of regulation.
It sounds good, in theory. The problem and the concerns that people have raised are that anyone could link to any content, and claim that it is a non-consensual deepfake. And if the platform only has 48 hours to take it down, we've seen this in the past, platforms will act with an overabundance of caution, and they will just go to lengths beyond what the law actually requires them to do. We've seen this with previous tech policy. We've seen the overreach on mainstream social media platforms, where they're banning tons of content that doesn't actually fall under what's being protected or what's being penalized, because they're afraid of the legal liability. And so the fear is that proponents of the Take It Down Act, who are Republican, who are conservative, will use this as a tool of censorship. And they're not going to use it with the intent of helping victims, but rather to scrub material that they don't want online.
So now we look at sort of the cultural avenues that we have. And this is trickier, but to me, I also have a sense of optimism here. Because when you see-
Cory Corrine:
I'm listening, I'm listening.
Kat Tenbarge:
I know, finally-
Cory Corrine:
Yeah, I know.
Kat Tenbarge:
... a sense of optimism. When you see these things go viral, there's a big reaction, especially from fellow women, where women are saying, "This is unacceptable. This is a violation." Women are seeing the long-term consequences and the scale of this problem. And you see that at such a huge level. Back in January 2024, about a year after I first started reporting on this, there was another Twitter deepfake incident. And what happened in that case was people were generating these AI images of Taylor Swift in sexually suggestive positions, and these were going really, really, really viral.
Cory Corrine:
This was like the OG deepfake case, Taylor Swift.
Kat Tenbarge:
Yes. And Twitter, or X, wasn't doing anything at first. But fans of Taylor Swift were rallying together and reporting the images en masse, triggering the take-down from X's system.
Cory Corrine:
Oh, they were reporting it.
Kat Tenbarge:
They were reporting it.
Cory Corrine:
Oh, wow.
Kat Tenbarge:
You had women and girls kind of banding together to be like, "This is unacceptable behavior. We're not going to tolerate this." They were reporting and flagging the material, and they were also just... not begging, but rather demanding, that this kind of behavior stop. And after their efforts, you saw X finally step in, and temporarily, they made it so you couldn't even search Taylor Swift. Because of the efforts of these Swifties, this blew up into an international news topic. You saw people in Congress, who had never spoken about this issue before, being like, "This has to stop. This is really bad."
And so, that gives me a lot of hope. Because not everyone, in fact, no one, has the size of the fandom that Taylor Swift has. However, with the BrookeAB incident, I saw so many women, especially women, coming out in support of her. And again saying, "This content is unacceptable. This is not okay." And that is such a refreshing shift in opinion from what women typically hear in these situations. Because a lot of times when women are like, "This is harmful to me. This is traumatizing to me. This takes away my agency," you hear people say, "Ugh, you're overreacting. It's the internet." Even Brooke said it herself, she was prepared for that response, it's been conditioned into her. She's like, "I know this is the internet, and people can do whatever they want."
But people, led by women and girls, are saying, "Actually, no. We're going to redefine the social norms around this behavior. We're going to stigmatize this behavior. We're going to make it so that the people creating this material face a social cost." And I actually think that that is probably our most effective way forward. Because a lot of times, the way that people behave and the way that they act is governed by the laws that we live under, but it's informed by the culture we live in. And if as a culture we agree that this is unacceptable, and that there will be consequences, that gives me a sense of optimism that we can actually do something about this.
Cory Corrine:
Gives me real optimism, too. I mean, my answer to everything is like, "I need to talk to my girlfriends." That's literally my answer to everything. I hadn't even thought about the power of women online in combating the... That's just really-
Kat Tenbarge:
Yeah.
Cory Corrine:
... I kind of love that, it's a good feeling.
One question on staying safe, then I want to go into this quick round. I've heard you say "women and girls, women and girls," so specifically on girls: how do we talk to young girls about this kind of thing?
Kat Tenbarge:
Yeah, so one thing that's been really interesting to see play out is, in 2023, a lot of major tech platforms were actually running advertisements for these undress apps. Boys in schools were downloading them, and they were doing this to their female classmates. And we started to see cases pop up. We saw cases on the West Coast, on the East Coast, in the middle of the country. We saw them in Canada, we saw them in Spain. We saw them all over the place. And the way that the school districts and the communities would react to this stuff played an important role in how those girls who were targeted felt afterwards.
And I also, I want to kind of interrupt myself and just make a note. Which is that, overwhelmingly, the target and the victim of this stuff is women and girls. Like, more than nine out of 10 times. But it does affect boys, too. There have been cases, high-profile cases, of this happening to boys and men, but it is an overwhelmingly gendered pattern. And with these school cases, that was overwhelmingly the dynamic: young male perpetrators and young girls who were victims. And unfortunately, this is where we need to change. Because there was one school district, the Beverly Hills school district, that I think actually did their response right. Because the superintendent of that school district, when this became a national news issue, he got out in front of it and he went to news cameras, he went to NBC. And was like, "This is unacceptable. We are going to deal with this as a community, because this is unacceptable behavior."
But what we saw in other cases around the country is that a lot of schools tried to cover this up and push it under the rug. And the boy perpetrators would get away with a slap on the wrist, and the girl victims would oftentimes have their agency taken away from them yet again, because they didn't get justice in these situations. And so the message that was being sent to them was like, "There's no consequence for your male classmates to do this to you. And in fact, we don't want you to even talk about it, because we don't want that to reflect poorly on us."
Cory Corrine:
Shame.
Kat Tenbarge:
Yes.
Cory Corrine:
Shame, shame.
Kat Tenbarge:
The shame. And so, there was a young woman in New Jersey, named Francesca Mani, who became a really vocal advocate. And I think Time put her on a list of most inspirational people this year, because she went to her representative, and she went to Congress. And she went to the news and was like, "I was victimized by this in my school. My school did not take care of the situation appropriately." The girls were never even told who the perpetrators were. So she was walking around at school, knowing that there were boys potentially in her classroom who had done this to her, and she wasn't even allowed to know who.
And so you see how that creates a hostile academic environment, and you see how the consequences spiral outward: you're not just being humiliated and shamed and sexually exploited, but now your academic performance might suffer because you live under this stress and this fear. And so you see how these consequences can ripple outward and really take on the form of discrimination against women and girls. But Francesca was like, "No, I'm not standing for this. I'm going to make a huge stand about this, and I'm going to demand justice." And so you saw state laws actually start to be created and passed. And again, as we talked about with the federal legislation, the laws aren't enough, but it is a positive step in the right direction in a lot of cases.
But I want community leaders, and I want people who are in positions of power over young people to be like, "Okay, the right thing to do is not to cover this up, and push it under the rug, and punish the people who are victims. The right thing to do is to create systems of accountability so that this doesn't keep happening."
Cory Corrine:
Okay. Quick round, for you. If you had five minutes with a tech CEO, what would you say?
Kat Tenbarge:
This is such a good question. I would say that women matter. And for so long, tech companies have been able to get away with curating these misogynistic environments that put women at a disadvantage. But increasingly, these social media platforms are our public lives. This is how we work, this is how we socialize, this is how we connect with each other and speak our minds. And you need to protect women and girls. For us to live in a fully functioning democratic society, it's ultimately in everyone's best interest that we curate environments that are safe for people to be free and to be themselves. And I think that tech CEOs have been able to get away with this because they haven't been pressured. Social pressure is such an important tool for change, but so frequently, you can kind of insulate yourself from it when you're a powerful person. And if you don't do something about it, you bear culpability in how this is affecting women and girls.
Cory Corrine:
Well said. Well said. One policy change that could have immediate impact?
Kat Tenbarge:
I think it would be helpful for law enforcement to receive training about this. Because a lot of times, when women are harassed in any way online, stalking, revenge material, deepfakes, they go to their local police department, they file a complaint, and they're met with absolutely nothing. Because the police officers who take that report don't understand the technology, they don't know that it's illegal, and so they just dismiss them. And a lot of times the experience that women have when they come forward and report these things actually reinforces the trauma and the betrayal that they're experiencing. And it adds to the harm they've experienced, when these are supposed to be authority figures who are upholding justice for all. So I think that at the very least, there needs to be more awareness and more accountability for these first responders who are not doing their jobs.
Cory Corrine:
Thank you for this. For the time. Our listeners, I know that they've learned a lot, and we'd love to have you back as you keep reporting on what it means to be a woman online, and stay safe. Okay. Thank you, Kat.
Kat Tenbarge:
Thank you so much.
Cory Corrine:
That was Kat Tenbarge. I appreciate how she spoke with clarity, urgency and care. What we're up against is real. The tools of harassment are getting more sophisticated. The protections are not. But we're not powerless. As Kat said, the most effective resistance she's seen has not come from top-down orders, but from women themselves. Organizing, responding, refusing to let this be accepted. We still have the power to shift the culture, to stand up for one another, and to demand more from the people who profit off these platforms. So if this episode made you feel something, I hope it also reminded you that you're not alone. And that solidarity, even online, still means something.
Thanks for listening, and if you enjoyed this episode, please give us a follow. We'll see you next time.