Aug. 3, 2025

Privacy is becoming a main character

This week on The Intersect, I talked to Udbhav Tiwari, Signal’s VP of Strategy and Global Affairs -- someone in tech who’s deeply engaged with the question of what it means to build for privacy in an AI-powered, surveillance-heavy world. And someone whose app has been thrust into the cultural spotlight.

We started with a simple question:

Is my group chat safe?

And not in the theoretical way. In the actual way. Like -- should I move my chats off iMessage? Off WhatsApp?

From the @theintersectshow TikTok: Is end-to-end encryption enough to keep our messages safe? What if the real risk is in the metadata? This week’s episode of The Intersect unpacks how private encrypted messages really are, featuring Udbhav Tiwari, Signal's VP of Strategy and Global Affairs. Cory and Udbhav dive into digital safety and how Signal is doing things differently. 🎧 Available at the link in bio or wherever you get your podcasts.

The short answer is that most of the communication tools we use -- yes, even ones that say they're ‘encrypted’ -- are still collecting your metadata, your device info and your usage patterns. Encryption does not fully equal privacy. That’s what this conversation unpacks.
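
To make that concrete, here’s a minimal sketch of the distinction (a hypothetical message envelope, not any real app’s wire format, using the third-party cryptography package as a stand-in for end-to-end encryption): the message body is unreadable, but the envelope around it is not.

```python
import json
import time

from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Hypothetical envelope: field names are illustrative, not any real app's schema.
key = Fernet.generate_key()                      # in real E2EE, only the endpoints hold keys
ciphertext = Fernet(key).encrypt(b"meet at 7?")  # the server cannot read this part

envelope = {
    "from": "+15550100",                # metadata: who is talking...
    "to": "+15550199",                  # ...to whom...
    "sent_at": int(time.time()),        # ...and when
    "device": "iPhone15,2",             # device info
    "ciphertext": ciphertext.decode(),  # the only field encryption protects
}

# Everything except "ciphertext" is visible to whoever relays the message --
# that gap is the difference between "encrypted" and "private".
visible_to_server = {k: v for k, v in envelope.items() if k != "ciphertext"}
print(json.dumps(visible_to_server, indent=2))
```

Signal’s approach, as Udbhav describes it, is to shrink that visible envelope as close to nothing as possible.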

Udbhav explains that Signal exists to protect your information and your autonomy in messaging by collecting as little data as possible. For a while, the use cases felt more niche -- journalists discussing confidential information, for example. But now Signal is being used by the general population and is even appearing in pop culture -- from political and cultural plotlines to Drake lyrics to an on-screen turn as a main character (see Netflix's latest series, The Hunting Wives, which I did binge … )

I also asked Udbhav how he personally uses AI, where he sees the threats and what gives him hope. Full episode here: Is your group chat really private?


Some of what I’m reading this week:

Yes, some public ChatGPT queries are showing up in Google searches
ChatGPT’s shareable conversation links have a new side effect: visibility. If you’ve ever shared a chat that touched on something personal, there’s a chance it’s now indexed. Check your settings!
by Devin Coldewey (TechCrunch)

Vogue’s AI-generated model stirs new ethical backlash
Vogue has run an AI-generated model in its print pages for the first time ever, reigniting debates about beauty standards, racial bias and whether synthetic faces are replacing real ones.
by Laura Mullan (Technology Magazine)

Barbie is getting a chatbot brain
Toy giant Mattel is working with OpenAI to build generative AI into Barbie and other products. It’s not just about voice -- it’s about turning toys into personalities, with real-time, unscripted interactions.
(Reuters)

Australia’s OpenAI challenger is backed by a 28-year-old multimillionaire
Entrepreneur Ed Craven is building a sovereign foundation model from Down Under. He says it’s about independence from US tech giants -- another signal that countries outside the US are paving privacy pathways for their citizens.
by David Swan (Forbes Australia)

AI companies quietly drop the ‘not a doctor’ disclaimers

Companies like OpenAI and Google used to warn users not to rely on chatbots for medical advice. Now they’ve mostly stopped -- a shift driven by confidence in improved model accuracy, competitive pressure to legitimize health-related use cases and a broader move to position AI as a viable assistant across sensitive domains. The disclaimers are disappearing even on products already used by millions.
by Tate Ryan-Mosley (MIT Technology Review)

Amazon agrees to pay the New York Times $20M+ a year to train AI
The future of AI training may be licensed, not just scraped. Amazon just signed a major content deal with the Times, signaling that journalism has real market value in the model wars. Indie journos and publishers, take note.
by Alexandra Bruell (WSJ)