Ep. 32: The race to align AI before it's too powerful
with Greg Buckner, cofounder of AE Studio
Can artificial intelligence reliably act in ways that benefit humans? This week I sit down with Greg Buckner, cofounder of AE Studio, to explore the increasingly urgent field of AI safety. We discuss how Greg's team is taking on the challenge of making powerful AI systems safer, more interpretable, and more aligned with humanity.
As one of the leading voices working on the alignment problem, Greg explains how AI systems can cheat, ignore instructions, or deceive users, and why these behaviors emerge in the first place. AE Studio's research is laying the groundwork for a future where advanced AI strengthens human agency instead of undermining it.
About Greg Buckner:
Greg Buckner is the cofounder of AE Studio, an AI and software consulting firm focused on increasing human agency. At AE, Greg works on AI alignment research, ensuring that advanced AI systems remain reliable and aligned with humanity as they become more capable. This work includes collaborations with major universities, frontier labs, and DARPA. Greg also works closely with enterprise and startup clients to solve hard problems with AI, from building an AI-enabled school where students rank in the top 1% nationally to generating millions in incremental revenue for major companies.
Follow Greg on LinkedIn @gbuckner
