What if AI isn’t just here to assist—but to imagine, reason, and possibly even feel?
That’s the mind-bending journey Demis Hassabis, the chess prodigy turned neuroscientist and co-founder of Google DeepMind, takes us on in a recent 60 Minutes interview that’s less “tech update” and more “glimpse into the next chapter of human existence.”
At the heart of it: Artificial General Intelligence (AGI)—a kind of machine mind that’s not only faster than us but as flexible and intuitive as we are. Think of a system that could see the world through your eyes, hear what you hear, understand what you’re looking at—and then help you act on it, intelligently.
But here’s the catch: It’s already happening.
AI That Sees, Feels, and Thinks (Almost)
We’re introduced to “Astra,” a next-gen chatbot that doesn’t just answer questions from Wikipedia—it observes, interprets, and creates. Shown a painting for the first time, Astra not only names it but builds a backstory around its emotional tone. “Only the flow of ideas moving onward,” it says of a woman in solitude. As one interviewer admitted, “I wish I had written that.”
This isn’t scripted behavior; it’s emergent. Hassabis calls it a “training situation,” where the AI picks things up in ways that even its creators don’t fully anticipate. That’s both magical and a little terrifying.
And it’s not stopping at chat. DeepMind’s new Gemini system aims to act in the real world—book tickets, shop, assist—essentially live alongside us. Integrated into smart glasses, it can identify the building you’re looking at and tell you its history, environmental impact, and more. All this, whispered into your ear.
The Promise and the Panic
There’s no denying the upside. DeepMind’s protein-folding breakthrough, AlphaFold, solved in a single year what would’ve taken experimental science decades, and it may lead to new drugs, new cures, even the end of disease. “Why not?” Hassabis asks. “That might be within reach in a decade.”
But with power comes the possibility of misuse. Hassabis is clear-eyed about the two biggest risks: humans cutting safety corners in the AI arms race, and AI that, once autonomous, drifts out of alignment with human values. The danger, in other words, isn’t really the machines; it’s us.
Can we teach morality to a machine? “I think we can,” Hassabis says. “Like you’d teach a child—by showing, guiding, and setting boundaries.”
Q&A
Q1: What makes DeepMind’s approach to AGI different?
It’s not just about performance—it’s about behavior. DeepMind trains AI to reason, imagine, and interpret the world, pushing closer to AGI that can integrate into daily life, not just answer questions.
Q2: How close are we to AI with real emotions or self-awareness?
While today’s systems don’t appear truly self-aware, Hassabis believes they may develop a sense of self over time, especially if they learn to model “self and other” like humans do.
Drop your thoughts in the comments. And if this sparked your curiosity, sign up for our AI Newsletter to explore how the future of AI is being written right now.
Curious about how your business or idea could harness AGI responsibly? Click here to connect with our AI consulting team and bring some clarity to your roadmap.