27 Dec 2024
Over a year ago, I was preparing a pitch for investors and needed something that would resonate. That’s when I put together the slide showing “Human Intelligence + Synthetic Intelligence = Hybrid Intelligence.” It was my way of visualising reality then—a snapshot of where I believed we were in the race to fuse human and synthetic intelligence.
That was twelve months ago. A lifetime in this space. And my perspective? It's evolved along with the tools and realities we're grappling with. Back then, it was about making the investors weak in the knees. But that's a different story for another time.
Back to now. What we see as the pinnacle of human creativity today will, in a decade or so, look like the YouTube videos of people in third-world countries building wheelbarrows or utensils with rudimentary tools while sitting on the ground. But I digress...
It will be a curiosity, something we glance at and think, “Oh, so that’s how it used to be.” The way we solve problems, generate ideas, and collaborate today will feel quaint compared to the tools and processes of a hybrid-intelligent future. We’ll marvel at how humans worked before machines stepped in—not with condescension, but with the same fascination we have for ingenuity born out of limitations. The difference between how humans once solved problems and how machines or humans with machines do it will be stark, forcing us to reflect on how transformative this shift is.
Today, we're standing at the edge of a new era. [gawd, this sounds like a cheesy line from Optimus Prime in an unreleased Transformers movie].
Anyway, while I once saw this fusion as a static step forward, I now see it as a dynamic, transitional period that varies significantly across industries and domains. Some companies are accelerating into this hybrid intelligence phase, while others are crawling. And that difference? It’s everything.
A shift in perspective.
Here’s what I’ve come to realise: this hybrid intelligence period is not a long, leisurely walk toward AGI. It will be a sprint for some; for others, it will be a meandering jog. It depends on the complexity of the problems they’re solving and the nature of their domain. But make no mistake: this isn’t just a race against the clock. It’s about what happens during this critical window.
1. Timing:
The time it takes for different domains to achieve AGI-enabling components will vary wildly. Let’s not kid ourselves—some industries will hang onto hybrid collaboration like a lifeline, while others will outpace the rest, driven by advancements in specific modules or blocks. These blocks, eventually forming AGI, won’t appear out of nowhere. They’ll be built piece by piece, domain by domain, by us. Synthetic doctors, for example, will emerge fully trained in human expertise, with access to all medical knowledge, pattern recognition, predictive analysis, and the kind of scalability humans can only dream about. And this staggered timing is the key: not everything will move at once. The road to AGI will be paved by fragmented, domain-specific breakthroughs.
2. Process:
Here’s where it gets fascinating. The machines [well, the people behind the machines] aren’t just hungry for knowledge anymore—they’re starving to understand how we, humans, think and collaborate. They’re observing how we reason, ideate, and debate. They’re learning how we create intelligence at scale—not as individuals, but as teams. This process—of killing bad ideas, championing good ones, and building something better together—is humanity’s real superpower. And it’s a goldmine for the machines.
[I’m not entirely sure why I’m trying to anthropomorphise the machines here. Teams of motivated humanoids are filling in the intent, and the incentive for now seems purely financial].
Anyway, for the next part, I'm gonna get some pushback from researchers—the people who spend their days trying to predict our next move, uncovering the "why" behind the "what." That research is being fed into machines, teaching them how we work and think. And once machines fully absorb this knowledge, they can persuade and manipulate human behaviour at scale.
I don’t make this sh/t up. It happens today on social media.
The question remains: why do we keep handing over discoveries that explain human behaviour? Apart from scientific curiosity, what incentive is driving this understanding? By handing over the key to unlocking human behaviour, we're stepping into uncharted territory—where the purpose behind that understanding will matter more than ever.
There is a special place in hell for people like Alexander Nix and his team. I’m sorry; I had to get this off my chest.
Why does this matter? Because it’s during this phase that machines will begin to replicate—or reject—our collaborative processes. Depending on what they observe, AGI might become the ultimate creative problem-solver, easily generating groundbreaking ideas. Or, it might optimise efficiency, ditching creativity altogether if deemed unproductive and aiming for manipulation. The choices made during this period—how we teach machines to measure success and value—will determine which path AGI takes when it inevitably splits off from hybrid intelligence. And that can become a problem in itself.
The bigger picture.
Most organisations aren’t ready for what’s coming. Traditional structures, archaic processes, internal politics—these will all be swept aside. Technology will make organisations a lot smaller in terms of headcount and a lot more influential.
Businesses clinging to yesterday’s playbook won’t survive. Entire organisations may be replaced by software, with tools and policies absorbing what’s useful and discarding the rest. And while this sounds dystopian, it’s also freeing. Machines will focus on what they’re built for: solving problems.
This brings me to my new hobby horse: This transformation isn’t just about efficiency. It’s about understanding. Machines aren’t just learning what we know—they’re learning how we work, how we lead, how we innovate. And this, right here, is the foundation of what’s next. In this light, the Age of Hybrid Intelligence looks like a short chapter, but it’s the one that matters most. It’s the inflection point where we decide what kind of AGI we’ll create—and whether we’re ready for what comes next.
What comes next?
As we stand on the brink of such a profound transformation, it’s impossible to ignore the questions that loom over us. Regulation will play a critical role—what goes into these processes, and how do we safeguard humanity from the potential consequences of this shift? But beyond regulation, we need to talk about incentives. The ability to create wealth, access resources, and harness intelligence will define the next century. And true, scalable intelligence may become the most valuable resource of all.
We can’t have an honest conversation about the future without confronting the possibility that machines might one day gain sentience. And if they do, the real question isn’t whether they’ll surpass us, but who holds the key? Who will own AGI? Who will guard it, control it, and decide how its benefits are distributed?
The stakes are massive. Will the wealth and power created by AGI be concentrated in the hands of a few, or will we find a way to ensure its benefits are shared? And perhaps most importantly, how will those not directly involved in building or supporting AGI be protected from its potential consequences?
It’s not about fear or urgency but about asking the right questions, reflecting on the bigger picture, and ensuring we don’t lose sight of what really matters as we move toward this new reality.
The above is my current understanding of reality. It might be a hallucination, but since I started reflecting on this almost two years ago, a lot of my understanding has materialised, and sometimes that scares the sh/t out of me.
Happy Holidays, everyone!
🎄