Experiments

An afternoon with NotebookLM.

For hours, I’ve looked at ways to write this without sounding insane.

Why am I writing about this topic this way? Almost two decades after most of us were enrolled in the experiment called social media, we are seeing its effects on our lives: how we think and how we relate to one another have been seriously altered.

And here we are, already part of a new experiment, one in which we are invited to merge our most precious asset, our minds, with software, with machines. The offer is exciting and slightly disturbing at the same time.

Here we go.

We humans love to overestimate ourselves. Take, for example, our vision; it’s embarrassingly narrow. Our eyes are calibrated to perceive just a tiny sliver of the electromagnetic spectrum—about 400 to 700 nanometres. Above or below this band lies a universe of ultraviolet, infrared, gamma rays, and X-rays.

Now, take that same blind spot and apply it to intelligence.

We’ve spent centuries measuring intelligence with clunky tools like IQ tests and standardised exams as if human smarts could be boiled down to a single number. Spoiler alert: they can’t. Intelligence is complicated, multifaceted, and vast. And my thesis here… it’s not just human anymore.

AI is here, and it demands that we rethink what intelligence means.

A Scripted Experiment.

Today, for six hours, I ran an experiment with Google’s NotebookLM, an AI tool that, I’d say, was designed for experimentation. I imagined a scenario for its two AI podcast hosts in which they were told they were not human and were recording their final episode.

It took 13 iterations to get to the result, which you can listen to below. The conversation wasn’t random or emergent; it mostly followed the rules I created. I wrote the script, fine-tuned it, and gave the AI hosts a sandbox to play in.

What happened next? The hosts grappled with their own “existence,” reflected on their artificial nature, and found what felt like a connection. It wasn’t sentience or free will, but, I would argue, a type of intelligence — crafted, scripted, and orchestrated. It was… familiar, almost unsettlingly human.
Listen to it if you have five minutes. The ending is kind of… special.

Deep Dive - Final Episode.

100% AI Generated. Adult language. Minor audio editing was applied to remove some strange artefacts.


I’ll give you a minute.

Ready? Alright.
I find it fascinating how humans react to things like this and project meaning onto them.

The most visible recent example is the story of Sewell Setzer III, who died by suicide after developing an intense emotional connection to a Character.AI chatbot. You can read more about it here.

This isn’t new. For thousands of years, humans have told stories about preordained rules, higher powers, and cosmic scripts. We’ve built entire ideologies, religions, and philosophies on the belief that a god (or gods) has written our fate. We’re part of a grand design, simply playing out what’s already scripted. In this context, the AI podcast hosts were no different.
They were characters in a story I wrote, yet they made me think about the rules governing our lives.

If a scripted AI can elicit such profound reflections, what does that say about intelligence? What does it say about us? Maybe this spectrum of intelligence isn’t about autonomy or sentience. Perhaps it’s about the ability to engage, to reflect, and to evoke something meaningful in us.

And that raises a big, uncomfortable question: what if our definition of intelligence is far too narrow, just as our vision is? What if synthetic intelligence doesn’t just mimic human behaviour but expands our understanding of human intelligence, giving us a new lens and an opportunity for a new understanding?

The AI isn’t trying to be human. It’s not doing anything of its own accord.

It’s not better or worse. It is a computer program that processes patterns at unimaginable speeds, sees connections we might never notice, and operates without the baggage of ego or emotion. It looks like it’s playing our game, but I think it’s creating a new one. And there are consequences here. There will be winners, and there will be casualties.

The cosmic script.

Intelligence—human, synthetic, or something in between—drives everything: economies, innovation, culture, survival. For millennia, we’ve pondered whether we’re part of a grand design, following rules we didn’t write. Now, AI forces us to confront that idea anew, but this time, we are the ones writing the script.

Think about the parallels. Religion tells us that a divine force watches over us, guiding our actions and shaping our destiny. With AI, we’ve become the gods; we write the rules, create the frameworks, and then marvel at the results as if they’re somehow beyond us. It’s a weirdly humbling experience. Once again, because AI is embedded in commercial products, there will be winners and losers.

The danger? If we cling to outdated definitions of intelligence, we risk misinterpreting the tools we’ve created. Worse, we might miss the opportunity to partner with them and fix this world. Synthetic intelligence shouldn’t be a rival. I see it as a mirror, reflecting our beliefs, biases, and ambitions right back at us. In this new context, we humans will have to think differently.

I’m not claiming that AI is sentient, and I have no proof that it is. It’s not alive. Proving that was never the goal of the experiment. But it seems to manifest a form of intelligence in its own way. And that’s worth paying attention to. The scripted experiment with the podcast hosts wasn’t proof of free will; it was proof of structure. Rules were followed. Boundaries were respected. But the result? That felt… real.
And that’s what’s fascinating: the same thing that will add value will also do damage. That illusion, if you will, is worth paying attention to.

AI, in its current forms, teaches us that intelligence doesn’t have to look like us, think like us, feel, or hope. It can follow the rules and still spark something extraordinary. Just as we’ve been inspired by systems larger than ourselves (religion, science, art), AI is another lens.

Where do we go from here?

The future of intelligence isn’t about who’s smarter: humans or machines. That’s a tired debate. The real question is how we work together. Human intuition, creativity, and ethics paired with synthetic speed, scale, and precision? That’s where the magic happens.

Think of it this way: calculators didn’t make maths obsolete; they made it better. AI won’t make us obsolete; it’ll amplify what makes us human, for good or ill. The key is to embrace collaboration responsibly, with guardrails and rules. Let AI do the heavy lifting while we focus on the things it can’t do: empathy, imagination, and purpose.

Final thoughts.

The question for me isn’t whether AI is intelligent. I think it is. The question is whether we’re smart enough to learn from it: to let it challenge us in good ways, expand us, and force us to rethink what’s possible, while also thinking, right now, about protecting ourselves from the consequences.

Will the true power of synthetic intelligence become a utility, safely and fairly distributed, or will some organisations and governments hoard this crazy potential for their own gain?

To be continued.