The Paradox of AI: How the Most Advanced Technology Ever Built Might Make Us More Human

Will AI make us more or less human?

The more advanced artificial intelligence becomes, the more it exposes what makes us uniquely human. As machines take over efficiency, logic, and execution, qualities like empathy, trust, creativity, and real human connection become even more valuable. Research shows that as AI grows, people increasingly seek authentic relationships and meaning—highlighting a paradox where technology amplifies, rather than replaces, our humanity. If you want to understand the future of work, relationships, and identity, start by seeing how AI is forcing us to become more human—not less.

There is a paradox quietly unfolding at the heart of our relationship with artificial intelligence, and it deserves more attention than it receives amidst the noise of technological anxiety.

Every previous wave of technology made us, in some sense, less human. The printing press distanced us from oral tradition. The industrial revolution pulled us from craft and natural rhythm. Social media gave us connection without presence, reach without intimacy. Each innovation expanded capability while narrowing something harder to name — the raw, inefficient, irreplaceable quality of being in a room with another person who is fully there.

But AI, the most powerful technology we have ever produced, may be doing something different. It may, paradoxically, be pushing us back toward each other.

This is not wishful thinking. It is an observation grounded in neuroscience, evolutionary biology, and the early signals of a cultural shift that is already underway. Dr. David Eagleman, Stanford neuroscientist and one of the world’s foremost experts on how the brain adapts to its environment, touched on this idea in his conversation with Steven Bartlett — and the insight is worth sitting with carefully.

What Technology Has Always Taken From Us

To understand what AI might restore, it helps to be honest about what previous technologies have cost us.

The smartphone did not simply give us information on demand. It gave us an escape hatch from every moment of boredom, discomfort, or social awkwardness — which turned out to be a significant cost. Boredom, we now understand, is neurologically useful. It is in the quiet, unoccupied moments that the brain consolidates experience, generates creative associations, and does the slow background work of making sense of a life. Social awkwardness is uncomfortable precisely because it demands the kind of effortful social calibration that keeps our relational brains sharp.

Social media promised community and delivered something closer to performance. The architecture of platforms built around likes, shares, and follower counts rewired the ancient circuitry of social belonging toward something shallower and more compulsive. We got more connection and less intimacy. More audience and less friendship.

AI, having reached its current level of capability, is doing something different in the cultural landscape. It has become so good at generating text, images, music, and analysis that it is forcing a fundamental question: if machines can produce all of this, what is left that only a human can do?

The answer to that question is, it turns out, everything that actually matters most.

The Renaissance of Presence

When Napster arrived in 1999 and made music freely available online, the music industry panicked. If you could download any song for free, why would anyone pay for it? More urgently: if recorded music became essentially free, what happened to the musicians?

What actually happened was unexpected. Concert attendance soared. Live music became more valuable, not less, precisely because it could not be digitised. People were not, it turned out, paying primarily for the recorded product. They were paying for the experience of being in a room where something real was happening, performed by a real human being who might make a mistake, who was visibly moved by the music, who was present in a way no recording could replicate.

Taylor Swift’s Eras Tour did not struggle to sell tickets. It became a cultural event of historic proportions — hundreds of thousands of people paying significant money to stand in a field together and watch another human being perform. Not because they could not access the music any other way, but because the live human experience of it was irreplaceable.

Eagleman has observed the same pattern in his own professional life. When AI first became sophisticated enough to generate convincing video and audio avatars, a friend suggested he could create a digital version of himself to deliver lectures and keynotes remotely — same voice, same content, none of the travel. His instinctive response: nobody is going to want that.

And he was right. In the years since AI became widely capable, demand for his in-person talks has increased. Organisations are flying people across the country specifically to have them stand in a room and speak to other human beings. Not because a digital version would be less informative. But because the presence of a real human — with all its inefficiency, its physical reality, its irreducible aliveness — is something we cannot replicate and, it turns out, something we are unwilling to forgo.

What Happens When Machines Can Do the Work

There is a thought experiment worth sitting with: what would a nurse, freed from administrative paperwork, actually do with that recovered time?

Currently, a significant portion of clinical working hours in healthcare is consumed by documentation — charting, coding, filing, the bureaucratic infrastructure of modern medicine. It is work that must be done, but work that pulls skilled human professionals away from the thing they trained for and, often, the thing they entered the profession to do: care for another person.

AI is already beginning to take over significant portions of this administrative load. And the question of what happens when it does is revealing. What is left, when the paperwork is handled, is the irreducibly human work: being present with someone who is frightened, holding a hand during a difficult conversation, noticing the thing a patient said three minutes ago that might matter, sitting with someone in pain without needing to immediately fix it.

This is not a minor residual category. This is, in many ways, what medicine is. And it turns out to be precisely the kind of work that no AI will be able to do — not because it lacks the information, but because it lacks the embodied, evolutionary weight of one human being genuinely attending to another.

Eagleman and Bartlett both arrive at the same conclusion from different angles: perhaps the deepest promise of AI is not that it will replace human work, but that it will strip away the layers of bureaucratic friction that have buried the most genuinely human parts of our working lives, and give us back the time to do what only we can do.

The Evolutionary Pull Toward Other People

There is a neuroscientific reason why no amount of technological sophistication is likely to permanently displace our need for real human connection — and it runs deeper than preference or habit.

Millions of years of evolution have wired the human brain specifically around the challenge of other people. Nothing, as Eagleman puts it, is as cognitively demanding as another human being. You cannot predict exactly what they will say, how they will feel, what they need, or what they are thinking. Navigating that uncertainty in real time — reading micro-expressions, tracking emotional register, adjusting your own behaviour in response to invisible cues — is among the most complex things the brain does.

This complexity is also the reason social connection is so important to brain health. Maintaining rich, varied, and sometimes difficult human relationships keeps the brain working in ways that no digital equivalent can replicate. The brain built for social life does not simply stop needing social life because a more convenient alternative appears.

Eagleman raises the emerging phenomenon of AI relationships — by some estimates, a billion people currently have some form of ongoing interaction with an AI companion. For those who grew up before this existed, the instinctive response is discomfort. But Eagleman’s view is more measured. Used thoughtfully, an AI companion might function as a kind of relational sandbox — a low-stakes space to practise vulnerability, explore emotional patterns, work through anxieties about connection before bringing them into the much more consequential arena of actual human relationships.

This is not a replacement. It is, potentially, a rehearsal room. And the goal of a rehearsal room is always to prepare you for the real performance — not to substitute for it. Because the performance, the real human encounter with all its risk and reciprocity and aliveness, is the only thing that actually satisfies the evolutionary hunger we carry.

The Effort Phenomenon and the Return of Authenticity

Eagleman describes a principle he calls the effort phenomenon: our deep, consistent tendency as human beings to value things more when they appear to have required genuine effort. We pay more for handmade objects. We are more moved by performances we know were years in the making. We value the imperfect and effortful over the seamless and automated, even when we cannot objectively explain why.

This principle is now playing out in real time in our relationship with AI-generated content. As AI becomes capable of producing plausible-sounding text, images, and ideas at scale, readers, viewers, and audiences are becoming increasingly attuned to the absence of human effort in what they consume. The AI-generated LinkedIn post does not irritate us because it contains bad information. It irritates us because we sense — correctly — that no one actually thought it through.

What this creates, at scale, is a revaluation of genuine human expression. The more AI floods the zone with competent, forgettable, effortless content, the more the things that are recognisably human — imperfect, specific, hard-won, searching — become precious. Authenticity, which has been used as a buzzword for so long it lost much of its meaning, may be about to become genuinely scarce, and therefore genuinely valuable.

The writer who works through a difficult idea over many drafts. The speaker who is visibly moved by what they are saying. The creator who brings something to their work that no training data could have produced. These people are not competing with AI. They are doing something AI fundamentally cannot do: offering the irreducible evidence of a human being genuinely trying.

AI as a Mirror Turned Outward

There is one final dimension to this story that deserves attention, and it is perhaps the most important. AI, Eagleman suggests, has the potential to help us understand each other better — not by simulating human connection, but by challenging the lazy tribalism that makes genuine human connection harder than it needs to be.

We are, evolutionarily speaking, tribal creatures. We are wired to form in-groups and out-groups quickly, and to stop extending the full complexity of our humanity to those we place on the other side of those lines. When we dismiss another person as a member of the out-group, something in the brain’s social circuitry actually powers down — we stop modelling them as a full human being with an interior life, and begin treating them as an obstacle or an object.

AI, used wisely, has the capacity to interrupt this. To introduce us to the human being behind the position we have dismissed. To complicate our certainties. To ask us: what if I told you about this person? What if you heard their story?

This is not naive. The same technology can be used to deepen polarisation rather than reduce it — and there are powerful economic incentives pushing in exactly that direction. But the possibility exists. And Eagleman, who studies how brains model other brains, believes the potential is real.

Perhaps the deepest paradox of AI is this: a technology built by human beings to replicate and extend human intelligence may ultimately be most valuable not for what it can do instead of us, but for what it reveals about what we are.

We are creatures built for each other. We are at our most capable, most alive, and most fully ourselves when we are genuinely present with another person. We have always known this. We may simply have needed a machine intelligent enough to remind us.


FAQ

Q: How might AI make us more human rather than less? By handling administrative, repetitive, and low-meaning tasks, AI frees up human time and energy for the work only people can do: genuine presence, care, creativity, and connection. It also creates cultural pressure toward authenticity by making AI-generated content increasingly recognisable and valued less than genuinely human expression.

Q: Will AI replace live human experiences like concerts and live events? Evidence suggests the opposite. When recorded music became freely available, live concert attendance increased. As AI becomes capable of generating virtual content, in-person human experiences — performances, talks, gatherings — appear to be growing more valuable, not less, because they offer something irreplaceable.

Q: Can AI help improve human relationships? Potentially, as a relational sandbox. AI companions may help people practise emotional skills, process anxieties, and develop relational confidence in lower-stakes environments. But they cannot replace the evolutionary necessity of genuine human connection, which the brain requires for its own long-term health.

Q: What is the effort phenomenon and why does it matter in the age of AI? The effort phenomenon refers to humans’ tendency to value things more when they appear to have required genuine effort. As AI floods culture with effortless content, things that carry the recognisable mark of human struggle and genuine engagement become increasingly scarce and valuable.

Q: What parts of human work can AI never replace? Genuine presence, embodied care, emotional attunement, and the irreducible quality of one human being fully attending to another. These are not peripheral features of important work — in medicine, education, leadership, and relationships, they are often the core of the work itself.


Based on insights from Dr. David Eagleman — Stanford neuroscientist, bestselling author of *Livewired: The Inside Story of the Ever-Changing Brain*, co-founder of Neosensory and BrainCheck, and director of the Center for Science and Law — as shared in his conversation with Steven Bartlett on *The Diary of a CEO*.