The Truth Paradox

Why We Fear AI More Than Our Own Imperfections

Ah, the sweet irony of our collective distrust in AI. Every other day, someone passionately tells me, “You just can’t trust ChatGPT!” And the punchline? It all needs to be “double-checked”—by us, the infallible humans who, naturally, never make mistakes.

But here’s the kicker (and yes, I know you hate that phrase!): we cling to the idea that humans are the ultimate source of truth. Have we forgotten how biased, mistaken, and utterly clueless we can be at times? The history of human error is rich and well-documented, yet when it comes to AI, we proudly believe only humans can validate information—as if our judgment were the epitome of perfection.

The Verification Paradox

So, I asked her, “Where exactly do I check what ChatGPT tells me? Should I run to an encyclopedia? Google?” And then it hit me—the irony. ChatGPT doesn’t rely on just one source; it synthesizes knowledge from thousands, drawing from the very same sources we've depended on for years. But, of course, I should trust Google more, right? Except, who checks Google? And who checks those who check Google? You see where I’m going with this—an endless loop of self-verification.

Let’s face it: we’re just doubting AI because it’s new, not because it’s any less reliable. In fact, it might be more reliable—faster, more comprehensive. But no, we’re humans. We need to “double-check,” as if that’s been foolproof all along.

The Digital Native Delusion

Then we jumped to education. “Teachers often complain that so-called digital natives aren’t that ‘digital’ after all,” she said. “They say the kids don’t know how to use the tools.”

But what teachers really mean is this: youngsters are using those tools differently. They’re not playing by the old rules. They’re creating a new universe, not just operating within the confines of the old one. The kids aren’t confused about digital tools; they’ve outgrown them and moved on to something entirely new.

This difference between teachers and students is a perfect metaphor for the digital strategy debate: Are we just slapping a digital layer on an old, outdated cake, or are we creating a brand-new digital cake? Spoiler alert: it’s the latter that wins.

The Fear of Something Better

And so, back to our fear of AI. The deeper issue? We’re scared AI might do some things better than we can. Not just faster—but better. And that’s where the distrust comes from, not from any real evidence that AI is faulty, but from the uncomfortable truth that it might outshine us in certain areas.

Think about it: teachers teach, we learn, and we hold onto that information—until we forget it, twist it, or lose it in the recesses of our unreliable memories. But AI? It remembers. Perfectly. Every time. And that terrifies us because it highlights just how imperfect we really are.

What Are We Really Afraid Of?

So what’s the real fear here? It’s that these systems can do things we’ve always trusted humans to do, and maybe even do them better. We’ve built AI to help us, but now we’re suspicious because it seems a little too good at its job. Instead of curiosity, we’ve defaulted to skepticism.

Ironically, we trust textbooks, teachers, and Google with ease, but when AI does the same thing, we panic. It’s like we don’t want to admit that something we created might outpace us.

Fear in Disguise

In the end, this isn’t about trust. It’s about fear. Fear that something new and unfamiliar might challenge our long-held assumptions. But that’s the beauty of progress: it’s always about building things that do what we can’t—or won’t. AI is just another tool in that long line of innovations, not a threat, but an opportunity.

So instead of fear, why not embrace AI with curiosity? Let’s stop worrying about whether we should double-check everything and start exploring what AI can truly offer us—flaws and all.

Harari and the Convenient Truths of AI

Let's bring in Yuval Noah Harari to peel back yet another layer of this paradox. Harari, in Sapiens, brilliantly points out that humans have always created shared myths—whether through religion, economics, or social norms—that allow us to collaborate on a massive scale. These “truths” become standardized not because they are the ultimate reality, but because we need them to function as a society. In fact, they're more like convenient truths—a collective agreement we rely on to keep the wheels turning.

So, when you ask, “Where did teachers get their information?” it's not surprising that they pulled it from encyclopedias, textbooks, or other “approved” sources. We trust these sources, not because they are flawless, but because they've been validated by a system built on human consensus. They’ve gone through the social filter of acceptance, marked with the stamp of "approved by the masses."

But is that really the truth? Or is it just a truth—a comfortable one, passed down from generation to generation, like a family recipe no one dares question?

Harari would argue the latter. In his view, much of what we accept as fact is simply a collective agreement—a useful narrative we've created to help us navigate life. It’s a myth, but a functional one. So, when we question the reliability of AI, it’s not really about whether it’s factually correct; it’s about our discomfort with a new kind of storytelling. One that doesn't rely on centuries of human tradition and feels like it's skipping the line.

The Same Cake, Just Faster

This brings us back to ChatGPT. It isn’t inventing anything new. It’s simply synthesizing the same sources of information we’ve always trusted. It does it faster, on a grander scale, and without that familiar human touch. And that, my friends, is where the unease begins to creep in. People aren’t wary because it’s wrong; they’re wary because it’s not human. It’s not part of the long, handshaking, back-patting chain of knowledge we’ve been passing down for generations.

Harari’s insights strike at the heart of this: we trust human-made institutions—schools, encyclopedias, textbooks—because they’ve been woven into the shared myths of our society. They feel safe, tried, and true, despite their imperfections. AI, however, is the new kid on the block, strutting in without paying its dues. And while it pulls from the same sources, it challenges the old ways of distributing knowledge.

If anything, what makes us uncomfortable about AI isn’t the fear that it’s inaccurate. It’s that it’s exposing how arbitrary our old systems are. All these sources—whether it’s an encyclopedia, a textbook, or even a revered teacher—are still human creations. And just like ChatGPT, they’re built on layers of human interpretation, opinion, and consensus.

So Why Trust Anything?

At this point, the real question isn’t, “Can you trust ChatGPT?” It’s, “Why do you trust anything at all?” Whether it's an encyclopedia, a textbook, or a human teacher, the truth is always layered in the context of who’s telling the story. ChatGPT is just pulling from the same collective pool of knowledge we’ve always used—it’s just doing it faster and more efficiently.

And here’s the uncomfortable part: if AI can do what humans do, but better, then what does that say about the myths we’ve been relying on all along? Perhaps the real threat isn't that AI will get it wrong—but that it will get it right, more efficiently, and leave us wondering why we ever trusted the slower, messier version in the first place.

So, let’s stop pretending we fear AI because it's inaccurate. What we really fear is that AI is simply holding a mirror up to the convenient truths we've been living by—and that reflection is just a bit too clear for comfort.


Peter ⛵ Delva

CEO Sail & Lead - DE reis van je leven - Moving beyond inspiration


In Google, euhm delete that... In ChatGPT we trust! 😁 And it used to be something else, but since you name yourself like that, I can't write 'delete that'. 🤣

Ilse De Vos

building bridges between academia and society


Oh waw! Two textbook examples of reductio ad absurdum for the price of one! Thx, Rik Vera. Been a while since I've spotted one in the wild.

Isabelle Borremans

Staff member at VAIA, Flanders AI Academy 📡 Communication 🤖 Artificial Intelligence 👨🎓 Lifelong Learning 🚀 Marketing a digital, technological future


May I ask to share some of your sweet followers and tag me and VAIA - Flanders AI Academy in this post? Since it's me you're claiming makes “a circular reasoning in the history of humanity.” Happy to have been the spark for a “groundbreaking discussion” 🔥. However, I didn't say “the only solution is to verify everything it says by using the exact same sources ChatGPT pulls from.” I said you cannot trust ChatGPT because it is a statistical model that generates an answer that is PROBABLY right, predicting words based on (their distribution in) sources. A calculator is rule-based, using mathematical logic to generate an exact answer. A “probable” answer as predicted by ChatGPT should be checked with your own knowledge and logical reasoning since humans are capable of both prediction and LOGIC, and ChatGPT is not. What's your opinion: Ayla Rigouts Terryn Lisa Hilte Ilse De Vos Jefrey Lijffijt?

I for one trust my calculator even when it dates back to my school days.
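The rule-based versus statistical distinction drawn in the comment above can be sketched in a toy Python example. The word-probability table below is invented purely for illustration; a real language model learns a distribution over tens of thousands of tokens from its training data, but the contrast is the same: deterministic rules always give the exact answer, while sampling gives an answer that is only probably right.

```python
import random

# Rule-based: a calculator applies deterministic arithmetic rules,
# so the same input always yields the same exact answer.
def calculator(a, b):
    return a + b

# Statistical: a language model picks the next word by sampling from
# a learned probability distribution over possible continuations.
# (This toy distribution is invented for illustration only.)
next_word_probs = {"4": 0.90, "5": 0.06, "four": 0.04}

def toy_language_model(prompt):
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    # Sample one continuation; usually "4", but occasionally not --
    # which is exactly why a "probable" answer still needs checking.
    return random.choices(words, weights=weights, k=1)[0]

exact = calculator(2, 2)               # always 4, by definition
probable = toy_language_model("2 + 2 =")  # probably "4"
```

Nothing in this sketch is how any particular chatbot is implemented; it only illustrates why the comment trusts a decades-old calculator more than a probabilistic text generator.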


Rik Vera To me, Rik, it boils down to learning to be ‘tolerant for ambiguity’ in order to ‘drive out fear’. W. Edwards Deming asked us to do that some forty years ago and we didn’t succeed (yet). And you’re right: we become tolerant by being curious and asking ‘humble’ questions. We should be aware that our assumptions are... assumptions. It’s not the ‘truth’, it’s merely our ‘truth’. And our truth depends on our ‘culture’, the paradigm we cling to, and by definition our truth is not THE truth; not even close, BS would sing.
