The Truth Paradox
Why We Fear AI More Than Our Own Imperfections
Ah, the sweet irony of our collective distrust in AI. Every other day, someone passionately tells me, "You just can't trust ChatGPT!" And the punchline? It all needs to be "double-checked": by us, the infallible humans who, naturally, never make mistakes.
But here's the kicker (and yes, I know you hate that phrase!): we cling to the idea that humans are the ultimate source of truth. Have we forgotten how biased, mistaken, and utterly clueless we can be at times? The history of human error is rich and well-documented, yet when it comes to AI, we proudly believe only humans can validate information, as if our judgment were the epitome of perfection.
The Verification Paradox
So, I asked her, "Where exactly do I check what ChatGPT tells me? Should I run to an encyclopedia? Google?" And then it hit me: the irony. ChatGPT doesn't rely on just one source; it synthesizes knowledge from thousands, drawing from the very same sources we've depended on for years. But, of course, I should trust Google more, right? Except, who checks Google? And who checks those who check Google? You see where I'm going with this: an endless loop of self-verification.
Let's face it: we're doubting AI because it's new, not because it's any less reliable. In fact, it might be more reliable: faster, more comprehensive. But no, we're humans. We need to "double-check," as if that's been foolproof all along.
The Digital Native Delusion
Then we jumped to education. "Teachers often complain that so-called digital natives aren't that 'digital' after all," she said. "They say the kids don't know how to use the tools."
But what teachers really mean is this: youngsters are using those tools differently. They're not playing by the old rules. They're creating a new universe, not just operating within the confines of the old one. The kids aren't confused about digital tools; they've outgrown them and moved on to something entirely new.
This difference between teachers and students is a perfect metaphor for the digital strategy debate: are we just slapping a digital layer on an old, outdated cake, or are we baking a brand-new digital cake? Spoiler alert: it's the latter that wins.
The Fear of Something Better
And so, back to our fear of AI. The deeper issue? We're scared AI might do some things better than we can. Not just faster, but better. And that's where the distrust comes from: not from any real evidence that AI is faulty, but from the uncomfortable truth that it might outshine us in certain areas.
Think about it: teachers teach, we learn, and we hold onto that information until we forget it, twist it, or lose it in the recesses of our unreliable memories. But AI? It remembers. Perfectly. Every time. And that terrifies us, because it highlights just how imperfect we really are.
What Are We Really Afraid Of?
So what's the real fear here? It's that these systems can do things we've always trusted humans to do, and maybe even do them better. We've built AI to help us, but now we're suspicious because it seems a little too good at its job. Instead of curiosity, we've defaulted to skepticism.
Ironically, we trust textbooks, teachers, and Google with ease, but when AI does the same thing, we panic. It's as if we don't want to admit that something we created might outpace us.
Fear in Disguise
In the end, this isn't about trust. It's about fear. Fear that something new and unfamiliar might challenge our long-held assumptions. But that's the beauty of progress: it's always about building things that do what we can't, or won't. AI is just another tool in that long line of innovations: not a threat, but an opportunity.
So instead of fear, why not embrace AI with curiosity? Let's stop worrying about whether we should double-check everything and start exploring what AI can truly offer us, flaws and all.
Harari and the Convenient Truths of AI
Let's bring in Yuval Noah Harari to peel back yet another layer of this paradox. In Sapiens, Harari brilliantly points out that humans have always created shared myths, whether through religion, economics, or social norms, that allow us to collaborate on a massive scale. These "truths" become standardized not because they are the ultimate reality, but because we need them to function as a society. In fact, they're more like convenient truths: a collective agreement we rely on to keep the wheels turning.
So, when you ask, "Where did teachers get their information?", it's not surprising that they pulled it from encyclopedias, textbooks, or other "approved" sources. We trust these sources not because they are flawless, but because they've been validated by a system built on human consensus. They've passed through the social filter of acceptance, marked with the stamp of "approved by the masses."
But is that really the truth? Or is it just a truth: a comfortable one, passed down from generation to generation, like a family recipe no one dares question?
Harari would argue the latter. In his view, much of what we accept as fact is simply a collective agreement: a useful narrative we've created to help us navigate life. It's a myth, but a functional one. So, when we question the reliability of AI, it's not really about whether it's factually correct; it's about our discomfort with a new kind of storytelling, one that doesn't rely on centuries of human tradition and feels like it's skipping the line.
The Same Cake, Just Faster
This brings us back to ChatGPT. It isn't inventing anything new. It's simply synthesizing the same sources of information we've always trusted. It does so faster, on a grander scale, and without that familiar human touch. And that, my friends, is where the unease begins to creep in. People aren't wary because it's wrong; they're wary because it's not human. It's not part of the long, handshaking, back-patting chain of knowledge we've been passing down for generations.
Harari's insights strike at the heart of this: we trust human-made institutions, schools, encyclopedias, textbooks, because they've been woven into the shared myths of our society. They feel safe, tried, and true, despite their imperfections. AI, however, is the new kid on the block, strutting in without paying its dues. And while it pulls from the same sources, it challenges the old ways of distributing knowledge.
If anything, what makes us uncomfortable about AI isn't the fear that it's inaccurate. It's that it's exposing how arbitrary our old systems are. All these sources, whether an encyclopedia, a textbook, or even a revered teacher, are still human creations. And just like ChatGPT, they're built on layers of human interpretation, opinion, and consensus.
So Why Trust Anything?
At this point, the real question isn't, "Can you trust ChatGPT?" It's, "Why do you trust anything at all?" Whether it's an encyclopedia, a textbook, or a human teacher, the truth is always layered in the context of who's telling the story. ChatGPT is just pulling from the same collective pool of knowledge we've always used; it's simply doing it faster and more efficiently.
And here's the uncomfortable part: if AI can do what humans do, but better, then what does that say about the myths we've been relying on all along? Perhaps the real threat isn't that AI will get it wrong, but that it will get it right, more efficiently, and leave us wondering why we ever trusted the slower, messier version in the first place.
So, let's stop pretending we fear AI because it's inaccurate. What we really fear is that AI is simply holding a mirror up to the convenient truths we've been living by, and that reflection is just a bit too clear for comfort.