No “I” in AI: Why LLMs are just “Frankenstein’s Data Monsters” - and what this means for our Collaboration with AI
What does a “Frankenstein’s Data Monster” actually look like? Well…


TL;DR // Key Insights:

  • 🚀🤖🙅♂️🧠👀, 🤖👩🎨👩🏫👩🔬 - 📊🗑🚫👍🚫🧠 - Despite significant recent advancements, AI still lacks genuine understanding, making it a mimic of rather than a true challenge to human intelligence - largely because the over-reliance on large amounts of data, without regard for relevance, keeps current AI approaches from achieving true intelligence.
  • 👎🤖🙅♂️👩💼👩🚒👩⚕️, 👍🤝👩🎨🤔 - 🚨👩🎓👩💻👩🔧👩🔬🤖 - Because of these shortcomings, AI should not be used to replace but to complement human capabilities, especially to support core human strengths like creativity and complex decision-making. Nevertheless, we need to prepare for more advanced AI through workforce up-skilling and a deeper integration of human expertise with machine efficiency.


No doubt, 2023 has been a year of revolutionary change in the area of artificial intelligence. But despite all the hype about this transformative technology, most people still seem rather ignorant about what intelligence actually means - an understanding that is obviously necessary to answer the question of whether there even is any “I” in AI.

Julian Stodd, Dr Sae Schatz and Geoff Stead rightly note in their recently published book “Engines of Engagement” that AI is like the clueless person in John Searle’s famous thought experiment, the ‘Chinese Room’.

In this thought experiment, Searle imagines himself sitting in a closed room with everything he needs to translate between Chinese and English, so that he can answer any request he receives through a slot in the door. His output would appear to a Chinese speaker outside as if he actually understood Chinese - in effect manually emulating an AI chatbot, which ultimately also acts like a computerised librarian. But just as a chatbot can pass the Turing test without understanding the meaning of anything it is talking about, Searle would be perceived by the Chinese person conversing with him through the door as understanding Chinese, without actually understanding anything at all. Although it could be argued that, because Searle himself is in fact intelligent, he might over time actually learn Chinese by picking up the meaning of all the Chinese words he gets to translate…
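To make the analogy concrete, here is a minimal sketch of such a ‘room’ in Python: replies are produced by pure rule lookup, with no representation of meaning anywhere in the system (the tiny rule book and phrases are of course made up for illustration, not taken from Searle or from any real chatbot).

```python
# A minimal "Chinese Room" sketch: replies are produced by mechanical rule
# lookup alone - nothing in the system represents what the symbols mean.
# The rule book below is a made-up illustration, not a real chatbot.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thank you."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather today?" -> "The weather is nice."
}

def room_operator(incoming: str) -> str:
    """Slide a reply back through the slot by blindly applying the rules."""
    # Unknown input gets a canned, rule-prescribed fallback - still no understanding.
    return RULE_BOOK.get(incoming, "请再说一遍。")   # "Please say that again."

if __name__ == "__main__":
    # From outside the door this looks like comprehension; inside there is none.
    print(room_operator("你好吗？"))
```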

In effect, this thought experiment shows us that by talking to AI systems we are really just projecting our own understanding onto them - and are hence effectively only talking to ourselves. Although such self-talk can improve cognitive performance, it does not answer the question of why we are worshipping these modern ‘stochastic parrots’ so much, and whether the current hype about AI is actually justified. Despite the many ways in which current AI still does not live up to the cognitive standards of human intelligence, LLMs do seem to have cracked one of the key elements of what intellectually defines us as humans: language. Language, as Stodd, Schatz and Stead describe in their book “Engines of Engagement”,

“forms the conduit between our inner and outer worlds, and it’s largely through lenses built of language that we view and interpret the reality around us.”  … it “gives permanence and expression to our intellectual cores” … “It permeates our existence.”

Does that mean that, by using attention-based transformer systems, LLMs have actually managed to statistically emulate our intellectual existence? It would, if our existence could be reduced to the data upon which these models are built. And quite a few people might actually subscribe to such a worldview - especially in today’s “data-centric” society, which tries to reduce everything to measurable quantities and effectively discards anything that can’t be measured as irrelevant.
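As an aside, for readers wondering what ‘attention-based’ concretely means: at the heart of every transformer sits the scaled dot-product attention operation sketched below, in which each token’s representation is rebuilt as a weighted mix of all the others, the weights being nothing more than normalised similarity scores (the toy dimensions and random inputs are illustrative only).

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: weight the values V by the normalised
    similarity between queries Q and keys K - pure statistics, no semantics."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)            # softmax over each row
    return weights @ V                                        # weighted mix of values

# Toy example: 4 "tokens" with 8-dimensional embeddings (random, for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(X, X, X).shape)            # -> (4, 8)
```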

But Goldman’s Paul Burchard recently reminded us that this over-reliance on data is actually a faulty concept, because “data has no value or meaning in the abstract, without an understanding of the context or processes by which it was generated.” Of course this also means that, independent of the size of the data set an LLM is trained on, data alone will never render such a system “intelligent” in the sense that it spontaneously develops an actual comprehension of the world. An important reason for this is that no data set, however large it may be, will ever provide a “complete and detailed coverage of the entire space of possibilities” of the realities behind that data. In other words, data alone is never enough to predict all possible outcomes in a complex system - which is exactly why traditional methods of “machine intelligence” have always stopped working once the underlying complexity exceeded a certain threshold (think Deep Blue vs. AlphaGo). Formally, this insight has already been captured in Gödel’s incompleteness theorems, which mathematically prove that no sufficiently expressive formal system can be both complete and consistent at the same time.
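A quick back-of-the-envelope calculation shows just how hopeless ‘complete coverage’ is; the vocabulary size, sequence length and corpus size below are rough, assumed orders of magnitude, not exact figures:

```python
# Rough, assumed orders of magnitude - the point is the gap, not the exact numbers.
vocab_size = 50_000          # typical size of an LLM tokenizer vocabulary
sequence_length = 100        # a short paragraph, measured in tokens
training_tokens = 10**13     # roughly the scale of today's largest training corpora

possible_sequences = vocab_size ** sequence_length
print(f"possible 100-token sequences: ~10^{len(str(possible_sequences)) - 1}")   # ~10^469
print(f"tokens in the training data:  ~10^{len(str(training_tokens)) - 1}")      # ~10^13
# However large the corpus, it samples a vanishing fraction of the space of
# possible texts - everything else must be interpolated statistically.
```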

Given these obvious logical flaws of the current LLM-based hunt for AI/AGI, one might rightfully ask whether we are actually seeing machines getting smarter, or rather ourselves getting dumber. Or, to put it in the more eloquent words of Jaron Lanier:

“You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?”

So maybe what we are seeing right now, as part of this global AI frenzy, is just the latest version of our age-old dream to explain ourselves with the best metaphor that we as tool-building animals can think of: the most complex, data-driven tool we are able to build in our own image - a data-processing computer. Since we don’t understand, and hence also can’t think into being, our own conscious thinking process, we simply assume that the magic must somehow still live in the data that we humans produce through thinking: our spoken and written language. Combined with the currently dominant forms of ‘religion’, namely a cargo-cult-like Scientism and an almost naive Dataism, this has effectively led us down a path where we used a classic quantitative “more is better” approach and simply crawled the whole internet so we could analyse it for all the hidden patterns in its data - in effect hoping that the secrets of our own consciousness would surely have to reveal themselves once we had dug ourselves through enough of its remains. So, just like a modern-day genie in a bottle that is summoned to come out, we have built LLMs to summon our collective ghost in today’s biggest data machine, the internet.

But since we built these LLMs without any actual understanding of our own intelligence, the summoned “AI” can unfortunately only reveal itself as a data-triggered remnant of agency without intelligence: much like a dead frog’s leg that keeps twitching when jolted by electricity, an AI chatbot keeps talking when prompted with linguistic input that triggers some of its pre-trained data connections. But it neither understands a word it says, nor has any real understanding of what we are trying to tell this poorly designed imitation of our own Broca’s area. Which is why current AI applications still resemble stochastic parrots rather than any awe-inspiring ASI (artificial super-intelligence) or AGI (artificial general intelligence).
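To make that ‘twitching’ concrete, here is a minimal sketch of the underlying mechanism, autoregressive next-word prediction, using a tiny hand-made probability table in place of a trained network (the table and the words in it are purely illustrative):

```python
import random

# A toy autoregressive 'language model': given the previous word, sample the next
# one from a hand-made probability table. Real LLMs do the same thing at vastly
# larger scale - predict, emit, repeat - without understanding a single word.
NEXT_WORD_PROBS = {
    "<start>": [("the", 0.6), ("a", 0.4)],
    "the":     [("parrot", 0.5), ("robot", 0.5)],
    "a":       [("parrot", 0.5), ("robot", 0.5)],
    "parrot":  [("talks", 1.0)],
    "robot":   [("talks", 1.0)],
    "talks":   [("<end>", 1.0)],
}

def generate(max_tokens: int = 10) -> str:
    word, output = "<start>", []
    for _ in range(max_tokens):
        candidates, weights = zip(*NEXT_WORD_PROBS[word])
        word = random.choices(candidates, weights=weights)[0]
        if word == "<end>":
            break
        output.append(word)               # one - word - after - the - other
    return " ".join(output)

print(generate())                         # e.g. "the parrot talks"
```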

Because to call something AGI, we can rightly demand more than just the capability to correctly calculate the relationships between words and then generate half-reasonable-sounding output in a stepwise manner of prediction that literally puts one - word - after - the - other - like - a - stuttering - robot….. 🤖 What we should be asking for, in order to speak of AGI, is a form of consciousness that actually has a sense of reality and a real understanding of the world, above and beyond modelling linguistic relationships. So what we would need is effectively a Large World Model, not just a Large Language Model. But unfortunately this is most likely unachievable with the current “in silico” approaches relying only on data, statistics and computing power. So where can we find inspiration for more promising approaches?

Biologically restricted systems (restricted, for example, in terms of energy) have an important advantage over artificial neural networks and other simulations of neuronal activity: they naturally have to limit the intake and processing of new signals based on their survival-related relevance and their inherent information density and importance. What does that mean? Because real brains are already evolutionarily ‘maxed out’ with regard to their energy consumption, they had to find a way to limit the intensity of their information processing. The ingenious way nature has solved this problem is an example of how true creativity will always be able not only to overcome a hard obstacle, but to integrate it into its design and ultimately even make it part of the solution. Several key architectural features of our brain (e.g. novelty detection in the medial temporal lobe, discrepancy-dependent activation of ‘generative’ hippocampal structures responsible for accurate and efficient future predictions, evolutionarily pre-trained pattern recognition systems in our cortex, and the differentiation between automatic and deliberative ways of thinking in our two separate cognitive systems, System 1 and System 2) independently contribute to our brain being not just the most complex, most amazing and most powerful cognitive system on this planet, but also an incredibly energy-efficient one compared to the power-hungry AI systems we are currently using.

But my key point here is not the energy efficiency itself. It is the wonderful side-effects of this system’s need to conserve energy. Because this system is naturally not interested in anything that is ‘boring’, ‘usual’, ‘ordinary’ or ‘predictable’ and can hence be safely pushed into the background of our mind. Instead it helps us guide our limited attention to the events that are experientially different, contextually relevant and that ultimately matter for our survival: these are the events that help us identify and define the rules that govern our universe and hence our existence. How does gravity work? How does social interaction work? What happens if I behave differently than everyone else in a given context? So it turns out that to understand the world in an AGI kind of sense, you do not need to ingest all the data in the world, including all of mankind’s literature, scientific publications, and pictures of cats, dogs, and cookies. What you need is the differential, experientially relevant knowledge about which actions will make a significant difference when it comes to your life and all the parameters that are existential to it.
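As a loose illustration of this principle (not a model of any actual brain circuit), the sketch below spends ‘processing budget’ only on signals whose deviation from what is already expected exceeds a novelty threshold; the threshold and the running expectation are arbitrary choices made for the example:

```python
# Loose illustration of novelty-gated processing: only signals that deviate
# sufficiently from the system's running expectation get processed at all.

def novelty_gate(signals, threshold=2.0):
    expectation = signals[0]                 # naive initial expectation
    processed = []
    for s in signals[1:]:
        surprise = abs(s - expectation)
        if surprise > threshold:             # only 'interesting' events cost energy
            processed.append(s)
        expectation = 0.9 * expectation + 0.1 * s   # slowly update the expectation
    return processed

readings = [10.0, 10.1, 9.9, 10.2, 17.5, 10.0, 10.1]   # one genuinely surprising event
print(novelty_gate(readings))                          # -> [17.5]
```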

Interestingly, to be able to identify this kind of knowledge, an AI would actually have to ‘give a shit’, or to put it in more adequate words: an AI would have to have ‘skin in the game’, meaning it would have to share some of the burden of what we call the ‘human condition’ - the all too human experiences like losing a good friend, having our heart broken, or being confronted with a serious illness or a life-threatening situation. Unfortunately, current AI systems are genuinely not able to ‘feel’ or even empathise with any of these existential situations, though they might find the right words to make us feel as if they did. So what differentiates our intelligence from any data-based artificial derivative is ultimately this: AI isn’t aware, and hence also doesn’t care, whether it is right or wrong, because every one of its interactions with the world is just a random, probabilistic variant of the infinite possible permutations of the patterns it has been trained on. Our life, however, matters - if nothing else, then to us - which makes every one of our interactions and decisions in life relevant. Because our self-consciousness forces us to think of ourselves on this one, irreversible timeline, with a past, present and future version of ourselves, and with a physical body that has to suffer the consequences of all our decisions, right and wrong.

So what conclusions and strategic recommendations can we derive from these insights? Once we adopt this more informed, critical view of the capabilities and shortcomings of what is called AI today, we should be able to:

  1. Better understand the Limitations of AI: Given that all current AI systems still lack any understanding, consciousness and self-reflection, we should always be aware that we are basically only dealing with 'stochastic parrots' mimicking human language without genuine comprehension. This should guide our expectations and applications of AI, especially in understanding that AI can and should never replace real human judgement or creativity.
  2. Focus on Complementarity: Businesses should use AI as a tool to complement and augment human capabilities, not replace them. AI can handle large-scale data analysis and routine tasks, while humans should focus on areas requiring creativity, empathy and complex decision-making. But instead of using AI just as a means of cognitive outsourcing (the centaur model), we might be better off conceptualising it as a constant ‘cognitive extension’ (the cyborg model) that helps us envision how our latest creative ideas and plans will actually play out once we have thought them through with some assistance from our cognitive co-pilots. AI allows us to think further, faster - but the thinking we still need to do ourselves!
  3. Be Data-Critical instead of (naively) Data-Driven: Given the limitations outlined above of even the largest imaginable data sets when it comes to actually representing, let alone fully comprehending, complex realities, any uncritical over-reliance on data should be avoided. AI systems are limited by the data they are trained on, and they lack the ability to understand context or nuance beyond this data. Businesses should ensure that AI applications are complemented with human oversight to interpret results in the right context.
  4. Prepare for a Future with more Advanced AI: Although current AI is not close to achieving AGI, businesses should start preparing for a future where AI might have more advanced capabilities. This includes developing strategies for integration and considering upgrading or changing the current business model so that it can match and build on more powerful AI systems. Given the obvious limitations of the current approach - an ‘in silico’ imitation of neural network learning built on large data sets (LLMs) - it won’t take the big AI providers long to move beyond such data- and computing-power-focused methods. This includes learning from biological systems and considering AI designs that emphasise relevance, novelty and energy efficiency.
  5. Start Upskilling our Workforce: As AI becomes more integrated into business processes, it's crucial to educate and train your workforce to collaborate seamlessly and effectively with AI systems. This includes understanding AI capabilities and limitations, as well as developing the new meta-cognitive skills required to stay on top of, and in control of, increasingly capable AI systems. Such upskilling also involves helping the workforce refocus on the areas in which they will add value in the future together with such ‘intelligent’ machines - specifically skills like analytical thinking and problem solving, critical thinking and causal reflection, curiosity and creative thinking, as well as empathy and resilience. And unfortunately HR can’t help you do this, because they will themselves have to be upskilled first…

To sum up: 2023 has most likely been just the beginning of an ongoing roller-coaster ride in the area of AI. Given the immense investment bets placed in this area, it is not unreasonable to expect a lot more major breakthroughs next year. But how should anyone adapt to, let alone plan for, such fast-paced change? By now it is at least clear that more of these game-changing technological advancements will always have an ambiguous effect, leaving us in equal measure ‘future-shocked’ but also empowered and challenged to actually reach the full potential of our natural cognitive abilities. Given the rapidly evolving nature of AI, businesses should in any case try to stay at the forefront of AI advancements, because anyone who delays this business-critical task of learning and adapting does so at the risk of becoming an ‘AI Dinosaur’ very soon. Our strategy for 2024 hence remains one of fast adaptation and learning, but also involves focusing on the newly identified human core capabilities which have now become essential to drive success in collaborating with AI systems. Our goal here is to bundle all of these core capabilities into a coherent problem-solving and innovation process. It is our belief that our future of working and living together with machine intelligence has already begun. We are hence busy prototyping and testing the first fully integrated ‘Humane-Machine Co-Innovation Model’ that incorporates all our essential human experience and expertise in order to ultimately combine the best of both worlds: human creativity and machine efficiency. Welcome to the future!

Kai Nörtemann

Marketing & Communications @ infinit.cx ++ The Customer Experience Powerhouse ++


The key point for me is that we are "without any actual understanding of our own intelligence". If we understand ourselves as data-consuming somethings, LLMs look fascinating. IMHO we are more.

Bernd Kunkel

Passionate Strategist


I disagree. Especially because it neglects what human learning actually is and how it works. The point that there is no "I" also never was the current intention - it's a step towards it. An LLM was never meant to be a sentient entity. Creativity is the combination of things that might ordinarily not fit; it's the trial and error of doing something uncommon. However, knowing a lot of things is the first step to becoming creative. Another definition might be that you have to know the rules in order to break them. I also want to ask the following: does everyone who talks about something truly understand it? I think for a big part of society, politicians and CEOs the answer has to be no. They simply don't - they might not even know what they are talking about if they use script writers. So let me ask this follow-up: how does an LLM differ from a CEO or from a politician? They also just say what they expect the majority of voters or their investors want to hear! Of course I am exaggerating, but it's to make a point.

Jad Nohra

Creatively grind at the right technical depth, as a team. ==[Posts are my own opinions, and do not represent my employer]== .


Disagree. If it quacks like a duck, it is a duck. 'Understanding' is an ill-defined concept to begin with; we don't know ourselves exactly how we understand and think, and there is no reason why the way we do it should be a benchmark. What matters is the result. Understanding is not black and white, it is a continuum. So the 'I' is a continuum, and LLMs have made huge progress along that continuum compared to what was before them.

Alex Monaghan

CX, UC and AI, fluent French and German. Improving user experiences through communications technology and automation


It’s worse than that TBF. Frankenstein’s monster had a reasonable level of intelligence, as well as self-awareness and emotions. LLMs have none of those things, and never will have - they are just the latest evolution of Babbage’s calculating engine. Crank the handle and see what comes out.
