Picture a magician on stage. Before him, two hats. One is a sleek, elegant, polished chapeau sans lapin, a hat without a rabbit. The other is quirky and chaotic, but inside sits a living cat: a bit of real magic. Most of the crowd reaches for the shiny one. It looks better. Feels safer. But it’s empty.
That’s where we are with Artificial Intelligence.
Whether you’re part of the AI optimist camp or skeptical of the hype, let’s take a breath. Step off the stage. And look under the hats.
Artificial Intelligence is everywhere: on your screen, in your workplace, woven into the boldest business pitches of the decade. It’s hailed as revolutionary, inevitable, and intelligent. But is it truly the Cat in the Hat, a magical presence that brings surprise and utility? Or is it a chapeau sans lapin, an empty hat, promising a rabbit that never appears?
Because behind the buzzwords lies a quieter truth: current AI is powerful, but fundamentally limited. If misunderstood or misused, it can carry costs: financial, ethical, and even existential.
Let’s start with the heart of the matter: AI is not intelligent. Not in the way we usually mean that word.
The Limits of Logic: Gödel and the Myth of AI Understanding
To understand why, we turn to mathematics. In 1931, the Austrian logician Kurt Gödel published his incompleteness theorems, which proved something stunning: any consistent formal system rich enough to describe arithmetic contains true statements that cannot be proven within that system. In other words, no matter how many rules you have, some truths will always lie outside them.
This matters for AI. Because every AI model, especially large language models (LLMs), is built on a system of rules and probabilities. It doesn’t know what it’s saying. It doesn’t understand why rules exist. It can’t step outside its training data to question its foundations.
Nobel laureate Roger Penrose calls today’s AI “artificial cleverness,” not artificial intelligence. Yann LeCun, Meta’s chief AI scientist, put it more bluntly: “Current LLMs are stupid.” They are brilliant at producing output based on patterns, but they cannot reason. They cannot reflect. And they cannot decide what is true.
Moravec’s Paradox (formulated by Hans Moravec in the 1980s) states:
“High-level reasoning requires little computation, but low-level sensorimotor skills require enormous computational resources.”
In simple terms: the things we humans find easy, like recognizing faces, catching a ball, or understanding sarcasm, are the hardest for machines, while the things we find hard, like playing chess or solving equations, are relatively easy for AI.
This complements Gödel’s point. Where Gödel shows the logical limits of AI (truths machines cannot reach from within their rules), Moravec shows the practical limits: AI is good at formal “thinking” but bad at sensing and feeling its way through the world.
This is no accident. Machines are good at what’s easy to formalize into code. But the richness of real intelligence, the kind involved in walking through a crowded street, making eye contact, or building trust, emerges from millions of years of evolution. It’s embodied, intuitive, and non-computational. Just like Gödel showed that no logical system can prove all truths, Moravec reminds us that no machine can replicate the messy, physical, emotional intelligence of a human being.
Computational Power ≠ Intelligence
What we call “AI” today is really just an enormous, expensive calculator, one trained on as much data as the internet can provide. To illustrate this, consider the training of an LLM like GPT. These models process 20 to 30 trillion tokens (roughly 100 trillion bytes). That’s the entire public text of the internet, digested into statistical patterns.
Now compare that to a human child. By age four, the human brain has absorbed roughly the same amount of data, about 100 trillion bytes, through visual perception alone. That’s around 16,000 hours of waking life, seeing the world through the optic nerves.
Yet what that child develops is not just a command of language, but an understanding of cause, effect, emotion, risk, and trust. The child creates meaning. The machine simply replicates structure.
And the cost? AI training requires thousands of GPUs, massive energy consumption, and millions in capital. The child does it on a banana and a nap.
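The comparison above is easy to check on the back of an envelope. The sketch below uses illustrative assumptions, not measurements: roughly 4 bytes per token on the LLM side, and a very rough 2 MB/s of visual input on the child's side.

```python
# Back-of-envelope comparison. All constants are order-of-magnitude
# assumptions for illustration, not measured values.

# LLM side: ~30 trillion training tokens at ~4 bytes per token.
llm_bytes = 30e12 * 4               # ≈ 1.2e14 bytes of training text

# Child side: ~16,000 waking hours by age four, with the optic nerves
# carrying very roughly ~2 MB/s of visual information.
child_seconds = 16_000 * 3_600      # ≈ 5.8e7 seconds of waking life
child_bytes = child_seconds * 2e6   # ≈ 1.15e14 bytes of visual input

print(f"LLM:   {llm_bytes:.2e} bytes")
print(f"Child: {child_bytes:.2e} bytes")
```

Under these assumptions the two figures land within a few percent of each other, both around 100 trillion bytes.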
What Machines Can’t Learn: Consciousness and Reality
At the core of real intelligence is consciousness: the ability to perceive, adapt, and respond to the unknown. Even a plant leans toward sunlight. Even an amoeba retreats from danger. This is natural intelligence, shaped by evolution and governed by survival.
Machines, by contrast, don’t evolve. They don’t sense. They don’t care. They cannot develop awareness because they cannot engage with what some physicists call “non-computable reality”: phenomena that cannot be fully captured by algorithms or mathematical equations.
Quantum physics is filled with such mysteries. Concepts like superposition, entanglement, and retrocausality (where a particle seems to influence the past) suggest that reality is not entirely governed by computation. We don’t fully understand it ourselves, so how can we teach it to a machine?
Until a computer can grasp and reason about the physics we don’t yet understand, it will remain unconscious. And without consciousness, it will never have true intelligence.
The Data Gold Rush—and the Seduction Campaign
So why all the hype?
Because data is the new currency. Today’s AI models need ever more data to improve. That’s why companies are rushing to acquire users, track behavior, and optimize prompts: your input is training their future model. They’re running massive data-acquisition campaigns, and they’re selling you a dream: that AI will do your thinking for you. The media plays along, and the hype becomes self-reinforcing.
This nuance is lost in boardrooms racing to adopt AI. Companies like Meta, Microsoft, and Amazon are investing tens of billions, not because AI is intelligent, but because the illusion of intelligence sells. The risk? Entrepreneurs are adopting tools without understanding their limits. They’re seduced by scalability, efficiency, and automation, but may end up with spiraling costs, broken trust, and generic products that fail to connect.
If you build your business purely on AI tools, you might win short-term efficiency but lose long-term originality. The cost of training and maintaining advanced AI is massive. And without a clear purpose, those costs can spiral without return.
But we must remember: most LLMs are still “stupid,” to borrow LeCun’s phrase. They’re impressive only in proportion to the data they’ve consumed. And that’s why you're now the product.
Case Study: TikTok—A Vision, Not Just a Tool
Of course, AI can enable powerful innovation, if used strategically.
Take TikTok. It didn’t simply adopt AI; it redefined its business model around how AI was used. While Facebook and Instagram focused on social graphs (who you know), TikTok focused on engagement (what you interact with). Its algorithm surfaced content based on user behavior, not identity. This shift turned data into a competitive edge and forced every social platform to rethink its strategy.
What made TikTok succeed wasn’t just AI, it was vision. A clear goal. Data served the vision, not the other way around, and AI was just the tool.
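The strategic shift can be caricatured in a few lines of code. This is a deliberately toy sketch with made-up data and weights, not TikTok's actual system: a follow-graph feed can only surface people you already know, while an engagement-ranked feed surfaces whatever your behavior responds to.

```python
# Toy contrast: social-graph filtering vs. engagement-based ranking.
# All names, signals, and weights are invented for illustration.

videos = [
    {"id": "a", "creator": "friend1",  "watch_frac": 0.2, "liked": 0},
    {"id": "b", "creator": "stranger", "watch_frac": 0.9, "liked": 1},
    {"id": "c", "creator": "friend2",  "watch_frac": 0.4, "liked": 0},
]
following = {"friend1", "friend2"}

# Social-graph approach: only show content from accounts you follow.
social_feed = [v["id"] for v in videos if v["creator"] in following]

# Engagement approach: rank everything by observed behavior signals.
def engagement_score(v):
    # Arbitrary illustrative weighting of watch time and likes.
    return 0.7 * v["watch_frac"] + 0.3 * v["liked"]

engagement_feed = [v["id"]
                   for v in sorted(videos, key=engagement_score, reverse=True)]

print(social_feed)      # the stranger's compelling video never appears
print(engagement_feed)  # the stranger's compelling video ranks first
```

The stranger's highly watched video is invisible to the social-graph feed but tops the engagement feed, which is the behavioral bet the essay describes.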
When AI Goes Wrong: The Hidden Dangers of Scaling AI
AI works well when it’s closely supervised. But scale it too quickly, especially in sensitive sectors, and the risks grow fast.
Let me share a few real examples:
• A bank deployed an AI agent to handle loan applications. A user submitted completely fake information. The system approved it. Why? Because the AI wasn’t verifying the logic; it was just following rules. It couldn’t detect intent or deceit. It didn’t know what “real” meant.
• A car dealership automated its sales process. A savvy customer figured out how to trick the AI into applying a 90% discount to a vehicle purchase. The system, again, followed instructions. It didn’t stop to ask, “Does this make sense?”
• A large retailer used AI agents to assist customers. A user crafted a clever prompt that tricked the AI into exposing a trove of sensitive internal data, because the agent had access, and didn’t understand the danger of what it was sharing.
These aren’t bugs, they’re features. AI doesn’t understand. It doesn’t ask questions. It just does what it’s told. That’s fine when answering simple customer queries. It’s a disaster when handling decisions that involve risk, privacy, or ethics.
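The loan example above can be reduced to a caricature. This hypothetical sketch is not any bank's actual system; it shows the core failure mode: the checks validate the *form* of the data, and nothing validates its *truth*.

```python
# Hypothetical sketch of a naive rule-following approval agent.
# Every rule checks whether a field LOOKS valid, never whether it IS true.

import re

def approve_loan(application: dict) -> bool:
    return (
        # Rule 1: stated income must exceed a threshold.
        application.get("income", 0) > 50_000
        # Rule 2: employer field must be non-empty.
        and bool(application.get("employer"))
        # Rule 3: ID must match the expected 9-digit format.
        and bool(re.fullmatch(r"\d{9}", application.get("id", "")))
    )

# Completely fabricated data that satisfies every rule:
fake = {"income": 999_999, "employer": "Totally Real Corp", "id": "123456789"}
print(approve_loan(fake))  # the rules pass; nothing asks whether any of it is real
```

Wrapping an LLM around such rules doesn't change the picture: the system still has no notion of deceit, only of pattern conformance.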
So if you’re worried AI might take your job, pause for a second. The real move isn’t to replace your team, it’s to empower them. AI can be a powerful assistant, but it should never be the one making judgment calls.
The future of business isn't "AI-only." It’s “AI + human insight.”
So What’s in the Hat?
Before you integrate AI into your business, ask yourself: is there a rabbit in the hat? Or are you gambling on hype?
AI is not conscious. It is not intelligent. It is a sophisticated calculator trained on our past. That makes it useful but also dangerous if misunderstood.
Real innovation still comes from human insight: from trust-building, moral clarity, and creative thinking. These things can’t be computed; they must be cultivated. And they are what will set the most resilient companies apart.
Where AI Truly Shines
Let’s be clear: AI isn’t just a risk. When used in the right context, it can be a game-changer.
AI is at its best when working with massive amounts of structured data, especially in fields where outcomes are clearly measurable and decisions follow predictable patterns.
Take agriculture. Farmers now use AI-powered tools to analyze soil data, monitor crop health through satellite images, and predict the best times to plant and harvest. It’s making farming more precise, more productive, and more sustainable.
In the medical field, AI can process thousands of MRI scans or pathology slides in minutes, spotting anomalies that a human might miss. It can help doctors detect early signs of diseases like cancer or Alzheimer’s, giving patients a better shot at recovery.
AI is also being used in disaster prediction. By analyzing seismic data, it can help scientists better anticipate earthquakes. It can model wildfire spread or forecast flood risk in vulnerable regions, providing communities with critical early warnings.
In these scenarios, AI doesn’t need creativity. It needs data, and lots of it. When the patterns are clear, AI can help us make smarter, faster decisions. It can save time, resources, and even lives.
Final Thought: What's in the Hat 🎩?
AI is neither magic nor monster. It can amplify your business, but it can just as easily scale your costs, errors, or ethical blind spots. It has no sense of right or wrong, of context or consequence. It has no vision, only instructions. It can be the Cat in the Hat, unlocking wonder and efficiency. Or it can be a chapeau sans lapin: an empty promise that drains your resources and leads you nowhere, a Ferrari ride to your bankruptcy.
If you’re building for the future, remember: the businesses that will thrive are those with trust, moral clarity, and creative foresight. These are human traits. AI can support them, but it cannot replace them.
The difference lies not in the tool, but in you. Your clarity, your questions, and your vision.
So, before you automate, ask: what’s your vision? What’s in the hat? Or are you gambling on hype? Because the smartest move in a world full of computing power is still to think.
Choose carefully. Because the machine won’t.