Despite the daily media hype about artificial intelligence (AI), the reality of AI remains rather sobering. This is the point where the hype – AI will do it all, threaten jobs, take over the world, and so on – meets the dreary reality. That reality is a far cry from what is presented to us. Unlike the often rather fictional claims made about it, real AI is far more limited.
Already in 1967, one of the Godfathers of AI – Marvin Minsky – claimed that, within a generation, the problem of artificial intelligence would be substantially solved. Over half a century later, we are still waiting for that to happen.
In 2002, another so-called Godfather of AI – Ray Kurzweil – claimed that AI would surpass natural human intelligence by 2029. We will, in all likelihood, still be waiting for this one too. Undeterred, Kurzweil keeps cooking up rather unrealistic predictions.
In 2012, one illustrious prediction even foresaw autonomous cars appearing in the near future. Over a decade later, we are still waiting to see them on our streets.
In 2016, it was claimed that IBM’s Watson – the machine that had won Jeopardy back in 2011 – would revolutionize healthcare. Every time I am at my local GP or a hospital, I look for the promised revolution, but no revolution is in sight – not in healthcare and not anywhere else.
Just a year before that, Facebook’s project “M” – an AI chatbot – was, so we were told, able to cater to every need, from making dinner reservations to planning your next vacation. As I write this, I can hear my wife downstairs booking our next flight with no help from Facebook’s “M”, even though it was set to cater to every need. Worse, I had to make my cup of Muggefug coffee myself, again with no help from Facebook’s “M”, which simply wasn’t able to cater to every need after all.
In short, and despite decades of outlandish announcements about AI, not much has materialized – with the notable exception of ChatGPT, which still can’t book my wife’s flights or make my coffee.
Instead of delivering on such fictions, Facebook’s “M” was quietly canceled – barely three years after Facebook’s hyper-majestic proclamation. Even more electrifying was a claim made by Washington’s golden boy Eric Schmidt. He broadcast that AI would solve climate change, poverty, war, and cancer – a sad hallucination.
By 2018, Google’s Sundar Pichai declared that AI is “one of the most important things humanity is working on … more profound than … electricity or fire”. Five years later, most people still cannot do without fire or electricity, while the most important thing humanity is supposedly working on is still nowhere in sight.
Topping much of this is Oxford’s Nick Bostrom, threatening us with a superintelligence that is set to take over the world … in the foreseeable future. Check underneath your bed before jumping in tonight – the Terminator is waiting for you!
Teaming up with Schmidt is the war criminal and dedicated AI non-expert Henry Kissinger. Undeterred by age, crimes, and his underwhelming knowledge of AI, Kissinger declared that, with AI, human history might go the way of the Incas, faced with a Spanish culture that was incomprehensible and even awe-inspiring to them. While presenting nothing new on AI, the Schmidt–Kissinger book is full of Cold War rhetoric and celebrations of the greatness of American power.
Meanwhile, never one to be left out, Elon Musk warned that AI is like “summoning the demon” … “worse than nukes”. All of these overblown AI announcements seem to follow a pattern.
Every time a self-appointed apostle of AI, AGI, the singularity, the AI explosion, ASI, etc. says something, countless media and YouTube echoes are set to follow. In other cases, even minor progress in AI is presented as the next AI revolution. Much of this is done for a reason.
Since large parts of media capitalism are driven by online advertising revenue, in which every click on a website represents “$”, corporate mass media does not just hype up every little bit of news about AI; it also tends to sensationalize it.
Worse, most simply jump on a media bandwagon that lays golden eggs. As a consequence, the unsuspecting public has been enticed to believe in an inevitable move from AI to AGI to ASI – and to believe that this move is much closer than it really is.
Back in reality, when people drive cars, around 95% of the time is defined by routine actions. Luckily, this is replicated relatively easily by AI machines. However, for the other 5% of the time, drivers have to do things no current AI machine can do reliably. It is because of this 5% that our streets aren’t populated with self-driving cars. In short, we still need human beings to drive.
Unlike AI, human beings are also furnished with the ability to reason and act on things that are different, new, unexpected, surprising, and unpredictable. AI finds it hard, if not impossible, to deal with those. This is pretty much the end of the line for AI and, by extension, for the self-driving car. By mid-2022, such failures had resulted in roughly 400 reported self-driving crashes.
One of AI’s key problems is that even its immense database of prior experience has failed – so far – to deliver the much-needed flexible understanding of the world, including the unpredictability of a self-driving car’s immediate surroundings.
The key hindrance for AI is, to put it simply but correctly, our world. Most of today’s AI is engineered for rather narrow tasks – winning a board game or recommending the best underwear to you.
In general, AI works pretty well for the particular task it has been programmed for. Take, for example, AI’s wins in chess or Go. The rules of Go haven’t changed in 2,500 years. It is a confined system – a perfect environment for AI. Reduced to a digital idiot, AI can handle this to perfection.
Yet despite this, an AI system narrowly engineered for a rather specific task still manages to offer the wrong advertisement on Facebook – or condoms for Lisa. Luckily, nobody is going to die – apart from the trouble Lisa’s husband gets into for ordering condoms. Yet when AI drives your car into a location that appears unusual because it isn’t in its database, or when an AI algorithm misdiagnoses a cancer patient, the consequences can be very serious.
Here is the true problem for AI. When one of today’s rather narrow AI systems plays, for example, a board game, its algorithms are set to deal with a system that is completely closed. AI experts call this a closed system.
This is an environment in which AI can achieve reasonably good predictions and recommendations. Once a “closed” environment is locked in and the rules are fixed, AI machines have an almost natural advantage.
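To make the closed-system point concrete, here is a minimal sketch – in Python, purely illustrative and not drawn from the article – of exhaustive game-tree search on tic-tac-toe, a game whose rules, like Go’s, never change. Because every state and every legal move can be enumerated, the machine plays perfectly without understanding anything at all:

```python
# Minimal sketch of why fixed rules favor machines: in a closed system such as
# tic-tac-toe, every legal move can be enumerated and scored exhaustively.
# Illustrative only; the choice of game is a stand-in for any closed system.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
             (0, 4, 8), (2, 4, 6)]                 # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Exhaustively search the full game tree; possible only because the rules
    are fixed and the set of states is finite (a 'closed system')."""
    win = winner(board)
    if win == 'X':
        return 1, None
    if win == 'O':
        return -1, None
    if ' ' not in board:
        return 0, None                              # draw
    best_score, best_move = (-2, None) if player == 'X' else (2, None)
    for i, cell in enumerate(board):
        if cell == ' ':
            child = board[:i] + player + board[i + 1:]
            score, _ = minimax(child, 'O' if player == 'X' else 'X')
            if (player == 'X' and score > best_score) or \
               (player == 'O' and score < best_score):
                best_score, best_move = score, i
    return best_score, best_move

if __name__ == "__main__":
    score, move = minimax(' ' * 9, 'X')
    print(f"Perfect play from an empty board ends in a draw (score {score}); "
          f"first move chosen: square {move}")
```

Go is astronomically larger, so AlphaGo had to replace brute enumeration with learned approximations, but the environment it operates in remains just as closed and rule-bound.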
One of the major problems for AI is that real life isn’t a board game. Real life is open-ended. Real life – even ordinary traffic on an urban street – is an “open system”. Worse for AI, no dataset perfectly reflects our ever-changing world, defined as it is by a seemingly endless array of open systems.
The whole thing gets even more problematic when there aren’t even fixed rules for AI to follow. The final predicament for AI comes when the choices, options, possibilities, and risks in an environment are unlimited – a seemingly unsolvable nightmare for AI algorithms.
Given all this, it appears as if there is a deep canyon between facts and fictions in AI – a kind of chasm between the often outlandish desires of AI advocates and the hard reality of today’s AI. In other words, AI can simulate games to perfection, but it cannot do the same for reality.
AI programmers who engineer algorithms can create perfect simulations based on available data at very minimal cost. However, in our real world, perfect simulations do not exist. Even AI cannot simulate attending your local doctor’s medical center eight million times (step 1), slowly but surely adjusting its parameters with each visit (step 2), in order to improve its decision-making (step 3).
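What “simulating something millions of times and adjusting parameters” means in practice can be sketched in a few lines. The sketch below only works because the “clinic” is an invented, closed stand-in for the world – the actions, numbers, and reward signal are all hypothetical, not anything taken from the article:

```python
# Minimal sketch of the simulate-adjust-improve loop described above (steps 1-3).
# Everything here - the 'clinic' simulator, its reward signal, and the update
# rule - is a made-up illustration, not a real medical model.
import random

random.seed(0)

N_ACTIONS = 3                     # e.g. three hypothetical triage decisions
true_success = [0.2, 0.5, 0.8]    # hidden 'world' the simulator draws from

def simulated_visit(action):
    """Step 1: one simulated 'visit' returns a success (1) or failure (0)."""
    return 1 if random.random() < true_success[action] else 0

# Step 2: adjust parameters (here, running estimates of each action's value).
values = [0.0] * N_ACTIONS
counts = [0] * N_ACTIONS
for _ in range(100_000):                          # cheap, but only in simulation
    action = random.randrange(N_ACTIONS)          # explore uniformly
    reward = simulated_visit(action)
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

# Step 3: the 'improved' decision rule simply picks the best estimated action.
best = max(range(N_ACTIONS), key=lambda a: values[a])
print("estimated values:", [round(v, 3) for v in values], "-> chosen action:", best)
```

A real clinic offers no reset button and no eight million rehearsals, which is precisely the point being made above.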
Beyond all that, another problem for AI is that even limited achievements inside closed-off work tasks do not assure the same outcomes in our open-ended real world.
Despite all the fantastic claims made by the advocates of AI, today’s AI still has a lot to learn from people – and, sadly for AI, even from young kids, who in many ways easily outstrip AI machines in their capacity to absorb and understand new concepts. There are at least five avenues in which the human brain and mind easily outperform most of today’s AI:
- Human beings can understand, create, and shape language;
- Human beings can understand the physical, social, and emotional world;
- Human beings can adapt to new circumstances and ever-changing environments;
- Human beings can quickly learn new things they have never seen before;
- Human beings can reason in the face of incomplete and contradictory information.
Beyond all that, human beings can easily correct an iPhone’s autocorrect function when it turns “Happy Birthday, dear Susi” into “Happy Birthday, dead Susi”. Beyond your iPhone announcing Susi’s death, there is, today, no valid reason to think that AI will – in true Hollywood science-fiction fashion – rise up and destroy us.
Ever since the 1956 Dartmouth AI Workshop and the inception of AI, AI has shown absolutely zero interest in destroying humans, invading their territory, taking their possessions and their rights, or annihilating their environment. In fact, there is simply no AI warfare similar to human warfare. AI has no idea of, nor any sort of understanding of, how to fight over the things we like to fight over.
Instead of Terminator 3’s Rise of the Machines, really existing AI is run by computer nerds engineering simple idiots driven by Bayesian statistical probability algorithms that focus on pre-defined, i.e. coded, work tasks.
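For what it is worth, such a “simple idiot” can be sketched concretely. The toy classifier below – an illustrative naive Bayes filter with invented training data, not something taken from the article – performs its one coded task and has no notion that any world exists beyond it:

```python
# Minimal sketch of a 'simple idiot': a naive Bayes classifier that does one
# pre-defined, coded task (labelling toy messages) and nothing else.
# The training data and labels are invented for illustration.
from collections import Counter, defaultdict
import math

train = [("win money now", "spam"), ("cheap pills win", "spam"),
         ("meeting at noon", "ham"), ("lunch at noon today", "ham")]

word_counts = defaultdict(Counter)   # label -> word frequencies
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for c in word_counts.values() for w in c}

def classify(text):
    """Bayes' rule with add-one smoothing: argmax P(label) * prod P(word|label)."""
    scores = {}
    for label in label_counts:
        log_prob = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            log_prob += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = log_prob
    return max(scores, key=scores.get)

print(classify("win cheap money"))   # -> 'spam'
print(classify("noon meeting"))      # -> 'ham'
```

Ask it about anything other than sorting these few strings and it has, quite literally, nothing to say.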
AI is absolutely unaware of the big picture, and even if AI could “see” the bigger picture – a big if, given all the problems simple face recognition still has – it would, most likely, not understand it.
Like virtually all of AI, AlphaGo simply does not and (worse) “cannot” care about questions such as: is there life outside of Go? In other words, even a sophisticated system like AlphaGo remains perfectly content doing what it is programmed to do – with absolutely no aspiration to do anything else.
Similarly, ChatGPT does not – and in fact “cannot” – care about anything that is not typed into its prompt. The outside world simply does not exist for it. In the end, one might paraphrase psychologist Steven Pinker:
The idea that ASI-driven robots will enslave humans makes about as much sense as the worry that, since jet planes have surpassed the flying ability of eagles, one day they will swoop out of the sky and seize our cattle.
Thomas Klikauer is the author of German Conspiracy Fantasies – out now on Amazon!