Attention-grabbing, sensationalist media capitalism has ensured that many people fear artificial intelligence (AI). To stoke that fear, the monster from Mary Shelley’s Frankenstein is often called upon.
Yet Frankenstein’s creature is not only the world’s favorite monster; it is also a metaphor for a being that was born pure and then exposed to torment. Eventually, the creature became murderous and vengeful.
Most recently, people have imagined that this might be the course AI takes with us. Worse, bigcloud.global even argues that there is an indisputable link between Victor Frankenstein’s creation and artificial intelligence. The end is nigh – Duck & Cover!
Yet, despite such – often media-driven – nightmarish showboating, the seventy-year history of AI is, so far, more a story of failed ideas.
For example, as of mid-2023, AI researchers still aspire to build AI-driven robots with the full range of intelligent capabilities that human beings already have. Building AI that is self-aware, conscious, and autonomous in the same way that you and I are remains a distant dream.
As of today – and despite fancy videos and millions of R&D dollars spent – not even Amazon has AI-driven robots that fully automate its warehouses. As of 2022, Amazon still employed 1.6 million people – most of them in its global warehouse gulag, slogging away under slave-like conditions.
At Amazon, there is still no robot in sight that picks up items and places them on a trolley. Meanwhile, car-maker Tesla has not made much progress toward a fully automated car factory either. It employs 130,000 workers – many of them building cars.
All in all, much of what is broadcast by the sensationalist press about AI is ill-informed, to put it mildly, and in other cases simply irrelevant. All too often, media hype stands in for what AI, machine learning, algorithms, and robots can actually do.
The hard reality of AI – for the foreseeable future – actually looks very different from the grandiose dreams of some AI commentators and from what the press likes to pretend. In fact, almost all of AI consists of relatively simple algorithms that examine huge amounts of data and produce a most-likely statistical prediction, often based on Bayesian reasoning.
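The kind of Bayesian prediction described here can be sketched in a few lines. The numbers below are invented purely for illustration: suppose 1% of emails are spam, and a filter flags 95% of spam but also 2% of legitimate mail; Bayes’ rule then gives the probability that a flagged email is actually spam.

```python
# Hypothetical spam-filter numbers, for illustration only.
p_spam = 0.01             # prior: 1% of all email is spam
p_flag_given_spam = 0.95  # filter flags 95% of spam
p_flag_given_ham = 0.02   # filter wrongly flags 2% of legitimate mail

# Total probability of an email being flagged at all.
p_flag = p_flag_given_spam * p_spam + p_flag_given_ham * (1 - p_spam)

# Bayes' rule: P(spam | flagged).
p_spam_given_flag = p_flag_given_spam * p_spam / p_flag
print(round(p_spam_given_flag, 3))  # ≈ 0.324
```

Even with these seemingly accurate filter numbers, the posterior is only about one in three – exactly the sort of “most likely statistical prediction” the text describes, with no understanding involved.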
Over the last sixty to seventy years of AI development – from Turing’s abstract computing machine (1936) and Konrad Zuse’s first programmable computer, the Z3 of 1941, to today – a gigantic amount of effort, as well as state and corporate research funding, has flowed into AI.
However, your private home robot butler isn’t coming any time soon to make your bed and serve breakfast. Similarly, the oft-predicted killer robot remains absent – most likely forever. Robots simply lack the very human intention to conquer and to dominate.
From Zuse’s Z3 – built in Nazi Germany – to today, virtually all a robot or a computer can do is follow a list of instructions: an algorithm. It basically works in the following way:
- add X1 to X2: if the result is bigger than X3, then do X4;
- otherwise, do X5 and repeatedly do X6 until X7.
This is an example of the most basic level of every computer program – including AI. It all boils down to lists of instructions with no consciousness, no understanding, and no awareness of what lies outside of its algorithms.
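The instruction list above can be written as runnable code. This is a minimal sketch: the values and the actions (the strings “did X4” and so on) are made up, standing in for the article’s X1–X7.

```python
def run(x1, x2, x3):
    """Follow the article's instruction list, literally."""
    result = x1 + x2              # add X1 to X2
    if result > x3:               # if the result is bigger than X3,
        return ["did X4"]         # then do X4
    steps = ["did X5"]            # otherwise, do X5
    x7 = 3                        # X7: a made-up stopping condition
    while len(steps) - 1 < x7:    # repeatedly do X6 until X7 is reached
        steps.append("did X6")
    return steps

print(run(2, 3, 4))   # ["did X4"]
print(run(1, 1, 5))   # ["did X5", "did X6", "did X6", "did X6"]
```

However the branches are arranged, the program has no awareness of anything outside these instructions – which is the article’s point.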
When AI experts use the word “intelligence” – as in artificial intelligence – they mean something programmed and coded. This version of so-called “intelligence” is an artificial, i.e. non-human, form of intelligence. Worse, AI pretends that this is intelligence, when perhaps a more appropriate term would be machine intelligence.
It is a sort of machine intelligence radically reduced to simple and very explicit coded instructions. AI, like all computer programs, needs code and algorithms. While intellectually underwhelming, computers execute such instructions with incredible precision and at very high speed.
Suppose a person is set to carry out one instruction every ten seconds. Assuming that this person never pauses for a break and works 24/7 to get the job done, it would take them about 3,700 years to do what a computer can do in just one second.
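The arithmetic behind this comparison is easy to check: a person completing one instruction every ten seconds, non-stop, for 3,700 years gets through roughly the number of instructions a modern processor handles in a single second.

```python
# How many instructions does the tireless person complete in 3,700 years?
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # ≈ 31.6 million seconds
SECONDS_PER_INSTRUCTION = 10

total = 3700 * SECONDS_PER_YEAR / SECONDS_PER_INSTRUCTION
print(f"{total:.2e}")  # ≈ 1.17e+10, i.e. roughly twelve billion instructions
```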
Unlike Zuse’s first-ever programmable computer of 1941, which crashed a lot, today’s computer processors are phenomenally reliable. They can operate for up to fifty thousand hours before failing, faithfully carrying out tens of billions of instructions every second of those hours.
Yet, despite the exponential growth of computer hardware and decades of AI research, neither has translated into the ability to drive a car. Even after years of grandiose announcements of the arrival of the driverless car – built on sophisticated computing power and AI – as well as 419 crashes and 18 confirmed fatalities, the driverless car is nowhere in sight.
Faced with AI’s continuous failures, engineers had a new idea: replicating nature. AI engineers sought to simulate the human brain and its nervous system as the basis for the development of AI. The key idea is that the human brain is capable of producing human-level intelligence; therefore, the thinking goes, converting the human brain into a computer program equals AI. This seemingly logical idée fixe, however, runs into two problems:
- Firstly, the human brain is a mind-bogglingly complex organ. A human brain contains roughly one hundred billion interconnected neurons. Worse for AI researchers, we currently do not even remotely understand the actual structure and physical operation of the brain well enough to duplicate it within the aforementioned simplicity of an AI algorithm.
- The second problem is this: in the 18th and 19th centuries, people tried to make humans fly by simulating the flight of birds. Most obviously, that did not work – today’s aircraft do not flap their wings. We fly by a very different method than birds do.
Imitating the human brain inside a computer does not seem to be a realistic prospect any time soon. In fact, it seems utterly unlikely ever to be possible, even though this hasn’t prevented people from still trying to replicate the human brain to create AI.
Beyond all that, it gets worse for AI. An AI machine that is to be intelligent in a natural environment needs to be able to comprehend a lot of information about that environment. Human beings, for example, perceive the natural world through five senses – sight, hearing, touch, smell, and taste.
Never mind the sixth sense – proprioception – and the even vaguer notion of human intuition. Whether five, six, or seven senses, AI has so far not managed to replicate any one of them at a sophisticated level.
It does not get better. Any AI-driven robot, even one equipped with the very best digital camera, will – in the end – only receive a long list of numbers to be processed by an algorithm. In short, robots will not see the world as we do, and they will not understand the world as we do. Put simply, theirs is a machine intelligence – not human intelligence.
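What a camera actually delivers to an algorithm can be shown directly. This sketch uses a hypothetical 2×3 grayscale “image”: to the machine it is nothing but a grid of pixel intensities, flattened into the long list of numbers mentioned above.

```python
# A hypothetical 2x3 grayscale camera frame: each number is a pixel
# brightness from 0 (black) to 255 (white).
frame = [
    [ 12, 200,  34],
    [255,   0, 127],
]

# Flatten the grid: this list of numbers is all the algorithm "sees".
flat = [pixel for row in frame for pixel in row]
print(flat)  # [12, 200, 34, 255, 0, 127]
```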
Yet, it does not stop there. Machine learning – the very foundation of today’s AI – remains very different from human learning. Machine learning means learning from, and making predictions about, data. It is not about understanding things the way human beings do.
In other words, an AI-furnished robot that acts in our physical world, tries to undertake its own actions, and might even work in a warehouse – even though Amazon has still not achieved this – will have problems in our natural environment.
And this is quite apart from ever understanding our natural environment. AI might be able to correctly identify trees, but – so far, and perhaps forever – it will not be able to grasp the concept of a forest from seeing the trees.
The final – and perhaps most devastating – problem for AI is that even if we could create a robot that recognizes our natural environment through the five senses, such an AI system would need to make all five senses work together in harmony. Even the world’s most advanced robot (as of June 2023) – Ameca – struggles with this.
Unsurprisingly, most AI robots – even Ameca – are best suited to a confined environment: a board game like Go, chess, checkers, or poker, or a factory floor. AI is perfect for a closed system like a board game with set rules and a known board. Yet the same AI system that can beat any human at a board game – impressive as that is – cannot identify a coffee maker sitting on the very same table, next to the board.
Given all this, it is not surprising that the global AI community has been heavily criticized for grossly over-promising relative to what it actually delivers – in other words, for promising too much and delivering too little.
This continues to be the case today. In May 2023, AI’s latest über-fantastic claim was that AI will end humanity. It most certainly will not.
Here is a small thought experiment suggesting that AI’s hallucinated end of the world might not come so soon:
Take a moment to stop reading this article and look around. You may be in front of a computer, on your phone, in a coffee shop, on a local commuter train, at home, or sitting near a lake in the spring sun. AI’s machine-learning approach doesn’t reflect this. AI assumes that an intelligent system operates with a perception-reason-act-feedback loop. AI is inherently decoupled from its environment, while you and I are not.
Worse, AI-driven computer systems tend to wait passively to be told what to do, while human beings take an active role. Conceivably, had we waited passively, we might still be sitting in trees in Africa. Even more important is the fact that human beings – for hundreds of thousands of years – have been equipped to deal with uncertainty.
At times, we can take care of many different uncertainties simultaneously. By contrast, uncertainties are something AI finds very hard to handle.
Put simply, machine-intelligence-based AI is not versed in social and human understanding, nor in our cultural and social world. It has virtually no idea about philosophy, cognitive science, or even human-like logic. Instead, AI is mostly focused on three things:
- Probability: the likelihood of what comes next: a man goes walking with his…? dog;
- Statistics: analyzing huge amounts of data and finding statistical correlations; and,
- Algorithm: expressing this in codes and algorithms that can predict outcomes.
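The three points above can be illustrated with a toy next-word predictor. The corpus below is invented for the example; the program simply counts which word most often follows “his” and predicts it – probability, statistics, and algorithm in a dozen lines, with no understanding of men or dogs.

```python
from collections import Counter

# A tiny, made-up corpus for illustration.
corpus = ("a man goes walking with his dog . "
          "she met him and his dog . he lost his hat . "
          "he walked his dog home").split()

# Statistics: count which words follow "his" in the data.
followers = Counter(nxt for word, nxt in zip(corpus, corpus[1:])
                    if word == "his")

# Probability: predict the most likely next word.
prediction = followers.most_common(1)[0][0]
print(prediction)  # "dog"
```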
AI’s algorithmic restrictions make it very hard for AI to deal with – read: it does not understand – reality and the unpredictability that all too often comes with it.
Basically, AI is confined to its algorithms and the data fed into those algorithms. As a consequence, even the much-hyped ChatGPT only knows what it finds on the Internet. We can look out of the window and see reality – ChatGPT can’t.
Even more problematic is that AI doesn’t know in advance which features are going to be relevant and which irrelevant. As a consequence, AI programmers might be tempted to include everything. This is a very big problem for AI.
AI researchers call this the curse of dimensionality. Here is one of the true problems for virtually all of AI: the more features an AI system includes, the more training data it has to be given. This voluminous data requirement will, inevitably, slow down the AI algorithm’s ability to learn, to make decisions, and to function.
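A rough sketch of the curse of dimensionality: if each feature can take just ten distinct values (an arbitrary assumption), the number of possible input combinations – and hence the training data needed to cover them – grows exponentially with the number of features.

```python
VALUES_PER_FEATURE = 10  # an illustrative assumption

for n_features in (1, 3, 6, 10):
    combinations = VALUES_PER_FEATURE ** n_features
    print(f"{n_features:2d} features -> {combinations:,} input combinations")
```

At ten features there are already ten billion combinations to cover – which is why adding features without limit quickly becomes intractable.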
As for the still-awaited self-driving car, it is a bit like Waiting for Godot – with Godot never coming! This remains a major problem. In other words, the sheer complexity, uncertainty, and inherent unpredictability of our environment is bad news for AI.
It also means that, for example, a simple road sign that has been altered or partly obscured – say, by an advertising sticker or a “garage sale this way” sign – is not all that much more difficult for a human driver to read and understand.
The driver can easily and correctly interpret the road sign. Yet a road sign that is incomplete, damaged, faded, partly obscured, overgrown by a hedge, etc. is often interpreted completely wrongly by the AI algorithm that guides a driverless car.
As a consequence of much of what has been said above, the first death involving an autopilot-driven car made headlines around the world. It happened in May 2016, when a Tesla on Autopilot drove into an eighteen-wheeler truck, killing the driver. The car’s sensors had been confused by the sight of the white truck against a bright sky.
Seven years later, we are still waiting for driverless cars. Currently, the AI community has only a rudimentary understanding of “some” components of human intelligence but still has basically no idea how to build an AI system that integrates these components. As a consequence, even the world’s best AI systems still fail to show any meaningful “understanding” of what they are doing.
In other words, AI is miles away from an AI system that can answer the question: what is it like to be … ? As strange as it may sound, human beings still have far more in common with, for example, a rat, a cat, or a dog than with AI.
Simultaneously, AI still has more in common with your toaster, your refrigerator, and your bedside alarm clock than with human intelligence. During the last half-century, not much has changed for AI.
When comparing human intelligence with AI’s machine intelligence, AI engineers like to point to the Turing test. But even if AI were to pass the Turing test – if it ever does – it will not exhibit “understanding”.
As a consequence, no matter how much AI “appears to have” or “pretends to have” an understanding, it would still be nothing more than a fantasy to believe that AI’s machine intelligence actually understands. Behind the often rather shiny promises, media hype, and plausible-looking online videos about AI and robots, there is next to nothing there.