By June 2023, the global mass media continued to hype up the hysteria about artificial intelligence (AI) with outlandish statements – such as that AI could destroy democracy – and headlines like "AI risks leading humanity to extinction, experts warn".
Yet, a serious look at AI will tell you that your toaster is not going to convert itself into an AI-driven, Terminator-style killer robot.
For a start, one of the more obvious – and still unsolved – problems of AI and its algorithms remains facial recognition – the capability that would tell your toaster/killer robot whom to kill.
Unlike such hallucinations, one of the real issues with facial recognition showed up some time ago. In 2015, Google Photos misrecognized photos of African Americans as gorillas. Things did not get better.
In 2016, it was revealed that a Google image search for, say, "professional hair style for work" returned pictures almost exclusively of white women. Yet the same search for "unprofessional hair style for work" led Google's sophisticated algorithms to mostly black women.
Worse for Google and the many others who advocate artificial intelligence, AI's algorithmic systems simply mimic what AI experts call "input data". Even more problematic, computer code does this without regard to social and moral values, bias, racism, or even the quality of the data. AI is simply incapable of "understanding" what it is doing.
It gets worse still when society's bias, stereotypes, racism, and so on are amplified inside the feedback loops of the echo chambers operating on several online platforms. In other feedback loops, an AI system is trained on data that were themselves generated by algorithms in an earlier setting – a vicious circle.
This is a kind of Norbert Wiener-style feedback loop inside artificial intelligence that can – and in some cases actually will – reinforce bias and racism without ever understanding what racism is.
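The vicious circle above can be sketched in a few lines of code. This is a minimal toy simulation with invented numbers, not a model of any real system: a model slightly over-represents the majority group in its outputs, and those outputs become the next round's training data, so the initial skew compounds.

```python
# Toy sketch of a bias-reinforcing feedback loop (hypothetical numbers).
# Each round, the model amplifies the majority group's share in its output,
# and that output becomes the next round's "input data".

def retrain(share_of_group_a, amplification=1.2):
    """One training round: the majority group is over-represented by a
    small factor, then shares are renormalized to sum to 1."""
    boosted = share_of_group_a * amplification
    return boosted / (boosted + (1 - share_of_group_a))

share = 0.6  # initial training data: 60% group A
history = [share]
for _ in range(10):
    share = retrain(share)
    history.append(share)

# The share of group A drifts steadily toward 1.0 - the loop reinforces
# the original bias without any "understanding" of what bias is.
print([round(s, 2) for s in history])
```

The point of the sketch is only that a small, mechanical amplification per round is enough to produce runaway skew once outputs are fed back in as inputs.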
On the upswing, in July 2018, Google Images replied to a search for "idiot" with pictures of Donald Trump. On a more serious note, even stalkers have now started to use AI techniques to monitor and manipulate their victims. At the same time, spammers use AI, for example, to dodge the CAPTCHA tests that websites use to make sure you are a human and not a robot.
Worse, when an extremely efficient technology like AI is used against an out-group, the potential for online hate speech and communicative violence is huge. In other words, AI may encourage hate speech, but since today's AI is driven – rather slavishly – by data without any real understanding, it simply does not grasp the impact of hate speech on its victims.
For the most part, AI remains confined to a range of specific problems to be solved. At the same time, AI tends to sidestep core problems even when it is intended to address them. AI simply cannot get the wider picture. Its machine-learning systems analyze large amounts of data and come up with the most likely course of action – that is pretty much it.
This also means that AI relies on probabilistic models that analyze incoming data to ascertain (read: not understand) the likelihood of the most suitable answers. The output is what AI considers most probable – in many cases, not even the most plausible outcome.
This standard approach was vital, for example, to the success of IBM's Watson, and much of it is likely to have a continued impact on IBM's DeepQA and beyond.
For instance, AlphaGo required 30 million games to reach its superhuman performance – far more than any one human could ever play in an entire lifetime. Algorithms are perfect creations for games. What AI experts call deep learning may optimize game-playing, but it simply is not built for human-style learning and understanding.
AI can easily confuse, let's say, a real refrigerator with a picture of a fridge. Even more problematic, it cannot give human-style explanations of why ChatGPT's answers are correct or not. Even more troublesome is that AI cannot explain the difference between, for example, these four statements:
- Free Nelson Mandela – there are no free giveaways of Nelson Mandela;
- Free Horse Manure – there is no horse manure to be freed from a prison;
- JFK, "Ich bin ein Berliner" – JFK meant: I am a citizen of the German city of Berlin;
- JFK, "Ich bin ein Berliner" – Google thinks: JFK is a German jelly doughnut with no central hole.
The AI pioneers of deep learning know that AI's deep neural networks tend to learn only superficial statistical regularities: data analysis, correlations, predictions, and so on. They do this within a set – and often rather narrowly – specified dataset. They cannot comprehend higher-level abstract concepts or the theory behind them.
In other words: you can train AI, but you cannot educate AI. AI is a bit like a pet dog. You can train a dog, but you cannot educate your dog to have a sensible discussion. Even a well-trained dog will simply not discuss today's inflation rate, the meaning of capitalism or Picasso, or Gwyneth Paltrow's sweetly scented products. Would AI ever get the fine nuances of language – idioms, analogies, metaphorical speech, antiphrasis, double entendre, or irony?
Yet, understanding irony, Paltrow, capitalism, Picasso, etc. is a question of education – not training. On the whole, there is a marked difference between training (AI) and education (human beings):
| AI = Training | Human = Education |
| --- | --- |
| Seeks to improve its abilities | Seeks to produce knowledge and education |
| Improves performance | Creates reasoning and judgments |
| Method of skill development | Method to knowledge – philosophy of epistemology |
| Teaches tasks & creates predictions | Teaches concepts, theories, critical analysis |
| Practical applications | Theoretical and conceptual application |
| Short-term outcomes | Long-term development |
| Narrow range – applied issues | Wide range – cultural issues |
| Related to tasks | Related to wider education |
| Creates statistical correlations | Is aware of ethical implications |
In short, AI creates statistical correlations and suggests the most likely option for what comes next – for example, in the sentence "a man with a leash goes out to walk his …?", AI will find the most likely next word: "dog". In another example, when a recent online seminar on AI and judgment started with the words "the great German philosopher …", AI might predict the next word: "Kant".
Yet, unlike human beings, AI does not understand the meaning of what occurred – never mind the moral philosophy of Immanuel Kant. As a consequence, when it comes to complex chains of reasoning, judgments, and morality, AI is at a loss.
Overall, AI does an outstanding job of matching words – but fails badly at understanding. When asked for directions to the nearest airport, three AI assistants used to respond as follows:
- Google Assistant gave a list of travel agents;
- Siri gave directions to a seaplane base;
- Cortana gave a list of airline ticket websites.
Worse, in my own experiment, Siri could not even understand my surname – not even when it was pronounced with an American accent. In other words, even after almost seven decades of AI development – starting with the Dartmouth meeting in the summer of 1956 – AI remains frightfully close to being functionally illiterate. Here is a simple exchange that is hard for AI to comprehend:
Medical doctor: Do you get chest pain with any sort of exertion?
Patient: Well, I was cutting the yard last week and I felt like an elephant was sitting on me (pointing to the chest).
Meanwhile, back at an ordinary school, student Lisa is reading a schoolbook story. Her goal – unlike AI's – is not to construct a correlation of statistically plausible matches. Unlike AI, Lisa reconstructs a world that an author has tried to share with her – something AI still cannot do.
In other words, even the most sophisticated statistical algorithm is no substitute for real-world understanding. At an even simpler level, the sentence "people can fish" can mean two different things:
- In the one understanding, it means that people are able to go fishing;
- In the second understanding, it means that people are able to pack fish into cans.
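The ambiguity above comes from the fact that single words carry multiple grammatical roles. A toy sketch makes this concrete – the three-word lexicon and the two grammatical patterns below are invented purely for illustration, not drawn from any real parser:

```python
from itertools import product

# Toy sketch: why "people can fish" is ambiguous. Each word may carry
# several parts of speech; a parser must choose among the combinations.
lexicon = {
    "people": ["NOUN"],
    "can":    ["MODAL", "VERB"],   # "is able to" vs "to put into cans"
    "fish":   ["VERB", "NOUN"],    # "to go fishing" vs "the animal"
}

def readings(sentence):
    words = sentence.split()
    valid = []
    for tags in product(*(lexicon[w] for w in words)):
        # Two grammatical patterns fit: NOUN MODAL VERB ("are able to fish")
        # and NOUN VERB NOUN ("put fish into cans").
        if tags in {("NOUN", "MODAL", "VERB"), ("NOUN", "VERB", "NOUN")}:
            valid.append(tags)
    return valid

print(readings("people can fish"))  # two valid readings -> ambiguity
```

A statistical system can enumerate both readings, but picking the intended one requires the real-world context that humans apply effortlessly.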
Take the aforementioned school student Lisa and the sentence "Lisa couldn't lift Gemma because she was too heavy". The "she" could refer to either of the two: Lisa or Gemma. Interestingly, human beings don't even notice such ambiguities; with very little conscious effort, they interpret them correctly. For AI, this is very hard to do.
Yet, apart from these problems, some authors still fear a Terminator III-like future. They warn of a rise of the machines: super-intelligent robots set to attack us. Yet, when the killer robots come, there are five things you need to do:
- Doors: Close the front door and lock it – today’s robots still struggle with a simple doorknob;
- Color: Paint your door and the doorknob black – it reduces the robot’s ability to open the door;
- Babies: Wear a t-shirt with a picture of a baby – robots think you are a baby and will leave you alone;
- Bananas: Go upstairs, leave a trail of banana peels and nails, and jump onto a table – the robot is done;
- Battery: Call the emergency phone number and just wait – the robot’s battery will soon run flat.
Meanwhile, in reality, AI-equipped robots do not have personalities and human-like desires. Robots have no intention of massacring eight billion human beings; they will not take your house, conquer your street, or enslave us as evil AI overlords. All of this is pure fiction. In reality, most robots work on assembly lines.
They do rather dim-witted, dull, and highly routine tasks that people are all too happy to give up. Pretty much the same applies to AI and to ChatGPT.
Yet, AI can find the way your (still not self-driving) car should take and, for example, which condoms Amazon customers are most likely to buy: Durex Thin Feel Latex Condoms, Regular Fit, Pack of 30.
Never mind the fanciful – at times rather fantastic and far-fetched – videos put out by Boston Dynamics about its Atlas robot. In some cases, such videos are artificially sped up: what a robot does in a minute or an hour is compressed into a few seconds. A second problem is that lab videos are not representative of real-life surroundings.
It is still very likely that a future household robot sent into the kitchen to fetch a glass of wine will fail to spot the cockroach in the wine glass. It will get you a glass of wine, but one that may be spiced up with a cockroach – despite its sophisticated AI. Any five-year-old would spot the problem.
Even more problematic for AI is not just the intelligence of a five-year-old but also a child's emotional intelligence and – worse for AI – Howard Gardner's eight forms of intelligence.
In other words, AI-powered statistics is an approximation: it "comes close to" – but no more. AI struggles with meaning. Moreover, AI cannot – today – capture the real thing. It gets even more problematic for AI when you consider that language tends to be rather underspecified.
Human beings often don't say everything they actually mean. Worse, they also use double meanings – the basis of every British Carry On! movie. Even when James Bond said "things are shaping up nicely", AI might not pick up what 007 really meant.
In short, AI is reasonably okay when it comes to Chomsky’s syntactic structure (grammar) but human beings are way superior when it comes to semantic structure (meaning).
All this is extremely challenging for AI – and this is without even looking at what is called the reality gap: what happens when AI is let loose into the real world? What occurs inside algorithms is all too often entirely different from what happens when AI is no longer a computer simulation. Hence, Siri failed to recognize my surname.
So much for AI's rather severe limits: its lack of understanding; its lack of common sense; an almost inherent bias; next to no creativity; absolutely no emotions and no EQ (i.e. self-awareness, social awareness, emotional self-control, and human relationships); and, finally, its lack of robustness.
Beyond all that, there are still two major unsolved AI problems: the safety of AI and the ethics of AI – or, better, the lack thereof.
Thomas Klikauer is the author of German Conspiracy Fantasies – out now on Amazon!