Algorithms are human inventions, and as such they will make mistakes. Yet when their mistakes mean someone loses their freedom, it is worth keeping in mind that it was a human being who was responsible for putting that power in the hands of a machine – an artificial intelligence machine.
Only recently, Google’s Bard chatbot made a gaffe about NASA’s James Webb Space Telescope, an error that wiped roughly $100bn off the market value of Google’s parent company Alphabet. This isn’t new. In 2016, Microsoft’s Tay Twitter chatbot linked feminism to cancer and suggested the Holocaust did not happen.
In criminal justice, for example, something similar applies. Despite knowing that human judges might make more errors than algorithms, many offenders still prefer a human judge over an algorithm. Seemingly, people prefer the human touch – even when it disadvantages them.
To set up such algorithms, their creators need a clear idea of exactly what they want the algorithms to achieve. As a consequence, algorithms also need to encode a solid grasp of human concerns – of justice and injustice, for example.
Yet it is the ability to learn that endows some algorithms with what we think of as artificial intelligence, or AI. Armed with that ability, there are two things people want from, for example, an algorithm that assists in breast cancer screening:
- such an algorithm should be sensitive enough to pick up the abnormalities present in every breast that has a tumor, rather than skipping over suspicious pixels in the scanned image and declaring it clear;
- but such an algorithm should also be specific enough not to flag perfectly normal breast tissue as suspicious.
In general, the creators of such algorithms want as few false positives and false negatives as possible. Yet in all these cases a human-machine interaction is needed. The sensible intention is to combine the strengths of a human with those of the algorithmic machine.
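To make those two demands concrete: sensitivity and specificity are simple ratios over the four possible outcomes of a screening decision. The sketch below is purely illustrative, with invented counts for a hypothetical screening algorithm rather than figures from any real programme.

```python
# Illustrative only: invented counts for a hypothetical screening algorithm.
true_positives = 92    # scans with a tumour that the algorithm flagged
false_negatives = 8    # scans with a tumour that the algorithm missed
true_negatives = 940   # healthy scans correctly passed as clear
false_positives = 60   # healthy scans wrongly flagged as suspicious

# Sensitivity: of all scans that really contain a tumour, how many are caught?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of all healthy scans, how many are correctly left alone?
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity: {sensitivity:.2%}")  # 92.00%
print(f"specificity: {specificity:.2%}")  # 94.00%
```

Pushing either number up usually pushes the other down, which is exactly why the human reader is kept in the loop.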
Inherently, algorithms are good at the rather unrewarding routine task of searching through enormous amounts of information in digital images. They do the same with job applications. Their job is to highlight a few key areas of interest for the medical doctor or the HR officer. After this work is done, a human being – necessarily and sensibly – takes over.
Yet an algorithmic machine working, for example, for the British – and ever more privatized – NHS or for a corporate, for-profit health insurer will try to minimize costs in whatever way possible. That goal can be, and is, programmed into the algorithm. At the same time, corporations like to make the algorithm appear neutral – a mathematical, objective formula.
Similarly, an algorithm designed to serve a highly profitable pharmaceutical company will aim to promote the use of one particular drug rather than another. Such a Big Pharma algorithm is likely to suggest the use of commercial drugs rather than any other form of – not so profitable – treatment.
Algorithms not only change medicine; they also change, for example, how we drive cars. A driverless car is based on algorithms, and the degree of automation such algorithms provide is commonly grouped into six levels:
Level 0: no automation whatsoever;
Level 1: feet off (driver assistance, such as cruise control);
Level 2: hands off (partial automation);
Level 3: eyes off (conditional automation);
Level 4: brain off (high automation); and
Level 5: fully autonomous.
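One way to make the levels concrete is to write them down as a small data structure. The sketch below is illustrative only: the names follow the "feet off, hands off, eyes off, brain off" shorthand used above, and the helper function encodes the commonly stated rule of thumb that up to Level 2 the human must still watch the road.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Commonly cited levels of driving automation (illustrative shorthand)."""
    NO_AUTOMATION = 0      # the human does everything
    FEET_OFF = 1           # driver assistance, e.g. cruise control
    HANDS_OFF = 2          # partial automation
    EYES_OFF = 3           # conditional automation
    BRAIN_OFF = 4          # high automation
    FULLY_AUTONOMOUS = 5   # no human driver needed at all

def human_must_watch_road(level: AutomationLevel) -> bool:
    # Rule of thumb: at levels 0-2 the human is still expected to monitor the road
    # at all times; from level 3 upwards the system takes over that monitoring,
    # within the limits of its design.
    return level <= AutomationLevel.HANDS_OFF

print(human_must_watch_road(AutomationLevel.HANDS_OFF))  # True
print(human_must_watch_road(AutomationLevel.EYES_OFF))   # False
```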
There are hidden dangers in relying too heavily on automated systems driven by algorithms, not just in autonomous cars. While most algorithms are built to improve human performance, this also has a downside.
This – rather oddly – is set to result in a reduction of human abilities. Most of us have experienced this already. Today, some people have trouble remembering a simple phone number. After decades of typing, some even have trouble reading their own handwriting, while others cannot navigate to a new location without GPS.
When algorithmic technology does all of this for people, there is less and less opportunity to practice these human skills. Worse, there is a general worry that the same will happen with algorithm-guided self-driving cars – very soon. Of course, in driving the stakes are a lot higher than in reading your own handwriting.
With self-driving cars, things might get worse still. It is next to impossible – even for the most motivated human – to maintain effective visual attention on a source of information, such as a boring motorway, on which next to nothing happens. Many people can manage no more than around half an hour; after that, concentration fades.
On the upside, we humans are relatively good at understanding detail, analyzing context, applying experience, and distinguishing patterns. At the same time, human beings are not that good at sustained attention, precision, consistency, or being fully aware of our complete surroundings. In other words, human beings have the opposite set of skills to an algorithm's.
Yet in the area of crime, for example, algorithms can have an even more problematic impact. One of the many shortcomings lies in the use of what is known as the cops-on-the-dots tactic, in which police are sent into a local area to fight crime on the strength of algorithmic predictions. Using algorithms in this way runs the very real risk of creating a devastating, if not outright toxic, feedback loop.
It works in the following way. Police are sent into a poor neighborhood that has shown a high level of crime in the past. The algorithm then predicts that more crime is likely to happen there in the future, because its predictions are based on past evidence, and those predictions feed its decision-making.
As a consequence, police will be sent to the same neighborhood, and – over and over again – they will detect more crimes there. Worse, over time the algorithm will predict still more crime and ever more police will be sent in. And so the algorithm's feedback loop goes.
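A toy simulation makes the loop visible. Everything below is invented for illustration: two neighborhoods with the same underlying crime rate, where the cops-on-the-dots rule simply sends most patrols wherever more crime was recorded before.

```python
import random

random.seed(0)

# All numbers below are invented for illustration.
TRUE_CRIME_RATE = 0.3            # identical chance of a patrol detecting a crime in either area
PATROLS_FOR_PREDICTED_HOTSPOT = 80
PATROLS_FOR_THE_OTHER_AREA = 20

# Neighborhood A starts with more recorded crime only because it was policed more in the past.
recorded = {"A": 60, "B": 40}

def detections(patrols: int) -> int:
    """Each patrol detects a crime with the same probability in both areas."""
    return sum(random.random() < TRUE_CRIME_RATE for _ in range(patrols))

for _ in range(10):
    # "Cops on the dots": send most patrols wherever more crime was recorded before.
    hotspot = max(recorded, key=recorded.get)
    other = "B" if hotspot == "A" else "A"
    recorded[hotspot] += detections(PATROLS_FOR_PREDICTED_HOTSPOT)
    recorded[other] += detections(PATROLS_FOR_THE_OTHER_AREA)

print(recorded)
# Despite identical true crime rates, the area that started with more records
# ends up with far more recorded crime - the feedback loop in miniature.
```

Because detections depend on where patrols are sent, the records confirm the prediction regardless of what is actually happening on the ground.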
Things can get even worse in the area of facial recognition algorithms. We know that, in many cases, such algorithms can do better than humans. Yet when they are applied to hunt for criminals, the consequences of a serious misidentification soon become apparent. The question, for algorithms as well as for human beings, remains: how easily could one person's identity be confused with another's?
Yet we know that the chance of you having your very own real Doppelgänger is extremely small. In fact, the chance of two human beings having exactly the same face is less than one in a trillion.
The problem for the algorithms is not finding your Doppelgänger. The problem for algorithm-driven face recognition is that it is shockingly easy to confuse unfamiliar faces. Worse, this remains the case even when two people bear only a passing resemblance to each other.
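Part of the reason such confusion is so consequential at scale is the base-rate problem: even a very accurate matcher, run against a large population in search of one suspect, produces mostly false matches, because true matches are vanishingly rare. The back-of-the-envelope sketch below uses invented numbers, not measurements of any real system.

```python
# Invented, illustrative numbers - not measurements of any real system.
population_searched = 1_000_000   # faces compared against one suspect's photo
true_matches_present = 1          # the actual suspect appears once in that population
false_match_rate = 0.0001         # algorithm wrongly "matches" 0.01% of non-suspects
true_match_rate = 0.99            # algorithm correctly matches the suspect 99% of the time

expected_false_matches = (population_searched - true_matches_present) * false_match_rate
expected_true_matches = true_matches_present * true_match_rate

share_of_alerts_that_are_wrong = expected_false_matches / (
    expected_false_matches + expected_true_matches
)

print(f"expected false matches: {expected_false_matches:.0f}")                              # ~100
print(f"chance an alert points at the wrong person: {share_of_alerts_that_are_wrong:.0%}")  # ~99%
```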
While facial misrecognition continues to be a problem for algorithms, it also explains why teenagers can still get away with "using" the ID card of an older friend to enter a club or buy alcohol. Yet it gets even worse – not for the teenagers but for the crime-fighting algorithm.
Eyewitness misidentification plays a role in more than 70% of wrongful convictions that are later overturned. A rate of 70% is shocking – if not outright devastating – not just for algorithms but for ordinary police, criminal lawyers, prosecutors, and judges.
Unsurprisingly, when it comes to familiar people – those we know – human beings are enormously successful at recognizing known faces. The problem is – whether Doppelgänger or not – that “similarity” lies to some extent in the eye of the beholder.
All this also means that algorithmic recognition – as an instrument of facial identification – is nowhere close to something like DNA evidence. Unlike algorithm-supported facial recognition, DNA analysis has a rather strong statistical and scientific underpinning.
In the end, algorithms – as advantageous and remarkable as they are – also carry rafts of complications, problems, and false beliefs about their abilities. Algorithms will lead to dangerous outcomes in decision-making, whether in driving, in facial recognition, or elsewhere. And this is the case without even touching the issues of poverty, class, and ethnicity.
Yet almost everywhere you look – and even when you do not look – algorithms are found in the judicial system, in healthcare, in policing, in online shopping, and so on. Self-evidently, algorithms raise problems of privacy, bias, discrimination, racism, error, accountability, and transparency. With the increased use of and reliance on algorithms, these issues will not go away easily – instead, they are likely to become more severe.
Undoubtedly, algorithms will continue to make mistakes. Algorithms will also continue to be unfair, prejudicial, and unjust. Yet all of this should in no way distract us from the fight to make algorithms more accurate, less biased, less racist, and less discriminatory.
It appears we will not be able to simply delete them or ban them from the Internet. For one, Amazon would go nuts – without algorithms, Amazon would collapse within hours.
Simultaneously, we need to recognize that algorithms aren’t perfect – perhaps they will never be. We also should realize that any assumption of their mathematical authority might turn out to be a mere hallucination.
People should not expose themselves to the dangerous perception that algorithmic and computational machines are objective masters – for they are not. Given the fact that human beings program them and tell them what to do, they will never be objective.
Instead, we would be well advised to start treating algorithms as a source of power – particularly when algorithms are used by corporations. And such a warning should be issued long before we get to Zuboff’s surveillance capitalism.
In other words, we need to critically question the decision-making power of algorithms. We also need to dissect the motives – profit-driven, hidden, and not so hidden – of those who create and use algorithms.
Finally, we also need to demand to know – from large corporations and from private, privatized, and state institutions – who stands to benefit when algorithms are brought to bear. The users of algorithms need to be held accountable for the mistakes algorithms will inevitably make.