In May 2023, the European Parliament’s lead committees voted to strengthen the EU’s legislative proposal on artificial intelligence (AI) as the AI Act headed towards a plenary vote. It was another step towards a comprehensive regulation of AI in Europe.
Within an unusually short time, members of the European Parliament had reached an agreement on a draft of the world’s first comprehensive regulation of artificial intelligence. In the end, EU lawmakers voted to forward this first draft of what is set to become the EU’s Artificial Intelligence Act to the next stage of the EU’s law-making procedure.
Details of the AI Act – as it will be known – still have to be worked out together with the governments of the 27 EU member states, which together represent some 450 million people. The AI Act is a compromise: it finds common ground between EU conservatives staunchly in favor of mass surveillance, on the one hand, and the danger of a somewhat over-regulated AI, on the other.
Balancing both, the new law is a typical European compromise, achieved by the EU’s parliament, that seeks to regulate AI to protect civil rights while simultaneously stimulating innovation. Even Elon Musk – once a staunch advocate of deregulation, neoliberal fundamentalism, and super-libertarianism – has agreed that AI should be regulated.
The EU’s proposal classifies AI systems according to four risk levels – from minimal risk through limited and high risk up to unacceptable risk. Yet even high-risk AI systems would not be banned outright.
However, a very high degree of transparency would be required for the use of high-risk AI systems. This is also designed for the rapidly developing and potentially high-risk field that AI experts call “generative AI”. Such a general-purpose AI system would, for example, also have to disclose whether copyrighted material was used in its training.
Yet there are also widespread concerns about such AI systems giving out incorrect answers, as well as about the ever-present issue of data protection. In addition, China’s plans for an ideologically oriented AI system have intensified the EU’s calls for an EU-wide regulation of AI.
Most recently, members of the European Parliament (MEPs) voted in favor of introducing regulatory requirements for artificial intelligence. European civil rights activists and NGOs have already welcomed the EU’s move.
On the upside, there will be no AI-controlled monitoring of public spaces in Europe: MEPs voted for a complete ban on biometric mass surveillance in public spaces, as well as on the creation of facial recognition databases.
European human rights and consumer rights associations, along with several political parties, have welcomed the pro-human-rights outcome of the EU’s AI Act. Germany’s AlgorithmWatch – funded by the German state, trade unions, and the Robert Bosch foundation – even calls it a strong signal for human rights.
Altogether, the EU has been working on AI regulation for more than two years. In recent weeks, AI has become more tangible to the general public, triggered above all by the text generator ChatGPT and image generators such as Midjourney and Stable Diffusion.
In the highly monopolized search engine “market” – Google’s global share is around 93% – there is a kind of pretend “race” between the elephant in the room (Google) and a handful of mice (Bing, Yahoo, Yandex, etc.). Some indulge in the hallucination that one of them might yet compete with the world’s dominant provider.
Microsoft’s Bing has already integrated ChatGPT into its search engine. Shortly afterwards, Google announced at its developer conference that it would provide superior answers to entries formulated as questions in its classic search engine.
In Europe, all of these types of Internet services will soon be regulated by the AI Act. This is needed not least for AI-supported text generators, which have made negative headlines because they can easily produce texts filled with incorrect information, misleading information, and downright disinformation.
Yet the expected taming of AI chatbots will take a lot of time and effort, says Alexandra Geese, digital expert for the European Greens. EU politicians do not want to give AI systems like ChatGPT any leeway to spread disinformation, hate speech, or false and discriminatory information.
Many in Europe believe – perhaps falsely – that the EU will put people above profits, quite a hard act to achieve under capitalism. Still, AlgorithmWatch welcomed Europe’s ban on real-time biometric recognition techniques used for facial recognition. These AI instruments are a basis for mass surveillance.
They also invade people’s privacy, and they are a fertile breeding ground for discrimination, bias, and stereotyping. Against this, the European Parliament’s vote is a historic breakthrough, particularly for those who seek to prevent a Terminator-like dystopian future of biometric mass surveillance in Europe modeled, for example, on the Chinese system.
Praise for the EU’s new AI law also came from the European Digital Rights (EDRi) network. EDRi argues that the EU’s recent decision shows that the EU is willing to put people above profits. However, some aspects of the AI Act have also received criticism.
For one, AI developers could decide “for themselves” whether their AI system is significant enough to be classified as a “high-risk product” – even though the strict requirements apply only to those forms of AI classified as high-risk.
Most recently, another EU institution – the Council of the European Union, representing the 27 member states – also agreed on a common AI position. However, several civil rights organizations have criticized the proposal’s numerous loopholes. The Council’s plan, for example, would still make biometric recognition of people possible, whereas the EU parliament’s version of the new law rejects AI-powered biometric recognition software.
With current developments in AI, this – at least potentially – no longer looks like material for science fiction writers. Many have argued that with the launch of ChatGPT in November 2022, it became rather clear what AI can do. Potentially even more worrying is how rapidly AI is developing.
The ChatGPT application known to many is actually based on GPT-3.5. Yet the next version – GPT-4, with further versions to follow – is already on the market. It is significantly better in many respects. Many AI experts agree that the technology can be disruptive. Others have warned that with the next version, and the one after that, things might get tight for humans if the whole development is not placed on a regulated pathway.
Meanwhile, the three EU institutions will begin to negotiate the final text of the new AI law. Worse, the EU itself foresees a two-to-three-year implementation phase for the AI Act, during which it plans to set up new authorities while AI developers are given ample time to adapt to the new rules.
Yet the slowness of the EU’s regulatory machinery might be an additional problem. In three to five years, AI will be a completely different technology from what we see today. Worse, it could get really bad, as algorithmic discrimination can – hypothetically – increase as more and more people upload garbage (e.g. fake news, misinformation, disinformation, hate speech, etc.) onto the Internet.
Since a program like ChatGPT is trained by analyzing huge amounts of data from the Internet, AI can be fed huge amounts of garbage. Rubbish in, rubbish out! In other words, our discriminatory society reproduces itself in the discriminatory training data fed into AI programs like ChatGPT.
Back at the EU’s law-making process, one underlying approach runs through the EU’s AI Act: AI programs are assigned to one of four risk categories – unacceptable risk, high risk, limited risk, and minimal or no risk.
Depending on the area of application, an AI system may pose minimal or limited risk; in that case, none or only a few rules apply. By contrast, AI in areas such as human resources, education, critical infrastructure, and law enforcement is considered high-risk. In that case, rules on approval, data quality, transparency, and human supervision apply.
The law’s fourth category – unacceptable risk – means that the use of AI can be prohibited outright. This applies to the aforementioned real-time biometric monitoring of public spaces.
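The four-tier logic above can be sketched in a few lines of code. This is a purely illustrative Python sketch, not the Act’s legal text: the tier names follow the article, but the area-to-tier mapping and the one-line obligation summaries are this author’s shorthand.

```python
# Illustrative sketch of the AI Act's four risk tiers (not legal text).
RISK_TIERS = {
    "unacceptable": "prohibited (e.g. real-time biometric mass surveillance)",
    "high": "approval, data quality, transparency, human supervision",
    "limited": "transparency obligations only",
    "minimal": "no specific obligations",
}

# Hypothetical mapping of application areas named in the article to tiers.
AREA_TO_TIER = {
    "real-time biometric surveillance": "unacceptable",
    "human resources": "high",
    "education": "high",
    "critical infrastructure": "high",
    "law enforcement": "high",
    "chatbots": "limited",
    "spam filters": "minimal",
}

def obligations(area: str) -> str:
    """Return the risk tier and obligations for an application area."""
    tier = AREA_TO_TIER.get(area, "minimal")
    return f"{tier}: {RISK_TIERS[tier]}"

print(obligations("education"))
# high: approval, data quality, transparency, human supervision
```

The point of the tiered design is that obligations scale with the area of application rather than with the technology itself: the same model could fall into different tiers depending on where it is deployed.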
Overall, the time factor outlined above is not the only problem with the implementation of the EU’s AI Act. Perhaps an even greater problem is how to overcome the neoliberal ideology prevalent in the AI sector. All too often, it culminates in zealous deregulation (read: pro-business regulation), the phantasm of Ayn Rand’s demagogy, and rampant techno-libertarianism.
Worse, a significant part of capitalism’s elite and its entourage of functional apparatchiks is driven less by concerns about injustice and human rights violations than by neoliberalism’s free-market ideology and the hallucination of eternal competitiveness.
As a consequence, serious AI rules will be enforced never “against” but only “with” the dominant capital interests in mind – a mindset that lines up beautifully with ideologies like neoliberalism, deregulation, and uncontrolled techno-libertarianism.
And this is not to mention the very serious power of corporate lobbying, furnished with plenty of cash by multinational tech corporations lurking in the background. Perhaps it is still easier to imagine the end of the world as caused by AI than to imagine the end of corporate capitalism.
ZNetwork is funded solely through the generosity of its readers.