Another brick in the wall - Europe's ambivalence towards innovation and artificial intelligence
Some critical thoughts on how Europe's AI Act is engaging in a difficult balancing act between over-regulation and AI innovation in the geopolitical race for AI dominance
With the General Data Protection Regulation, the Chips Act, the Digital Markets Act, the Data Act, and the recent AI Act, Europe's ambition is to be the world's most decisive regulator rather than a leading global innovator, although these acts could be considered regulatory marvels in their own right. As expected, new regulations are hailed by professionals with zero skin in the economic game, such as auditors, lawyers, consultants, lobbyists, and civil servants. But Europe's regulatory economics may impact millions of business owners, entrepreneurs, and CEOs in the innovation trenches, many of whom are working in artificial intelligence.
What we will be discussing:
Europe's geopolitical positioning on the global stage in the race for artificial intelligence using regulatory economics in a world of asymmetric conflicts
The dangers of overregulation, the shortcomings, and how safeguards for artificial intelligence are more a topic of culture, ethics, and morality than law
Why innovation always trumps regulations, yet how Europe created a moat by securing a first-mover advantage in regulating artificial intelligence
The dichotomy of artificial intelligence versus human stupidity and where the real danger of AGI/ASI as dual-use technology lies
Before we enter this long read, it is worth reading Europe's AI Act FAQ and using a simple self-assessment tool while listening to the preliminary remarks by Thierry Breton, European Commissioner for Internal Market with responsibility for space, defense, and security, during the press conference following the Artificial Intelligence Act Trilogue on 9 December 2023 in Brussels.
While prominent figures, such as OpenAI's Sam Altman, warned against the overregulation of artificial intelligence, the world's most restrictive regime on technology development was pushed ahead. Thierry Breton hailed the enactment of the AI Act as follows:
The EU becomes the very first continent to set clear rules for the use of AI (…) The AI Act is much more than a rule book — it’s a launch pad for EU start-ups and researchers to lead the global AI race.
The Brussels Effect
If you can't beat them, rush to regulate them. That is Europe's intuitive answer to a rapidly changing world. Over the past decade, the old continent has evidently been snoozing through a period of economic prosperity and political stability, during which a socialism-infused, lackluster attitude towards technology innovation has allowed existential risk to steadily accumulate under the geopolitical surface.
Much like the Tyrannosaurus rex, EU regulators seem to hunt by movement. Whenever an industry appears to be moving too dynamically, Europe rushes in and aims to regulate the accelerating movement. Pushing regulations would be a relatively localized problem if Europe were a closed market. But since most tech companies either have a presence in Europe or serve the 195 million internet users in the European market, a law passed by Europe's regulatory coterie has an immediate global effect. Political scientist John McCormick offered a more sobering view in his study on Europeanism, where this attribute of cosmopolitanism is associated with the belief that all Europeans, and possibly all humans, belong to a single moral community that transcends state boundaries or national identities.
Addressing the dangers of irresponsible use of AI is not new; the G20 AI Principles of 9 June 2019 and the World Health Organization report of 28 June 2021 influenced the European AI Act long before its enactment. While taking pole position, Europe is only one of many global policymakers aiming to address these technological challenges. Other world powers are rapidly advancing in regulating trustworthy AI, albeit by playing a game of catchup. For now, the People's Republic of China has been drafting either provincial AI regulations, such as the Shanghai Regulations and the Shenzhen AI Regulation, or technology-specific ones, such as the Provisions on the Administration of Deep Synthesis Internet Information Services (2022) or the Measures for the Management of Generative Artificial Intelligence (2023). Meanwhile, China seems to be playing the regulatory game fast and loose while struggling to establish horizontal AI regulation. The United States recently published The National Artificial Intelligence R&D Strategic Plan as a clear strategy for the use of AI, but it is still rushing to produce supporting regulations. While various state-level AI and data laws have been enacted, The Blueprint for an AI Bill of Rights, published in 2022, can be considered a first set of national principles on the responsible use of AI. Three versions of a comprehensive AI regulatory regime have emerged in Congress, two in the Senate and one in the House, but no comprehensive, clear regulatory framework has been implemented so far. It seems both the US and China could take a page out of the European playbook on establishing a comprehensive, horizontal AI regulatory environment that ensures the trustworthy use of artificial intelligence.
So, AI technology isn't new, and neither is its high place on policymakers' radars worldwide. But it wasn't until the rapid adoption of foundation models and generative AI in general that things kicked into high gear. Nobody questions that there are noble purposes behind this Brussels Effect, predominantly ensuring the safety of European users exposed to these innovative new technologies; Ursula von der Leyen, President of the European Commission, instead describes the AI Act as transposing European values to a new era.
Are European values really stimulating innovation?
To better understand these European values, we can digest the Treaty of Lisbon, but a more holistic interpretation was postulated by the philosophers Jürgen Habermas and Jacques Derrida, who argued that new values and habits had given contemporary Europe 'its own face' as a counterweight to the United States. In 2003, they outlined five facets of what they described as a common European political mentality, with Habermas, Germany's most influential contemporary philosopher, discussing how Europeans trust politics more than capitalist markets and display high support for the organizational and leading role of the state.
In times when free markets and capitalism play a considerable role in today's thriving technological innovations, such as artificial intelligence, Europhiles underestimate the near-future seismic shift in geopolitics regarding the significance of this technology. More than is the case anywhere else in the world, Europeans expect the state to intervene in the interests of encouraging a level playing field and believe that success is a matter less of personal choices than of community arrangements, where Europeanism ultimately embodies the intention to ensure that opportunity and wealth are equitably distributed.
Europe’s geopolitical opening move on artificial intelligence
I am, by no means or measure, a Europhile. Still, regulating free and dynamic markets could be considered Europe's most important weapon on the global geopolitical stage. The societal transition into the Information Age at the end of the 1990s has profoundly impacted the nature of geopolitics. This shift is primarily anchored in technological advancements, where artificial intelligence can be considered the pièce de résistance. Looking at recent years' cultural and political influences, it's clear that the control and dissemination of data and information are pivotal in shaping national narratives, influencing public perceptions, and spreading values, all of which are vital for achieving national goals and bolstering a country's global standing. In this context, managing information flow becomes critical in shaping and interpreting new viewpoints in geopolitics.
A nation skilled in this domain of information control sets the stage for significantly influencing global geopolitical dynamics, even spilling over into various doctrines of asymmetric and non-kinetic warfare.
In her book "The Age of Surveillance Capitalism," Shoshana Zuboff discusses how markets and states operate in tandem. Indeed, looking to the West, we have the United States as the undisputed hotbed of technology innovation, led by gargantuan tech companies such as Apple, Meta, Amazon, Microsoft, Google, and their recent addition, OpenAI. In the Far East, the US sees its status quo questioned by the revisionist state of the People's Republic of China, which shows a great sense of clarity regarding its objectives and political goals for 2030 [1], accelerating the rapid development of AI through its technology behemoths, such as Tencent, Baidu, ByteDance/TikTok, Alibaba, Huawei, JD.com, Hikvision, and many more, and aiming to cement the core values of socialism in AI systems.
Europe, traditionally late to the technology party and without any relevant tech companies of its own, sees its options as limited to taking the global stage as the world's advocate for strengthening the digital sovereignty of the state and protecting privacy in the digital space. The General Data Protection Regulation (GDPR) implemented by the European Union is an emblematic example. However, given the lack of "courage and fair-mindedness" to pursue a sovereign cloud and regulate American cloud providers under the EU antitrust law on digital markets, it seems Europe has kept its big guns holstered.
Still, from a loyalist point of view, one could argue that regulation is being used as an effective pawn in global geopolitics while remaining in accordance with Europe's values and political facets. So, Europe is relevant in today's race for artificial intelligence, albeit in a manner different from that of other world powers. Where you stand on the topic is mostly a matter of personal taste: do you appreciate innovation or regulation? Or is it?
A question of regulation versus innovation
Having been the theatre for some of history's bloodiest wars, Europe is obsessed with keeping perpetual peace, a mindset ill-adapted to turbulence in its societies and economies. From Merkel's blunders to the EU and NATO's lackluster response to Russia's invasion of Ukraine, Europe isn't equipped for conflict. It behaves like a peacetime CEO who prefers to protect the status quo, whereas a wartime CEO thrives in turbulent times. In the war for data and talent, the only weapon in Europe's arsenal is legislating innovative technologies and industries to keep the proverbial peace and the European markets stable (while introducing new monetary streams of taxes and fines).
Again, looking to the West, we see the United States' pioneer mindset transformed into a liberal, meritocratic interpretation of capitalism, with a pinch of effective altruism, as a push for the booming technology industry. As such, the US remains the most significant innovator in artificial intelligence, having introduced foundation models and OpenAI's ChatGPT, which demonstrated the world's fastest technology adoption rate by reaching over 100 million users in less than two months.
Clearly, there are two forces at work: those who innovate and those who regulate. While both approaches may have their merit, innovation always trumps regulation.
First, while innovators keep outrunning the markets, regulating bodies tend to play a perpetual game of catchup. US sociologist William F. Ogburn's concept of cultural lag introduces the notion that culture takes time to catch up with technological innovations, and that social problems may result from such lag. The Europeanist mindset therefore remains a massive handicap for innovation in artificial intelligence, taking a punitive stance towards the development of this technology. With the founding of Kyutai, and with Mistral, France's first AI unicorn, taking pole position as Europe's first artificial intelligence deep-tech player, it is refreshing to see how President Emmanuel Macron, head of Europe's second-largest economy, stimulates an innovation-oriented environment.
Back in 2018, and more recently during a speech at a startup conference in Toulouse, he warned that EU legislation designed to tackle the risks of artificial intelligence could hamper European tech companies compared to rivals in the US, UK, and China:
“We can decide to regulate much faster and much stronger than our major competitors. But we will regulate things that we will no longer produce or invent. This is never a good idea.”
Second, using regulations to encode morality in laws usually works in the short term, but no adversarial actor will adhere to them when push comes to shove. In the geopolitical game of chess, one plays the long game, even an intergenerational one. But when the stakes become high enough (and with artificial intelligence as the economic equivalent of a nuclear weapon, stakes are high), there will be an inflection point at which at least one actor or world power will actively circumvent, or ignore, Europe's regulatory red tape to achieve a competitive edge or, worse, aim for total economic domination. A sort of digitale Endlösung, if you will. Taking this to its most extreme manifestation: much like criminals don't consult legal codes to check whether they are about to commit a crime, adversarial and rogue states surely won't submit to Western laws, as they don't even legitimize them. The fact is that rolling out a moral stance on artificial intelligence in the global theatre is ultimately not a matter of law but a matter deeply rooted in culture.
Third, how a compliance checklist will prevent intelligent systems from doing stupid things is beyond me. The Act's rigid structure could limit its ability to adapt to new risks and opportunities in the AI landscape. Instead of a risk-based approach with strict guidelines, a principle-based approach could have been a better choice. A company can still build highly unethical or defective applications that comply with the AI Act. Whenever a law is implemented, people tend to forget over time why it was passed in the first place. And when the letter of the law trumps its spirit, both regulator and subject can play a costly game of hide-and-seek, finding and exploiting technicalities, loopholes, and ambiguous language. While large companies have legal and compliance teams in place to deal with this sort of expensive regulation, the cost of the AI Act will mainly hit smaller AI companies and, therefore, the speed and quality of innovation.
Ultimately, great innovators ask for forgiveness, not permission. The rules are bent, even broken, in the true spirit of innovation; Uber is a revealing recent example. A principle-based approach would also have pushed a cultural wave of morals and business ethics through economies instead of rigid laws. When a movement of effective altruism can cause a leadership debacle at OpenAI, then regardless of which side of the fence you are on, at least it means people are attentive to the moral duties and ethics of their work, a reflex that laws rarely instill. So, how much innovation and economic value have previous European acts added? And will the recent AI Act be any different? Perhaps it is relevant to ask whether the GDPR has created any unicorns so far. Hardly. While it kept the parasitic economic workers busy, it only pushed companies to divert their funds from R&D toward legal and compliance departments. Cecilia Bonefeld-Dahl, director-general of DigitalEurope, which represents the continent's technology sector, expressed this concern as follows:
We have a deal, but at what cost? We fully supported a risk-based approach based on the uses of AI, not the technology itself, but the last-minute attempt to regulate foundation models has turned this on its head (…) The new requirements — on top of other sweeping new laws like the Data Act — will take a lot of resources for companies to comply with, resources that will be spent on lawyers instead of hiring AI engineers
How I stopped worrying and learned to love artificial intelligence
In many ways, the invention of the printing press in the 15th century was met with the same skepticism, even fear, as artificial intelligence is today. People could suddenly read the Bible themselves without consulting a priest. The establishment opposed the wide use of the printing press because it would change the power structure. Just as the Catholic Church opposed the printing press, Europe risks over-regulating this technology out of a deep-rooted fear.
Perhaps more than tangible research, there is a lot of intangible fear, uncertainty, and doubt clouding the debate on artificial intelligence. This doesn't only influence policymakers; even prominent members of the AI industry cry wolf. But let's remember that even intelligent people may simply be seeking attention, can be quite naive, or let their emotions take over. If there is one thing the COVID-19 pandemic taught us, it is that humans of above-average intelligence are capable of displaying signs of severe dysrationalia and paranoia. Evolutionarily programmed to respond to negative consequences rather than positive outcomes, these AI-doomers are pitted against the AI-accelerationists, who hold an unwavering belief that AI is our ticket out of all the messes we have created on this planet: two opposing forces that both behave in a cult-like manner.
A popular theme is how AI is going to take over from humans. First of all, this is usually a vaguely stated opinion. Of course, AI will be deployed in most, if not all, areas where humans are active and will become our synthetic (and much smarter) companion. Yet humans have a penchant for giving AI systems human-like characteristics; we even claim that ChatGPT is hallucinating, implying that LLMs have sensory perceptions of things that aren't really there. It is, of course, hogwash to attribute such an illness to a word-predictor-on-steroids. We should avoid the mistake of anthropomorphizing AI.
Let's begin by stating that intelligence has nothing to do with the desire to dominate. This trait is seen in social species, driven by evolutionary pressures and power dynamics. By default, no matter how intelligent a system is, it will be subservient to humans. However, when humans set goals of world dominance and require advanced artificial intelligence to support them, we might understand why ethical, legal, and moral guardrails are essential in developing AI systems. Failure might trigger a global disintegration of trust, a cornerstone of society.
Since I started working with dual-use technology, I have been more afraid of human stupidity than of artificial intelligence, because no regulatory framework or moral code can prevent or control our stupidity and penchant for dominance. So, as with uranium, hacking tools, or anything on the Wassenaar Arrangement's control lists, I am not too worried about the technology. It's the humans I'm concerned about.
While artificial intelligence may ultimately mean the end of capitalism and might be a prelude to a transition into socialism, we will increasingly interact with the world through AI-powered interfaces and assistants. Microsoft, Apple, Alphabet, Meta, and other companies follow suit in making both hardware and software intelligent, essentially turning them into more efficient data-capturing devices. But are we comfortable with a small number of West Coast companies controlling super-intelligent systems capable of influencing a global culture? While American culture has influenced the world through Hollywood, no government will accept American culture permeating the lives of European citizens even further through said intelligent interfaces. Ultimately, I strongly advocate for open-source and non-proprietary AI systems, unlike Microsoft's AI proxy, OpenAI.
With that in mind, the world is looking to regulate AI in the most appropriate ways. However, during the 2023 AI Safety Summit in London, it became painfully clear how deeply divided countries still are on implementing such guardrails. Regardless of its potential impact on R&D programs, Europe's taking control of the narrative is therefore commendable in its own right, not to mention a bold move in claiming a first-mover advantage by digging a regulatory moat.
A final, personal thought
As a business leader and investor in artificial intelligence, I consider leadership qualities paramount for any organization. The capability to translate a vision into an actionable strategy that pushes an agenda of innovation, with the intent to add more value faster than a competitor, is likely the only reason for the series of successful exits in my career. Taking these experiences as a frame of reference to assess the European vision on artificial intelligence and geopolitics in general, I find the old continent doing what it does best: taking a pedantic stance as a self-proclaimed referee in a game it doesn't fully understand. If this were the Karen meme, Europe would continuously want to talk to the manager. In the short term, I hope that companies worldwide will continue to invest in artificial intelligence research instead of addressing the sensitivities of a self-entitled Karen. Clearly, Europe's ambition is to become the world's super-regulator of artificial intelligence, not its super-innovator.
Seen as such, the AI Act could become the blueprint for other governments and an alternative to the United States' light-touch approach and China's interim rules. When you further frame the regulatory efforts on artificial intelligence as a way for world powers to defend and promote cultural values globally, Europe might indeed have made an exciting opening move on the geopolitical chessboard.
But for now, and as an innovator, my hopes rest more than ever on France's political and industrial agenda to become Europe's AI powerhouse. And making such a bold statement as a Belgian, well, that's quite something.
[1] Generation Artificial Intelligence Development Plan (2017), https://d1y8sb8igg2f8e.cloudfront.net/documents/translation-fulltext-8.1.17.pdf