
THE BIRTH OF AI GLOBAL GOVERNANCE

Work here reviewed:

Tallberg, J., et al. (2023). The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research. International Studies Review, 25(3).

There will come a time when we look back and ask each other what we were doing in the fast-paced year of 2023, when humanity first tried to regulate the phenomenon that will define the future of civilization (or the lack thereof). This autumn of 2023 is no doubt a pivotal moment in our evolution, a watershed instant opening onto a delta of unimaginable scientific and social advances, or onto the dire straits of catastrophe and suffering. That we should be able, in our grey years, to reflect on things past at all would be good news indeed, for according to some we may now be carelessly welcoming the fearful angel of our own extinction.

Yes, there are some doomsayers, but since they are not the angry eremites of the Middle Ages, but rather those clean, vegan Californian entrepreneurs who ride the coming wave, we are paying attention.

Law is a latecomer by definition: it tries to regulate situations after they emerge. There is always a temporal hiatus between the maturity of the factual hypothesis and the adoption of a norm that attaches a legal consequence to it. AI has been around for a while, but it took a model like ChatGPT arriving on everyone's laptop for lawmakers to become suddenly aware of its potential, and its perils. Since the stakes are perceived as too high to allow for serene legal theorizing, legislative bodies everywhere are now hurrying to regulate AI.

The usual suspect is the nation-state, perhaps the very political agent that risks the most[1] with AI; but alongside it, and with a swiftness and momentum never before witnessed in the history of law, there have been calls for international governance systems, and we seem to be on the verge of instituting some of them shortly. There are few precedents for this: it took two world wars and some 25 years to set up a working attempt at a general, worldwide governance body (the UN, 1945; the League of Nations being an obvious flop); from Hiroshima and Nagasaki to the Non-Proliferation Treaty coming into effect (1970), another 25 years; consensus on climate change was reached somewhere around the 1980s, yet the Kyoto Protocol arrived only in 1997, and the more workable Paris Agreement 18 years after that (2015), and so on.

So it is interesting, or rather pressing, to investigate how this dynamic AI global governance movement is hatching as we speak.

The excellent scholarly article reviewed here does just that. In making the case for a research agenda on the global governance of AI, Tallberg et al. posit a bifid approach: an empirical one (what the actual structure of the AI global governance movement is now) and a normative one (what axiological standards that regulation bears, set against two chosen values: justice and democracy).

In their empirical analysis, the authors thoroughly explore both every AI global governance effort[2] and each applicable socio-political theory. They arrive at a convincing conclusion: the architecture of AI global governance is best described as a regime complex[3], “a structure of partially overlapping and diverse governance arrangements without a clearly defined central institution or hierarchy”[4]. The list is long and growing: the UN, UNESCO, the G7, the EU (rather a quasi-national structure here), the Group of Governmental Experts (GGE) within the Convention on Certain Conventional Weapons (CCW), now the AI Safety Summit, and so on. There is, thus, an overlapping, multi-layered web of fora with a plethora of stakeholders, interests, and areas of focus.

The sprouting of a regime complex in AI global governance is somewhat correlative to the depiction of AI as a “general-purpose technology”, or “omni-use” technology, like electricity (Suleyman, 2023). We usually pay no mind to it, but AI is going to be everywhere, all the time, so it makes sense that it is not treated as a single portent. From a governance perspective, the AI embedded in ordinary objects (the AI of things) cannot be treated the same as the AI governing autonomous weapons, or as a mechanism able to gather private intelligence all on its own. Different interests, different stakeholders, different bodies of governance.

Seen this way, the AI global governance regime complex is not a random emanation, but one that corresponds to the structure of the thing governed, which is omnipresent and takes many forms: most AI will need only national-level governance, some even mere industry-level standards, whereas the most dangerous kinds will, in principle, require global regulation (AI safety, military AI).

The problem with this regime complex structure, and with its tenet (match the level of governance to each type of AI), is dissemination, that is, the containment problem. We do not really understand how AI works at a deep level, so our conceptual boundaries may not be, in practice, its boundaries. At all. To put it bluntly: we do not know whether the AI governing home appliances will gain critical (knowledge) mass and conquer more sensitive areas. First we take the Nespresso machine, then the nuclear missile silo[5].

Some of these concerns are already visible in regulations: for instance, Executive Order 14110 singles out for special scrutiny, among all possible AI technologies, dual-use foundation models. Their definition is both qualitative and quantitative, but the reason for such a detailed safeguard is these models' potential to overcome their native limitations and then pose a strategic threat. The US Government wants to know beforehand whether a given model may be able to make that jump.
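To make the quantitative side of that definition concrete, here is a minimal sketch in Python. The 10^26 total training operations reporting threshold is the figure stated in EO 14110; the ~6·N·D compute approximation is a common heuristic from the machine-learning literature, not part of the Order, and the model figures in the example are hypothetical.

```python
# A minimal sketch, not part of the reviewed article: it checks a model's
# estimated training compute against the 10**26-operations reporting
# threshold that Executive Order 14110 sets for dual-use foundation models.
# The model figures in the example below are hypothetical.

EO_14110_THRESHOLD_OPS = 1e26  # total integer/floating-point training operations


def estimated_training_ops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training compute via the common ~6 * N * D approximation
    (about six operations per parameter per training token)."""
    return 6.0 * n_parameters * n_training_tokens


def reporting_triggered(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the compute estimate crosses the EO 14110 threshold."""
    return estimated_training_ops(n_parameters, n_training_tokens) >= EO_14110_THRESHOLD_OPS


# Hypothetical frontier model: 1e12 parameters trained on 15e12 tokens.
ops = estimated_training_ops(1e12, 15e12)
print(f"Estimated training compute: {ops:.1e} ops")                       # 9.0e+25 ops
print("EO 14110 reporting triggered:", reporting_triggered(1e12, 15e12))  # False, just under
```

The point of the quantitative prong is precisely this kind of bright-line test: a regulator can ask for the training figures in advance, before any qualitative judgment about the model's capabilities is even possible.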

The authors then explore the normative analysis of AI governance, which is further refined, in a distinction I find interesting, into “process” and “outcome”. According to the authors, the usual approach in AI governance tends to favour the latter, focusing on identifying potential problems in AI and then putting governance mechanisms in place to tackle them. It should be complemented with a “process” dimension. Take justice, for instance: as an outcome, justice concerns how benefits and burdens are distributed, whereas justice as a procedural value relates to the fairness of the processes by which decisions are made in AI governance.

However, as recent governance summits have proved, the set of values we wish to instil in AI governance is far from universally shared; in fact, I wonder whether certain nascent AI powers would agree to an AI values agenda that includes justice and democracy.

I would like to end this short reflection on this very interesting paper by echoing the authors' view that it is imperative we shift from “AI ethics in general” to a normative, governance-based approach. That is, to Law.

Las Palmas, 20 December 2023

Bibliography

Alter, K. J., & Meunier, S. (2009). The Politics of International Regime Complexity. Perspectives on Politics, 7(1), 13–24.

Tallberg, J., et al. (2023). The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research. International Studies Review, 25(3).

Marcus, G., & Reuel, A. (2023, April 18). The world needs an international agency for artificial intelligence. The Economist. Retrieved 27 November 2023, from https://www.economist.com/by-invitation/2023/04/18/the-world-needs-an-international-agency-for-artificial-intelligence-say-two-ai-experts

Suleyman, M. (2023). The Coming Wave. London: Penguin.

[1] As it is a technology that spills over its borders and may endanger the state's very raison d'être (Suleyman, 2023).

[2] The article leaves out only the Bletchley Park AI Safety Summit, as it was published before the summit took place. It would be up to the authors to assess it properly, but in my opinion it would not alter their findings as described here.

[3] Alter & Meunier (2009).

[4] Tallberg et al. (2023), p. 7.

[5] We may find this possibility laughable, but remember the Stuxnet incident: malware targeting the firmware of Siemens industrial controllers managed to destroy centrifuges at an Iranian nuclear enrichment plant, an event as unconfirmed by all the governments involved (Israel, the USA and Iran) as one would expect.
