National projects of AI regulation.
Since Nick Bostrom wrote his foundational “Superintelligence” roughly a decade ago[1], specialists first, and the rest of humanity later, have been weighing the benefits that artificial intelligence could bring about alongside the dangers it could entail. The former are obvious: enormous gains in productivity[2], the resolution of enigmas hitherto intractable for science, and even the expansion of the anthroposphere, since the human being could project himself beyond the spatial limits imposed by his painful mortal condition. These are but a few examples.
AI threats, however, are more subject to speculation and, of course, debate. They can be ranked in order of probability. For instance, it is taken for granted that AI will transform the labour market, and lists and reports of the occupations that will disappear with the full deployment of AI are already circulating on the Internet[3]. It is possible, but avoidable, for AI to handle personal data in ways that violate some of our most cherished and universal protections. But what makes our hair stand on end is that AI may end up constituting an “extinction event” for humanity: an event perhaps unlikely, but of such magnitude that, we all reason, no precaution seems excessive.
There are two scenarios in which AI could pose a threat to human existence.
The most prominent and fabled is the “Skynet” moment[4], the one in which AI becomes aware of itself, decides that we are an inferior and expendable form of life, perhaps even a threat to its existence, and terminates us. It is a remote eventuality[5]: on the one hand, as Roger Penrose argued[6], it will be difficult to create a non-biological consciousness comparable to the human one (and we hardly understand the biological one); on the other hand, the scenario rests on a typically anthropomorphic inference, since we extrapolate phenomena such as the will or the instinct for survival, which are natively biological (evolutionary) and thus alien to structures of a different order, such as AI.
The second scenario we should fear is more mundane and real: AI is a powerful technology, very difficult to control, and in the wrong hands it can be lethal. Imagine the military (nuclear, for example) use of AI by a state like North Korea or by some rogue organization.
The heart of the problem is the so-called “superintelligence”: the moment when AI reaches, with or without consciousness, levels beyond our control or, more importantly, beyond our ability to comprehend[7].
It’s not science fiction, I’m afraid: we know how to set in motion AI’s most successful theoretical models (large language models, for instance) and roughly how they work. But there is an obscure area that has come to be called the “explainability” or “transparency” problem: in neural networks, data and representations are linked in ways that we do not fully understand. It is AI’s black box.
Given the magnitude of the threats, it is reasonable to postulate that the regulatory mechanism should be international and, as has been advocated, should lead to the institution of a control and sanctioning authority similar to the UN’s International Atomic Energy Agency.
So far, it has not turned out that way. The legal regulation of AI is still incipient (some jurisdictions are still at the draft stage), fragmentary (it does not cover all the issues described above) and national (each jurisdiction has followed its own path).
Indeed, at the time of writing, only a few of the major jurisdictions have rules in place[8], and even these do not cover the full spectrum of legally relevant AI issues. Most have opted for generic “guidelines” of an ethical nature, hoping that the details will be settled by a kind of self-regulation[9], or remain at the design stage (“blueprints”). In the United States, the main technological player in AI, state-level regulations[10] coexist with ethical recommendations from the White House[11], so for now the country seems oriented towards the self-regulation model.
Spain, obviously, rests on the EU framework[12], which seems to be aimed at a binding and all-encompassing rule that, if adopted, and given the prestige and weight of the EU in world trade, could naturally impose itself horizontally across the world, as happened with the GDPR. The EU is not, perhaps, the most technologically cutting-edge player in AI (today, the US, China and the UK are), but it has a great capacity to set standards.
After adopting the Ethics Guidelines for Trustworthy AI in 2019, the EU has worked hard to bring forward bloc-wide legislation, which crystallized in the Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM/2021/206), currently being studied and negotiated in Parliament.
It is not the aim of this article to analyze the proposed EU regulation in detail (there will be time for that) but to offer an overview of comparative regulation. Across jurisdictions, a list of major common regulatory fields emerges, a sort of regulatory agenda, which is as follows:
Data protection and privacy.
Transparency (crucial, given the “black box” problem).
Non-discrimination and equality.
Security and robustness (to prevent malicious attacks).
Responsibility and accountability (this is going to be a big battleground).
Supervision and governance (in principle, as has been said, specific to each jurisdiction, though it is quite possible that supranational bodies will be devised by means of treaties).
Ethics and human rights.
This is the tip of the iceberg. The reader will surely understand that, beneath it, and propelled by the latent geopolitical contest for future supremacy, there are probably non-normative activities in this field that we are unaware of and may never get to know. Note, for example, that the agenda does not currently include the regulation of military uses of AI; that will be reserved for specialized commissions subject to official secrecy and national security laws. Secrecy and AI: that combination does generate some distress.
Las Palmas, 26 September 2023.
Works Cited
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Brown, T. B., et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33.
Cameron, J. (Director). (1991). Terminator 2: Judgment Day [Motion Picture].
Goldman Sachs. (2023, April 5). Generative AI could raise global GDP by 7%. Retrieved from https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html
López de Mántaras, R. (2023, February 14). [Video]. Fundación Ramón Areces. Retrieved from https://www.youtube.com/watch?v=1EA7ZejUcJQ&ab_channel=FundacionAreces
McCarthy, J. (1960). Programs with common sense. In Proceedings of the Teddington Conference on the Mechanization of Thought Processes (pp. 75-91). Her Majesty’s Stationery Office.
Minsky, M. (1975). A framework for representing knowledge. In P. H. Winston (Ed.), The Psychology of Computer Vision (pp. 211-277). McGraw-Hill.
Mnih, V., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533.
Penrose, R. (1989). The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford University Press.
Pineau, J., et al. (2003). Towards robotic assistants in nursing homes: Challenges and results. Robotics and Autonomous Systems, 42(3-4), 271-281.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533-536.
Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.
Watkins, C. J. C. H., & Dayan, P. (1992). Q-learning. Machine Learning, 8(3-4), 279-292.
World Economic Forum. (2023, May 5). These are the jobs that will be lost and created because of Artificial Intelligence. Retrieved June 2023, from https://es.weforum.org/agenda/2023/05/estos-son-los-trabajos-que-se-perderan-y-se-crearan-a-causa-de-la-ia/
[1] (Bostrom, 2014). He wasn’t the first, of course. A progression of the core AI works, roughly one per decade, would be: (Turing, 1950), (McCarthy, 1960), (Minsky, 1975), (Rumelhart, 1986), (Watkins, 1992), (Pineau, 2003), (Mnih, 2015), (Brown, 2020).
[2] It could, it is claimed, raise world GDP by 7%. Source: (Goldman Sachs, 2023).
[3] See, among others, (World Economic Forum, 2023).
[4] After the movie Terminator 2 (Cameron, 1991).
[5] The great Spanish AI specialist López de Mántaras points out the distance that still separates, as of June 2023, artificial intelligence from human intelligence: the AI in use today is a specific intelligence, not a general one (it lacks “common sense”), and it is unable to clearly distinguish between correlation and cause-and-effect relationships (López de Mántaras, 2023).
[6] (Penrose, 1989)
[7] That phenomenon, a superintelligent AI, has been compared to the singularities of theoretical physics: events in which the known rules of physics may no longer apply.
[8] China and Japan. See a breakdown of the state of global regulation in https://www.taylorwessing.com/en/interface/2023/ai—are-we-getting-the-balance-between-regulation-and-innovation-right/ai-regulation-around-the-world.
[9] Notably, the United Kingdom.
[10] For a list, see https://www.ncsl.org/technology-and-communication/artificial-intelligence-2023-legislation.
[11] https://www.whitehouse.gov/ostp/ai-bill-of-rights/#applying.
[12] Spain is, however, one of the most active member states: in June 2022, in coordination with the Commission, it launched the first AI regulatory “sandbox” in the EU, with an experimental purpose and the aspiration of transferring its results to other member states.