With a disclaimer
Warning/Disclaimer: this comparison was powered by ChatGPT, then corrected by the author, so it may be wrong twice over.
It is a snapshot, but hopefully enough to distinguish the three models and, in particular, how each one reflects its jurisdiction's values and legal principles.
Regulatory level
- USA: Executive Order.
- EU: Act project (the proposed AI Act).
- China: Interim Administrative Measures for Generative AI Services and TC260 standards.

Scope
- USA: Focused on the safe, secure, and trustworthy development and use of AI across various sectors.
- EU: Broad, covering AI systems and their risks, with a specific focus on high-risk AI.
- China: Mainly targets generative AI services accessible to the general public within China.

Key principles
- USA: Safety and security, responsible innovation, support for American workers, equity and civil rights, consumer protection, privacy and civil liberties, government use of AI, global leadership.
- EU: Safety, transparency, accountability, human oversight of AI systems, respect for privacy, robustness, and security.
- China: Emphasis on AI governance, training-data requirements, tagging and labelling standards, data-protection protocols, safeguarding user rights, and content moderation.

Content moderation
- USA: Not explicitly mentioned; the Order focuses on overarching principles and the safety of AI systems.
- EU: Requires transparency measures for AI systems, especially for high-risk AI.
- China: Specific rules for content moderation, tagging unsafe content, and aligning with national policies and third-party complaints.

Data protection and privacy
- USA: Emphasizes the need to protect privacy and civil liberties.
- EU: Strong emphasis on data governance, privacy, and protection, aligned with the GDPR.
- China: Providers must not collect unnecessary personal information and must protect users' data; specific rules govern government data requests and the use of training-data sources.

Training data requirements
- USA: Not explicitly covered by the Executive Order, but the volume of training data triggers stronger controls, and the Order insists that results must not violate its principles.
- EU: Mandates high-quality data sets to avoid bias, especially for high-risk AI systems.
- China: Requires sourcing data and foundation models from legitimate sources, respecting intellectual-property rights, and processing personal information with appropriate consent or another legal basis.

Governance and oversight
- USA: Calls for a coordinated, Federal-Government-wide approach to AI governance; non-regulatory in itself, it nevertheless sets the regulatory agenda.
- EU: Establishes a legal framework for AI, requiring compliance with EU standards and oversight by designated authorities.
- China: AI services with public-opinion or social-mobilization attributes must undergo security assessment and algorithm filing.

Transparency and human oversight
- USA: Stresses that AI should reflect the principles of its creators and users.
- EU: Strong emphasis on human oversight, ensuring AI systems are transparent and their actions can be understood and controlled by humans.
- China: The standards propose measures for subtle censorship and AI moderation.

Global impact and collaboration
- USA: Aims for global leadership and collaboration on AI safety and security principles.
- EU: Seeks to set a global standard for AI regulation, influencing international norms.
- China: Though primarily focused on domestic regulation, the standards could influence global AI practices.

Innovation and development focus
- USA: Supports innovation and the responsible use of AI across various sectors.
- EU: Balances innovation with regulation, especially for high-risk AI, to ensure safety and ethical use.
- China: Balances AI industry development with innovation and security; specific clauses were added to promote innovation in generative AI.

Implementation and compliance
- USA: Guidelines and best practices for AI safety and security will be developed, but compliance specifics are not detailed.
- EU: Proposes a detailed legal framework with clear compliance mechanisms and penalties for non-compliance, especially for high-risk AI.
- China: The TC260 standards are detailed but not legally binding; they are nevertheless expected to shape future laws and to be treated as binding by companies and regulators. The Interim Measures are just that: interim, not the final word.

The military/intelligence exception
- USA: Orders the development of a National Security Memorandum to ensure the military and intelligence community use AI safely, ethically, and effectively, although some articles explicitly exempt that community.
- EU: The AI Act largely exempts AI systems developed or used for military purposes, as that domain belongs to the member states.
- China: Official positions stress adherence to law and ethics in military AI applications, compliance with international humanitarian law, and the need for human control and manageability of AI technologies, but no actual regulation is available.
Las Palmas, 18/11/23.