The European Commission and the U.S. administration signed a landmark “administrative agreement on Artificial Intelligence for the Public Good” on Friday, Euractiv reports.
The agreement was finalized within the framework of the EU-US Trade and Technology Council (TTC), which was launched in 2021 as the permanent platform for transatlantic cooperation across several priority areas, including emerging technologies such as artificial intelligence (AI). The two sides have endorsed a common approach to critical aspects of AI, such as trustworthiness and risk management methods.
“Based on common values and interests, EU and U.S. researchers will join forces to develop societal applications of AI and will work with other international partners for a truly global impact,” said the EU’s Internal Market Commissioner, Thierry Breton.
As part of the agreement on their “joint AI roadmap,” the EU and U.S. partners have decided on shared research and development goals to address potential societal and climate challenges. The five priority areas for the research collaboration are extreme weather and climate forecasting, emergency response management, health and medicine improvements, electric grid optimization, and agriculture optimization. The partners will build joint models but will not share training datasets due to the lack of a legal framework. However, they will share the findings and resources with other international partners that share their values but lack the capacity to address these issues.
The EU-U.S. collaboration on AI marks a symbolic step forward, as Washington seems determined to establish its own standards before the EU finalizes the world’s first rulebook on AI applications. Earlier this month, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) published its Artificial Intelligence Risk Management Framework, which sets out guidelines for AI developers on mapping, measuring, and managing risks. This voluntary framework reflects the American non-binding approach to new technologies, whereas the EU is advancing work on the AI Act, horizontal legislation that would regulate all AI use cases based on their level of risk.
The EU’s AI Act is expected to be highly influential and—via the ‘Brussels effect’—may serve as an international standard for future AI regulation across the globe. It is no surprise, therefore, that the U.S. is interested in shaping it and is lobbying for more flexible rules on individual risk assessment. The publication of the U.S. framework comes at a critical time, just ahead of the EU’s interinstitutional negotiations, as lawmakers prepare to finalize their positions.
One of the AI Act’s primary goals is to set out technical requirements that ensure AI systems do not violate fundamental rights. The legislation will provide frameworks to ensure that commercial and public systems are trained on unbiased datasets (to reduce the risk of gender- or race-based discrimination) and will determine how much human oversight is required to keep the software in check.
Large tech companies—both European and global—were involved in the AI Act’s development last year, prompting critics to point to possible conflicts of interest at the legislation’s core. Artificial intelligence technologies are becoming massively profitable, with the global AI market estimated to surpass €1 trillion by 2029.