EU ministers on Tuesday signed off on a landmark law that sets out rules for using artificial intelligence in situations and sectors that are deemed particularly sensitive.
Under the law, AI systems used in areas such as law enforcement and employment will have to demonstrate that they are adequately transparent and accurate, comply with cybersecurity standards and meet criteria regarding the quality of the data used to train them.
The vote by EU countries came two months after the European Parliament backed the AI legislation.
The bloc's law goes well beyond the voluntary compliance approach to AI taken in the US, for example, and could set a precedent for regulation across the globe.
What is the law and how will it be implemented?
The law stipulates that systems meant for use in "high-risk" situations will have to obtain certification from approved bodies before they can be put on the EU market.
Such situations include those where use of AI could potentially harm health, safety, fundamental rights, the environment, democracy, elections and the rule of law.
Systems such as the social credit scoring used in China will be banned outright, as will biometric categorization systems based on people's religious or other worldviews, sexual orientation and race.
The final law generally bans real-time facial recognition in CCTV surveillance, but makes exceptions for law enforcement uses such as finding missing persons or victims of kidnapping, preventing human trafficking, or identifying suspects in serious criminal cases.
AI systems deemed to pose limited risks will be subject to much lighter transparency requirements; among other obligations, they will have to disclose that their content is AI-generated so that people can decide how much to trust it.
A new "AI Office" within the European Commission is to be established to make sure the law is enforced at EU level.
The law must now be signed by the presidents of the EU legislature and then published in the EU's Official Journal. It formally enters into force 20 days later, but most of its provisions won't apply until two years after that.
Companies make safety commitments
In another sign of widespread concern surrounding the potential damage that could be caused by AI, more than a dozen of the world's leading AI firms made fresh safety commitments at a global summit in Seoul on Tuesday, according to the UK government.
"These commitments ensure the world's leading AI companies will provide transparency and accountability on their plans to develop safe AI," UK Prime Minister Rishi Sunak said in a statement released by Britain's Department for Science, Innovation and Technology.
The companies included Alphabet's Google, Meta, Microsoft and OpenAI as well as others from China, South Korea and the United Arab Emirates.
The summit was hosted by South Korea and Britain.
tj/wmr (dpa, Reuters)