Risky uses of artificial intelligence that threaten people’s safety or rights, such as live facial scanning, should be banned or tightly controlled, European Union officials said Wednesday as they outlined an ambitious package of proposed regulations to rein in the rapidly expanding technology.
The draft regulations from the EU’s executive commission include rules for applications deemed high risk, such as AI systems used to filter school, job or loan applicants. They would also ban artificial intelligence outright in a few cases considered too risky, such as government “social scoring” systems that judge people based on their behavior.
The proposals are the 27-nation bloc’s latest move to maintain its role as the world’s standard-bearer for technology regulation, as it tries to keep up with the world’s two big tech superpowers, the U.S. and China. EU officials say they are taking a four-level “risk-based approach” that seeks to balance important rights such as privacy against the need to encourage innovation.
“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the European Commission’s executive vice president for the digital age, said in a statement. “By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.”
To be sure, the draft rules have a long way to go before they take effect. They need to be reviewed by the European Parliament and the European Council and could be amended in a process that could take several years, though officials declined to give a specific timeframe.
Previous EU tech regulation efforts have been far reaching and influential, earning it a reputation as a pioneer. Vestager, also the bloc’s competition chief, filed aggressive antitrust challenges against Silicon Valley giants like Google years before such action became fashionable. The EU was also early to the data privacy battle with stringent rules known as General Data Protection Regulation, or GDPR, that became the de facto global standard.
However, results have been mixed: Google still retains its online dominance and EU privacy cases against global tech companies are backed up. Officials are also working on updating the EU’s digital rulebook to protect internet users from harmful material or rogue traders.
Under the AI proposals, unacceptable uses would also include manipulating behavior, exploiting children’s vulnerabilities or using subliminal techniques.
“It can be a case where a toy uses voice systems to manipulate a child into doing something dangerous,” Vestager told a media briefing. “Such uses have no place in Europe and therefore we propose to ban them.”
The proposals include a prohibition in principle on controversial “remote biometric identification,” such as the use of live facial recognition to pick people out of crowds in real time, because “there is no room for mass surveillance in our society,” Vestager said.
There will, however, be an exception for narrowly defined law enforcement purposes such as searching for a missing child or a wanted person or preventing a terror attack. But some EU lawmakers and digital rights groups want the carve-out removed over fears it could be used by authorities to justify widespread future use of the technology, which they say is intrusive and inaccurate.
Biometric and mass surveillance technology “in our public spaces undermines our freedom and threatens our open societies,” said Patrick Breyer, an EU Pirate party lawmaker. “We cannot allow the discrimination of certain groups of people and the false incrimination of countless individuals by these technologies.”
Other AI applications are considered high risk because they “interfere with important aspects of our lives,” Vestager said, including criminal courts, law enforcement, critical infrastructure such as transportation — think software for self-driving cars — and management of migration, asylum and border control. But their use is allowed provided operators follow rules including using high quality data to minimize discrimination and having a human in charge.
Herbert Swaniker, a technology lawyer at law firm Clifford Chance, compared the proposals to GDPR, which affects companies worldwide.
“With GDPR, we saw the EU’s rules reach every corner of the world and apply pressure on countries globally to reach a new international gold standard,” Swaniker said. “We can expect this too for AI regulation. This is just the beginning.”
The draft regulations also cover AI applications that pose “limited risk,” such as chatbots, which should be labeled so people know they are interacting with a machine. Most AI applications, such as email spam filters, will be unaffected or covered by existing consumer protection rules, officials said.
To help develop standards and enforce the rules, which would apply to anyone providing an AI system in the EU or using one that affects people in the bloc, the commission proposes setting up a European Artificial Intelligence Board.
Violations could result in fines of up to 30 million euros (more than $36 million), or for companies, up to 6% of their global annual revenue, whichever is higher, although Vestager said authorities would first ask providers to fix their AI products or remove them from the market.
EU officials, trying to catch up with the Chinese and American tech industries, said the rules would encourage the industry’s growth by raising trust in artificial intelligence systems and by introducing legal clarity for companies.