UK government advisory body rules out setting up an AI regulator
Committee on Standards in Public Life warns of dangers posed by AI but doesn't want a 'new, shiny' regulator
An influential UK government advisory body has come out against the establishment of a dedicated regulator for artificial intelligence (AI) but warned that existing regulators need to do more to understand the challenges it poses.
The UK Committee on Standards in Public Life published its much-anticipated report on AI and public life yesterday, endorsing a key expert's view that a "new, shiny" regulator is not necessary.
However, the report, Artificial Intelligence and Public Standards: A Review by the Committee on Standards in Public Life, raises concerns that public bodies are introducing AI 'without a clear understanding' of legal requirements while all regulators must 'adapt to the challenges that AI poses to their specific sectors'.
In evidence to the committee, Helen Margetts, professor of society and the internet at Oxford University and director of the public policy programme at The Alan Turing Institute, captured the committee's reasoning.
She said: “People often say ‘Let’s have a new regulator. Let’s have a new, shiny one.’ Actually, there is a lot of expertise already in the regulators because they are having to deal with this kind of thing in markets which they are there to regulate. We ought to build on that and use the expertise we have got.”
The report makes eight recommendations to government, national bodies and regulators to help create a strong and coherent governance and regulatory framework for AI in the public sector.
It argues that ethical principles and guidance should be promoted and made clearer, and that all public sector organisations should publish statements on how their use of AI complies with relevant laws and regulations before AI systems are deployed in public service delivery.
The Centre for Data Ethics & Innovation (CDEI) quickly took to social media to praise the report, tweeting that the report offers 'a really well thought-through set of recommendations'.
CDEI is an independent public sector body established to advise government on AI and other data-driven technologies and tasked to help develop the right regulation and governance.
The committee's report endorsed the UK government’s intention 'to establish CDEI as an independent, statutory body that will advise government and regulators in this area'. However, it warned the government to 'act swiftly to clarify the overall purpose of CDEI before setting it on an independent statutory footing'.
The Committee makes a further seven recommendations to front-line providers of public services to help establish effective risk-based governance for the use of AI. These include consciously tackling issues of bias and discrimination, monitoring and evaluating AI systems to ensure they always operate as intended, and establishing oversight mechanisms that allow for AI systems to be properly scrutinised.
Former MI5 chief Lord Evans of Weardale, chair of the committee, noted that "on the issues of transparency and data bias in particular, there is an urgent need for practical guidance and enforceable regulation".
In her evidence, Karen Yeung, interdisciplinary professorial fellow in law, ethics and informatics, University of Birmingham Law School and School of Computer Science, warned about the danger of bias within AI-powered technology.
She said it was "not adequate to employ technical legal arguments to 'cobble together' an 'implicit' lawful basis, given that the power, scale and intrusiveness of these technologies create serious threats to the rights and freedoms of individuals, and to the collective foundations of our democratic freedoms".
Further reading on AI
Law firm investment in automation lagging behind other sectors, survey finds
Email your news and story ideas to: firstname.lastname@example.org