UK sets out principles for an ethical AI future

A UK House of Lords committee calls for ethics to be placed at the centre of artificial intelligence development, and for legal clarity on liability.


A report from the UK House of Lords select committee on artificial intelligence has called for ethics to take centre stage. The report, ‘AI in the UK: ready, willing and able?’, recommends the creation of a cross-sector AI Code to help mitigate the risks, and argues the UK is in a prime position to succeed in AI. Committee chair Lord Clement-Jones said: “The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem, as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.”

Five principles

The report argues that an ethical approach can ensure the public trusts AI technology and sees its benefits, and can also prepare people to challenge its misuse. In proposing an AI Code that could be adopted nationally and internationally, the committee set out five principles: AI should be developed for the common good and benefit of humanity; it should operate on principles of intelligibility and fairness; it should not be used to diminish the data rights or privacy of individuals, families or communities; all citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside AI; and the autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

AI liability law

The committee makes a number of recommendations. One key recommendation calls on the Law Commission to clarify whether existing liability law will be sufficient when AI systems malfunction or cause harm to users. The committee also says the government needs to draw up a national policy framework.

