Simmons & Simmons and Fountain Court advise on 'world first' AI explainability statement

Medtech platform Healthily provides ‘non-technical’ explanation of how AI makes decisions
The ICO has confirmed the statement is the first to receive consideration from a regulator (image: Jarretera; Shutterstock)

Digital healthcare company Healthily has teamed up with experts from Simmons & Simmons and Fountain Court Chambers to launch what is claimed to be the world’s first artificial intelligence (AI) explainability statement. 

The first-of-its-kind statement provides a ‘non-technical’ explanation, aimed at customers, regulators and the wider public, of why the London-based company’s AI is being used, how the AI system was designed and how it operates. 

Prepared by a team of experts including Simmons & Simmons managing associate and global AI lead Minesh Tanna and Fountain Court Chambers barrister Jacob Turner, with support from advisory firm Best Practice AI, the explainability statement details the processes by which Healthily’s medical AI platform reaches automated decisions when producing triage recommendations for its users. 

Tanna, who joined Simmons & Simmons from Herbert Smith Freehills in 2013, currently serves as chair of the Society for Computers and Law’s AI group. Turner, meanwhile, advises governments, regulators and businesses on AI regulation and has garnered significant experience handling AI and data protection matters as a Fountain Court barrister. 

Explainability statements, a spokesperson for Healthily said, allow companies to comply with global best practices and AI principles as well as binding legislation, including several articles of the General Data Protection Regulation (GDPR). 

The statement was submitted for review and comments to the Information Commissioner's Office (ICO), which confirmed the statement is the first to receive consideration from a regulator. 

Founded in 2013, Healthily is an AI healthcare platform registered as a Class I medical device that allows users to manage their wellbeing through its proprietary app, Smart Symptom Checker, which provides users with medical-grade information approved by its in-house clinical advisory board. 

‘Arguably, the most important area of AI in which explainability is key is healthcare,’ Healthily CEO Matteo Berlucchi said in a blog post. ‘If an AI is going to decide if you need to see a doctor or if your X-ray is suspicious, it must do so on the basis of full transparency and clear explainability.’

Ridesharing companies Uber and Ola, as well as Italian food delivery company Foodinho, have all been at the centre of EU debates surrounding the lack of transparency in their AI systems in recent months, with the issue attracting mounting regulatory and public scrutiny as AI becomes more commonplace across different sectors. 

‘Businesses need to understand that AI explainability statements will be a critical part of rolling out AI systems that retain the necessary levels of public trust,’ said Simon Greenman, partner at Best Practice AI. ‘We are proud to have worked with Healthily and the ICO to have started this journey.’

In July, South Africa became the first country to award a patent that names an artificial intelligence as its inventor and the AI’s owner as the patent's owner.

Email your news and story ideas to: [email protected]