Washington DC: Governance and law is the key challenge for AI, according to the H2O.ai team
Data governance platform Immuta’s chief legal officer, Andrew Burt, and H2O.ai data scientist Patrick Hall have launched bnh.ai, a boutique law firm focused on artificial intelligence and analytics.
The Washington DC-based legal startup, which is being incubated by Immuta, will provide advice on the risks of AI as well as guidance on how companies can better harness the power of machine learning.
Managing partner Burt said, in a LinkedIn post: “After spending years working at the intersection of law and data science, I now believe the biggest challenge facing AI is no longer technical — it’s a problem of governance and law. If you’re adopting AI, the biggest obstacle you face is one of liability: what could go wrong, and how to mitigate it.”
Burt — who was previously a special adviser in the FBI’s cyber division — will continue in his role as chief legal officer at Immuta, while Hall will remain as senior director of product at H2O.ai, an AI-focused open source software company.
Hall, bnh.ai’s principal scientist, added, in a LinkedIn post: “I've been on the front lines of AI and machine learning adoption for years, first at SAS and now with H2O.ai. While I’ve seen great progress in mitigating opacity and discrimination risks in machine learning, the biggest challenges are now, in my view, matters of regulation, governance and law.”
Earlier this month, a Thomson Reuters survey of finance directors at the UK's top 100 law firms found that one in four believe AI and machine learning are among the biggest threats to their profitability.
And last month, UK firm Kennedys spun out its AI-powered claims management service into a separate company, Kennedys IQ.
The European Commission also last month launched a white paper on AI regulation, exploring whether current legislation adequately addresses the risks around AI, or whether revisions to existing law or new legislation are needed.
That followed a report from the UK Committee on Standards in Public Life that advised against setting up a dedicated AI regulator but warned that existing regulators need to do more to understand the risks AI poses to certain sectors.