Innovation and opportunity: the UK’s National AI Strategy in review
Vision for innovation leaves unanswered questions concerning liability and ethical AI, writes a team from CMS, Holistic AI and UCL
The publication of the UK’s National AI Strategy represents a step-change in the national industrial, policy, regulatory and geo-strategic agenda.
Whereas AI has previously been discussed under the remit of other strategies, with this publication AI takes centre stage. This represents the concretisation and maturation of the perspectives of the various bodies and institutions tasked with addressing different dimensions of AI research, innovation, industry, policy and regulation. Although there is a multiplicity of threads to explore, in terms of actionable steps (‘short’, ‘medium’ and ‘long term’, across the verticals of ‘ecosystem’, ‘sectors/regions’ and ‘governance’), this text can be read primarily as a ‘signalling’ document.
Indeed, we read the National AI Strategy as a vision for innovation (research, SMEs) and opportunity (industry, economy) underpinned by a trust framework that has such innovation and opportunity at the forefront of any standard and regulatory framework. In response to this publication, we offer our initial thoughts and feedback on strategic points of contention in the Strategy as well as some main takeaways:
- The clearest signal from the National AI Strategy is that it places the ability to innovate at the forefront of its approach to AI.
- Indeed, research is discussed in the context of adoption (i.e. industry) and in terms of ‘catalytic contribution’ to national aims and challenges (such as in health).
- The skills agenda, which concerns vocation and ability rather than education alone, is two-pronged: facilitating talent from abroad (via visa provision) and building capability through national educational programmes.
- AI innovation is read into all other streams, for example as a vehicle for economic growth and global competitiveness. A pro-innovation stance underpins the governance agenda, which is explicitly discussed in terms of enabling innovation.
An alternative ecosystem of trust
- Although the UK’s Information Commissioner agreed on the adequacy for UK-EU data transfers quite early on in the Brexit negotiation process, the UK proposed a National Data Strategy that in some ways steps away from the European framework and inches closer to a US approach to privacy, one much more focused on economic outcomes and innovation. Given the stated ambition to be a global leader, this divergence sits uneasily with the aims of the Strategy.
- There is a delicate balancing act between incentivising innovation and indirectly encouraging isolationism and a retreat from being a trusted data custodian. The concern is that the UK will become isolated in its regulatory and innovation ecosystem, with industries instead choosing to conform to larger regulatory-market ecosystems.
- Alternatively, the UK could become an incubator for testing and innovation, thereby functioning as a launchpad, i.e. a more innovation-friendly space for start-ups and industry.
- There is the opportunity for the UK’s regulatory-market norms to become a preferred ecosystem for innovation and trust if the UK’s various influence mechanisms facilitate the emergence of a large regulatory-market ecosystem. However, to provide this level of assurance, the UK will need to have robust alternative frameworks in place and an accepted regulatory system.
Revision of data protection
- It has been an open question as to whether the UK would move towards a data protection regime that is ‘lighter’ than the EU approach. Although the talk is mainly of ‘revision’ and ‘review’, the signal is that the UK is indeed seeking to position itself as less stringent on data protection. The potential intent to minimise documentation requirements aligns with this shift.
- This shift will be enabled by opening access to data (including public data), data standardisation and the cyber-physical infrastructure support included in the Strategy. This is critical because access to sufficient, high-quality data is crucial to the development of AI. It is hoped that a more open data protection regime will expand the use of, and possibilities for, innovation in AI.
- The relationship between data protection and AI performance (how accurate a system is), fairness (how a system impacts people with respect to protected characteristics, such as race and religion) and transparency (how much explainability a system is said to have) is often a trade-off; securing a high level of data protection is likely to result in a diminished level of transparency.
EU misalignment: Atlanticism?
- A focus on innovation and economic advancement is continuously touted as a step away from the EU’s regulatory strategy. However, economic development is itself a key factor in the European approach to privacy: data protection is intended to regulate, and to enable, the processing of personal data where required and in a transparent manner.
- A potential move away from the European approach might have global implications, as the GDPR holds some sway outside the EU as well: businesses dealing with the bloc must adhere to its rules when handling Europeans’ data.
- This raises the question: is the National AI Strategy simply pro-innovation, or is it, in fact, a step back in terms of data protection rights? We invite a deeper discussion regarding innovation-enabling standards and regulation.
New approaches to AI development and data protection are there to be seized but require sophisticated analysis and resourcing to properly document and develop ethical AI innovation. Whilst this document sends important signals for fostering and growing innovation, achieving ethical innovation is a harder challenge and will require a carefully evolved framework built with appropriate expertise. We offer points of contention to stimulate the discussion further. These are:
- Signal not strategy: Whilst the document is advertised as a National Strategy for Artificial Intelligence, it reads more as a signalling of geopolitical and economic plans and how AI can support those, not as a strategy for the development of ethical, innovative and well-governed AI.
- Data flow continuity: We have some concerns around the continuity of data flows between the EU and UK. If some of the points of this strategy are implemented, it could contribute to further isolation of the UK in the international digital space.
- Sectors: While cross-sector regulation is a laudable ambition, it will be a difficult task to produce rules that are specific enough to provide clarity for sector-specific industries.
- Liability: There are some big questions in AI regulation relating to, for example, responsibility and liability. The Strategy does not make clear the likely positions to be reached on those.
- Method of engagement: The Strategy does not disclose the methodology for developing the regulation/rules or how stakeholders will be engaged in the process. To achieve this at a global level, significant expertise in diplomatic and subject matter competency would be required.
- Accountability to citizens: Clearly innovation brings societal benefits, but potentially at the expense of individual citizens’ rights if the frameworks that evolve are not ethical and robust. Any replacement system must retain the intrinsic concept of ‘privacy by design’, whilst potentially rebalancing and reframing the systems that achieve this.
Seizing these new approaches to AI development and data protection will require both interdisciplinary subject expertise and global diplomacy, along with a carefully evolved framework built with appropriate expertise.
This article was authored by Emre Kazim PhD, co-founder and COO of Holistic AI, a start-up focused on software for auditing and risk management of AI systems, Denise Almeida, a PhD candidate at UCL, Nigel Kingsman, a consultant for Holistic AI, Charles Kerrigan, partner and global head of fintech at CMS, Adriano Koshiyama, co-founder and CEO of Holistic AI, Elizabeth Lomas, associate professor at UCL, and Airlie Hillard, a researcher at Holistic AI