Keeping an “AI” on Product Safety and Liability: UK publishes report on Artificial Intelligence - Lexology

2022-05-28 | By Mr. Michael Ma


On 23 May 2022, the UK’s Office for Product Safety and Standards (“OPSS”) published a report on the impact of artificial intelligence (“AI”) on product safety (the “Report”). Cooley was asked to contribute insights to the Report, which examines the use of AI in consumer products and its implications for product safety and liability. The Report runs to over 100 pages, so this blog post provides an overview of its key points, including the safety benefits and challenges of incorporating AI into the design and manufacture of products.

What is an ‘AI product’?

AI is a broad term referring to technology that can sense its environment, take action in response and learn. In essence, AI seeks to replicate human problem-solving and decision-making abilities. In practice, the term “AI” is used to refer to a wide range of applications, from simple algorithms to machine learning. The Report draws a critical distinction between AI and automated products: whilst AI systems tend to evolve over time, trained on and constantly learning from the information they receive, automated products are pre-set and programmed to carry out a task in a pre-determined way.

The Report identifies a number of benefits AI might bring to consumer safety, including:

On the other hand, the Report also flags that AI can bring its own challenges:

The Report notes that while current UK product safety regulations can be applied to many existing AI consumer products, there are shortcomings, including:

The Report sets out a framework to aid consideration of the effects of AI on consumer product safety and liability. The framework highlights key characteristics of AI (mutability, opacity, data needs and autonomy) and identifies potential associated challenges. Its aim is to guide policymakers when evaluating and developing product safety and liability policy for AI consumer products. The considerations set out also provide a useful basis for product-related AI risk assessments by economic operators.

The Report explains that the hypothetical application of the UK’s product liability rules to AI products presents a challenge. It remains unclear how these rules would apply to AI products that can change how they operate after being placed on the market (e.g. through interaction with consumers and their data via machine learning). It is also uncertain to what extent manufacturers should be held liable for decisions made by an autonomous system, for damage that could not have been predicted, or where the large number of actors (including data providers or third-party platforms) involved in design and manufacture obscures the allocation of liability. AI also relies on complex algorithms that are opaque and can be difficult for third parties to understand – a further challenge to identifying the source of potential harm and attributing liability.

Approaches to tackling AI risks in consumer products

The Report notes that issues with consumer products may become more pronounced with advancements in AI. It discusses initiatives and tools that are already seeking to address related shortcomings in current UK laws on safety and liability:

Will the UK be the first to regulate AI?

While countries such as the UK have been hesitant to regulate AI products for fear of obstructing innovation, regulations introduced by first movers are likely to be influential (as was the case with the EU’s GDPR). The UK is not the only country considering the need for legislative change.

Despite the seismic impact of AI, both realised and potential, that the Report recognises, significant barriers to its adoption (cost, privacy and awareness) remain. Going forward, the Report advocates a more transparent approach to AI systems, with greater consideration given to the data used for training, testing and validation purposes. At the same time, it highlights the need for more regulation in this area to provide certainty for economic operators and consumers. The introduction of new regulation will need to be balanced against the need to foster innovation and must not duplicate, or cut across, existing legal frameworks, such as those relating to product safety and privacy. Whatever approach is taken, AI is a fast-developing field that may fundamentally change the product safety and liability landscape. Stay tuned for future updates.


Regulation (EU) 2016/679 - General Data Protection Regulation (GDPR)
