Open Access article published by De Gruyter on May 13, 2022, under a CC BY 4.0 license

Response of the European Law Institute to the Public Consultation on Civil Liability – Adapting Liability Rules to the Digital Age and Artificial Intelligence

  • Bernhard A Koch, Jean-Sébastien Borghetti, Piotr Machnikowski, Pascal Pichonnaz, Teresa Rodríguez de las Heras Ballell, Christian Twigg-Flesner and Christiane Wendehorst

I Introduction

The authors welcome the opportunity to respond to the public consultation of the European Commission on ‘Civil Liability – Adapting Liability Rules to the Digital Age and Artificial Intelligence’.[1] In this survey, the Commission first asks for feedback on whether and how to improve the Product Liability Directive (PLD),[2] and in a second part seeks to ‘collect information on the need and possible ways to address issues related specifically to damage caused by Artificial Intelligence systems’. As the introduction to the call rightly states, the latter concerns both the PLD and other civil liability rules of the Member States.

Before addressing the questions raised in the questionnaire, it is important to stress that a debate about a possible further approximation of the tort laws of the Member States has to proceed from a set of starting points:

  1. It is imperative to first assess the current laws of civil liability in the Member States in order to identify the options and chances for any harmonisation project. In particular, the already existing options for victims of defective products not yet covered by the PLD or of Artificial Intelligence (AI) systems to claim compensation for their losses need to be taken into account, not only with respect to the bases of such claims, but also regarding the potential outcome, ie which heads of damage will be compensable at all in the Member States, under which conditions, and to what extent. The less likely victims are to be indemnified at present, the more they will turn to any alternative regime proposed by the EU legislator. The opposite is equally true, however, as could be witnessed particularly in the first decades after the entry into force of the PLD, when its regime was widely ignored in practice in most Member States due to other alternatives for victims already in place. One thereby also has to consider that at least some victims may benefit from contractual claims or from some alternative compensation regime (eg a social insurance regime) rather than having to seek recourse in a tort law regime.

  2. One should also bear in mind that any improvement of the position of either claimant or defendant in a possible tort law action that only applies to certain specifically denominated risks needs to be justified inter alia with an eye to equal treatment of tort law victims in general. After all, why should, for example, those harmed by a self-driving car face less of a challenge to collect compensation than those injured by traditional motor vehicles? This is not to claim that there are no such arguments, but these have to be explained when designing a future liability regime specifically for autonomous vehicles. A specific regime for AI may therefore play a more significant role for any recourse the person sued by the direct victim would seek from the producer. This might entail rethinking the traditional primary claim against the manufacturer of any goods.

  3. Product safety and other administrative law rules also need to be taken into consideration. The more and the better proper conduct is defined, for example, the less need there is for adjustments to, or deviations from, some existing fault-based regime, since the burden of proving fault might be less of a challenge for potential victims in such cases. Also, the more precisely technical requirements are defined ex ante, the more likely claimants will succeed in proving a defect of a product, thereby reducing the need to reverse or otherwise adjust the burden of proof in this regard.

  4. At the same time, however, the interoperability of devices and the ongoing exchange of data between different AI devices may render it more difficult to identify the origin of any defect. The systemic impracticality of identifying a single cause of harm in a group of interconnected potential sources may require reconsidering the most suitable way to allocate the ensuing losses.

  5. On a more general note, policy arguments as to whether tort law is indeed the proper place to proceed with legislative action have to be considered. Promoting confidence in some new technology, for example, may at least not exclusively be achieved by providing potential victims ex ante with easier paths towards compensation (apart from the fact that facilitating claims for compensation may be perceived as counter-intuitive for a technology that is commonly associated with a reduction of risks).

  6. Finally, attention should be drawn to the fact that the debate about liability for AI and other emerging digital technologies has, so far, almost exclusively been focussed on traditional safety risks, such as personal injury or property damage. AI-specific risks, which the response to the public consultation on the White Paper on AI[3] called ‘social risks’ and which the Proposal for an Artificial Intelligence Act (AIA) now refers to as ‘fundamental rights risks’, are usually not discussed in the context of tort liability. This concerns risks such as discrimination, manipulation or exploitation, for example when an AI-based recruitment system used for profiling job applicants is biased and discriminatory.

II Reform of the Product Liability Directive (PLD)

In this part of our response, we focus specifically on the need to reform the current product liability regime. The PLD was adopted in 1985 following the debate on a first draft published in 1976. The Directive was subsequently amended in 1999 to extend its reach to primary agricultural produce in the wake of the BSE health crisis.

The PLD has otherwise remained unchanged since it was adopted in 1985. However, there have been numerous developments in the way products are manufactured and distributed, as well as with regard to the nature of products themselves. The rapid development of digital technology, the integration of physical goods into the digital environment through embedded digital content such as software, and the growth of the market in smart products and the Internet of Things (IoT) have had a significant impact.

The current consultation is an important step towards the much-needed modernisation of the liability regime established by the PLD. The need for this is well documented in expert reports such as the Final Report of the New Technologies Formation of the Expert Group on Liability for New Technologies (in the following: NTF Report),[4] resolutions of the European Parliament,[5] and the Commission’s reports.[6]

There are multiple dimensions to a reform of the PLD.

  1. Not a merely economical perspective: the PLD itself dates from a period when the drafting style of EU legislation was very different and more ‘economical’ than is common now, with many provisions expressed in quite broad terms. There is certainly a need for a more coherent dogmatic approach.

  2. A need to better incorporate the PLD into the existing acquis: the PLD was adopted at a time when there was hardly any EU legislation in related areas, in particular consumer protection. Today, the acquis communautaire is much more detailed and complex, and it will therefore be necessary to review the terminology and concepts used in a revised product liability regime as well as its alignment with other relevant measures (such as the directives on consumer sales and digital content and services). The role of the General Product Safety Directive should also be better integrated into these reflections.

  3. Products often include digital elements: the PLD predates the digital age, and so there will, as a minimum, be a need to acknowledge that the product liability regime already extends to physical products incorporating digital elements. However, the question is whether it should also apply to ‘purely’ digital products and if so, to which types thereof (eg software only or also media files or map data).

  4. A need to delineate the PLD from liability for AI: furthermore, there may be a need to develop a liability regime in respect of at least some applications involving AI. This could be an aspect of a modernised product liability regime to replace the current PLD, but at least for risks not covered by the PLD regime, it might be preferable to develop a liability regime tailored to AI systems. Depending upon the latter’s scope, the interaction between a new product liability regime and such an AI liability regime would have to be considered.

The European Law Institute (ELI) has already made a general contribution to the discussions on a reform of the PLD through its Innovation Paper: Guiding Principles for Updating the Product Liability Directive for the Digital Age (2021, hereinafter the ELI Innovation Paper).[7] Our response to the consultation reaffirms the views expressed in the ELI Innovation Paper, but supplements them with comments on reforms which are needed irrespective of any extension to the digital environment or AI.

A General aspects

Before turning to specific comments, it has to be stressed that any modernisation of the PLD regime should not abandon the core principles upon which the current PLD is based. Thus, any new regime should continue to offer a clear and straightforward mechanism through which a person who has suffered harm (personal injury or damage to their property) can seek compensation (see Guiding Principle 1 of the ELI Innovation Paper). The current PLD provides that an individual who has been injured or whose property has been damaged by a defective product is able to claim compensation by proving that a product was defective within the meaning of the PLD and that the defect caused the injury or damage complained of. A producer is held strictly liable on this basis.[8] The concept of a ‘producer’ extends to other parties in the supply chain,[9] notably an importer into the EU, together with a fall-back option if a producer in this extended sense is not identifiable. Where there are several possible defendants, they are jointly and severally liable, and the injured person can claim against any of them (with national law providing recourse options for the person thus held liable).

This possibility for an injured person to seek compensation from a range of parties must continue to be a feature of a revised product liability regime, although the parties which could be held liable might be open to review.[10] We expand on this issue below.

Furthermore, the ELI Innovation Paper stressed that any product liability system needs to strike a workable balance between, on the one hand, a sufficiently high level of protection of individuals to ensure that any harm suffered by them is appropriately compensated, and, on the other hand, maintaining an environment which encourages innovation (Guiding Principle 2 of the ELI Innovation Paper). If this balance tilts too far in favour of protection of individuals, it could have a chilling effect on innovation and utilisation of digital technology; if it tilts too far towards innovation, it could damage consumer confidence and trust in digital technology and affect its potential for economic exploitation.

Also, as already noted, a revised product liability regime has to be co-ordinated with measures in cognate areas of law (Guiding Principle 3 of the ELI Innovation Paper), in particular with the General Product Safety regime (currently Directive 2001/95/EC, hereinafter ‘GPSD’) or the Sale of Goods Directive (SGD).[11] This has not yet been considered to the extent necessary. Under the GPSD, producers may only put ‘safe’ products on the market. The civil liability consequences of placing an unsafe product on the market are within the scope of the PLD. Key definitions, such as ‘producer’, ‘safe’ and ‘defective’ are not co-ordinated at present, which produces undesired results and a high degree of uncertainty as to the applicable provisions.

A person who has bought a defective item which causes personal injury and/or property damage can claim under the PLD, but in respect of the defective item itself, that person has to claim against a different party (the contractual seller) under a different regime (non-conformity of the goods with the contract). Conversely, the SGD is concerned with remedies in respect of non-conforming goods, but this excludes a claim for damages, including consequential losses, caused by the non-conformity, leaving this to national law. This is but one example of where a better co-ordination between the product liability regime and the consumer sales regime[12] is required.

B Specific matters

1 Definition of ‘product’

Guiding Principle 4 of the ELI Innovation Paper has suggested that the definition of ‘product’ currently used in the PLD should be updated to cover: (i) the combination of goods with digital elements; and (ii) digital content and certain digital services supplied as ‘digital products’.

a (Tangible) products with digital elements

Currently, the PLD applies to all movables (and electricity),[13] including those which are installed in an immovable. Furthermore, whilst the PLD is not expressly clear on this point, products which, at the time they are put into circulation, incorporate digital elements in order to perform their functions are also commonly assumed to be already within the scope of the definition of ‘product’. In particular, where operating software is installed on a physical item (such as a domestic appliance), it is clearly a component part whose flaws may render the item in which it is pre-installed defective if the item for that reason fails to meet the safety expectations of art 6 PLD.

However, at present the question remains whether the developer of such software can be sued directly by the victim as a ‘manufacturer of a component part’ within the meaning of art 3(1) PLD. This should be expressly answered in the affirmative, also with respect to the defence of art 7(f).[14]

It is further unclear at present what impact subsequent updates of the software have on the defectiveness of a product where the original version of such software was pre-installed when it was put into circulation. It is even less certain whether software essential for the operation of a (tangible) good but provided by a third-party supplier independent of the distribution of the good may render the latter defective within the meaning of the PLD.

(1) Updates to pre-installed digital content after the product was put into circulation

Where pre-installed digital content is updated regularly, typically over the air and therefore without any (additional) tangible data carrier, it could be argued that a flaw in the (now updated) digital content is no longer a problem linked to the physical item itself on which the update was installed. Any uncertainty as to whether physical items incorporating digital content – especially those which rely on regular updates and/or on interaction with a digital service – remain covered by the PLD should be removed, however, by revising the definition of ‘product’ accordingly and by adjusting the PLD to the consequences of updates after the time the physical item was put into circulation. The victim should not be burdened with proving whether a flaw in the product resulted from its original state or from a subsequent update to pre-installed software.[15]

The ELI Innovation Paper has suggested that art 2(5)(b) SGD, which corresponds to art 2(3) Digital Content Directive (DCD),[16] could serve as a model. These provisions define ‘goods with digital elements’ as ‘any tangible movable items that incorporate or are inter-connected with digital content or a digital service in such a way that the absence of that digital content or digital service would prevent the goods from performing their functions’. Consistent with the aim of better coordination between different measures in cognate areas, this definition could be the basis for extending the definition of ‘product’ currently used in the PLD.

(2) Updates to digital content first installed after the product was put into circulation

In line with the SGD,[17] the point in time when the digital content offered by the manufacturer or one of its affiliates was installed – whether before or after putting the tangible item into circulation – should be irrelevant with respect to the liability of the item’s manufacturer as long as its functionality depends upon this digital content. If the manufacturer of a smart good, for example, requires its user to download and install software from its (or any affiliate’s) website before the good can be used as advertised, the fact that such software was not pre-installed should not lead to a different outcome as to the product liability analysis. However, this would have to be considered when redefining the notion of a product as well as the implications of the (so far crucial) moment when the (tangible) product was put into circulation.

(3) Products with digital elements provided by a person other than the product’s manufacturer

Aligning the language of the PLD with the above provisions of the SGD triggers the follow-up question, however, of whether the manufacturer of goods with digital elements should also be held strictly liable for digital content provided by some third party if such digital content is essential for the proper functioning of the goods. Let us take the example of the smart watch given in Recital 15 SGD:

‘In such a case, the watch itself would be considered to be the good with digital elements, which can perform its functions only with an application that is provided under the sales contract but has to be downloaded by the consumer onto a smart phone; the application would then be the inter-connected digital element. This should also apply if the incorporated or inter-connected digital content or digital service is not supplied by the seller itself but is supplied ... by a third party.’

If the software is essential to use the primary functions of the smart watch, and the latter causes harm within the meaning of the PLD because of a flaw in the software, the victim should not be required to sue both the producer of the hardware and the developer of the software; instead – fully in line with the idea of channelling liability that has underpinned the PLD from the beginning – the victim should be able to sue the producer of the smart watch for compensation, who in turn may then have to seek recourse from the software developer. In most cases, the two will already be linked by a contractual relationship anyhow, which could provide for the internal distribution of potential losses.

(4) Drawing the line to (digital) services

Services within the meaning of art 3(5), in particular lit a, of the DCD,[18] as well as other services in which the personal element (the action) predominates over the material element (the result) should clearly be excluded from the scope of the PLD. Some continuous services would still have to be addressed if they are linked to and required for the functioning of the product, such as regular updates provided by the manufacturer, or by someone affiliated to the latter. This in turn requires adjustments to the trigger of liability and defences available to the producer, since the key moment of putting the product into circulation will not be relevant for such subsequent updates (or will be but to a lesser extent).

b Standalone digital content

(1) Software

A further extension should be considered to include wholly digital products within a revised product liability regime. Individuals regularly acquire digital content separately from any tangible items, such as apps installed on tablets or smartphones. Such purely digital products could also cause personal injury or damage to property. For this reason, it is important to broaden the definition of ‘product’ to include not only (tangible) products with digital elements, but also purely ‘digital products’,[19] ie digital content as defined in the DCD. Restricting it to the former would result in inconsistencies if software were recognised as a component but not as a standalone product.

The definition of ‘digital content’ is, however, very broad and could cover a wide range of content. It would be possible to limit the scope of the revised product liability regime to certain types of digital content, such as functional digital content (ie software and apps).[20]

(2) The special case of software-as-a-service (SaaS)

If product liability should be extended to include digital products such as software, providers of SaaS should also be held strictly liable for defects thereof, even though such software is not sold via a one-time transaction (subject to updates during at least a certain period thereafter; cf art 7(3) SGD and art 8(2) DCD). Functionally, the provision of SaaS is very similar to the sale of software with regular updates when it comes to the role of the developer (manufacturer). Also, the safety expectations as regards SaaS are most likely the same as, or at least similar to, those regarding software distributed more traditionally. It would be difficult to explain why the developer of software that is sold to users who receive regular updates should be treated differently from the provider of SaaS.

(3) The special case of AI

In line with the ELI Innovation Paper, we propose that a reformed PLD should extend to personal injury or harm caused by AI. This can already be achieved by further extending the definition of ‘product’ to software, which includes artificial intelligence systems.[21]

However, such extension will be subject to the built-in limitations of the PLD regime, and thus only apply to losses covered by it and not, for example, to pure economic loss or to purely emotional harm. Also, the other requirements of liability will equally apply, such as the need for the victim to prove a defect in the AI as the cause of damage, as will the defences available to the developer of the algorithm.[22]

(4) Digital data

The regular supply of digital data required for the operation of a product (such as map data used for GPS systems) should not be attributed via the PLD to the original manufacturer of goods which use such data.[23] It is open to debate, however, whether the providers of such information should be held strictly liable under an (expanded) PLD for each data instalment put into circulation. While the definition of compensable harm may already reduce the scope of such an extension in practice, as most likely will the notion of reasonable safety expectations regarding such data, it is at present highly disputed whether flawed information as such[24] should trigger product liability.[25]

We suggest such data should not be covered by the PLD, in particular as such services will typically be rendered on a contractual basis, offering at least parties within the protective scope of such contracts the possibility to seek compensation on that basis.

c Refurbished products

The mere repair of a product is a service and already for that reason not a manufacturing process, as the person providing such a service does not create something new that is intended for distribution thereafter. However, the refurbishment of a product (re-)creates a used product and makes it ready to re-enter the chain of distribution. In the case of a repair, the person who receives the product typically was the individual who initiated the repair process in the first place; it returns, therefore, to the person who commissioned the work, who can compare the state of the product before and after the repair, and who will typically be able to seek contractual remedies for flaws in the repair work. A refurbished product, on the other hand, does not return to its previous owner, but is sold to a third party after its functions and other qualities were restored to the extent possible, and the refurbisher is the one who distributes it in the course of their business. From a buyer’s perspective, the refurbisher is no different from the producer of a new product – both are expected to have been in control of the safety features of the product before greenlighting it for circulation.[26]

Refurbished products should therefore be included in the definition of a ‘product’ within the meaning of the PLD, but this should be stated expressly in order to avoid any misunderstandings in this regard. This would, in particular, require an adjustment of the list of liable persons, as argued below.[27]

d Pharmaceuticals

In connection with the ‘product’ issue, various stakeholders have raised the question whether pharmaceuticals should remain under the PLD, or if a new specific instrument should apply to them instead. It is certainly the case that pharmaceuticals are specific products in many respects and that this specificity had not yet been clearly identified at the time of the PLD’s adoption. The abundant case law on pharmaceuticals that has developed since then has shown that these products can raise specific challenges as regards, for example, defects (the definition of which at art 6 PLD may be more difficult to apply to pharmaceuticals), proof of causation, or the ten-year long-stop period.[28]

However, carving pharmaceuticals out of the PLD and creating a special instrument for pharmaceutical liability raises obvious problems. Not only would it likely be very difficult from a political point of view to provide for a specific instrument limited to pharmaceuticals, but the question of whether other products should also be the subject of a specific regime – starting with AI – would inevitably be raised. Furthermore, it would in fact call into question one implied but major policy choice behind the PLD, namely that all products should be covered by one and the same regime.

Rather than removing pharmaceuticals from the scope of the PLD, a better solution seems, therefore, to adapt or modify the PLD, where needed, to accommodate the specificity of pharmaceuticals. As regards defects and proof of causation, the Boston Scientific Medizintechnik and Sanofi Pasteur cases, decided by the ECJ,[29] have shown that the PLD is probably flexible enough to deal adequately with pharmaceuticals. However, if a reversal of the burden of proving a defect is to be considered for certain products or circumstances, then the opportunity to extend this mechanism to pharmaceuticals, at least in some cases, should be discussed.

As to the most problematic aspect of the PLD with regard to pharmaceuticals, see below on the ten-year long-stop period of art 11 PLD.[30]

2 Persons liable – the notion of ‘producer’

Guiding Principle 5 of the ELI Innovation Paper contends that the category of persons liable towards an individual should be revised to better reflect the various actors commonly involved in the supply of products, including products with digital elements and wholly digital products. Inspiration can be drawn, eg, from art 2(30)-(35) of the Medical Device Regulation (MDR)[31] as well as from art 3(8)-(13) of the Market Surveillance Regulation (MSR)[32] and more recently art 3(7)-(13) of the proposed General Product Safety Regulation (GPSR).[33] In this context, one should note that the MDR fully and explicitly includes software and provides for a broad range of special rules with regard to software.

a Refurbisher

Art 2(30) MDR extends the notion of ‘manufacturer’ to persons ‘fully refurbishing’ a device, while art 17(2) MDR extends liability under the PLD to the ‘reprocessor’ of a device. For all products, art 12 of the proposed GPSR provides that a person other than the manufacturer who ‘substantially modifies’ the product shall be considered a manufacturer and shall be subject to the obligations of the manufacturer for the part of the product affected by the modification or for the entire product if the substantial modification has an impact on its safety. These extensions of the notion of ‘manufacturer’ and of a manufacturer’s liability should also be reflected in a revised PLD.

The notion of a ‘producer’ should therefore be expanded to include those who return a used product to the market after restoring the features which are advertised and as such define the safety expectations.

Such a definition would not only have to draw the line against providers of mere repair services, but also against mere sellers of used items who – unlike the refurbisher – do not alter the used product by restoring its original functionality and who do not thereby generate trust in its safety. Also, the question of recourse between the refurbisher and the manufacturer of the original product would have to be expressly addressed. However, the refurbisher should not be able to avail himself of a defence that the defect already existed in the original product or before it entered his own sphere, as the relationship between the refurbisher and the original producer is comparable to that between the manufacturer of a finished product and the contributors of raw materials or components.

b Providers of updates to digital elements

Going one step further, and considering that many products nowadays depend on ongoing support by way of software updates and a broad range of digital services, it has been proposed to introduce, in the context of an operator’s liability, the notion of a ‘backend operator’,[34] ie the person who continuously defines safety-relevant features of a product by providing essential and ongoing backend support.

This person may or may not be identical with the manufacturer of the final product, but he is usually identical with the manufacturer of relevant digital elements of the final product. Given that the safety of the product is, to a large extent, controlled by this person, he should be included in the list of addressees of liability, mainly by adopting rules that are similar to those introduced by the SGD and the DCD for certain contractual aspects.

While this person is often at least affiliated with the manufacturer of the finished product into which the software is incorporated when put into circulation, the interplay between these two will also have to be defined, in particular whether the latter shall continue to be liable vis-à-vis the victim for a product whose subsequent updates are provided by another (arguably a person providing a ‘digital component’ of the original finished product). One way to address this could be to hold the manufacturer of the finished product liable also for defects triggering harm after such updates, but to provide for a defence comparable to art 7(b) PLD (or to clarify that the latter applies to such cases as well). However, in the interest of the victim, this defence could also be expressly excluded, leaving the question of whether the product was made defective by an update to the redress action between the two producers.

c Online marketplaces

Recital 35 of the MDR suggests that, already now, the (mandatory) ‘authorised representative’ within the Union of a manufacturer located outside the Union shall be held jointly and severally liable, together with the manufacturer, under the PLD. Article 20 of the proposed GPSR includes a range of obligations for ‘online marketplaces’, complementing the obligations potentially following from the MSR and in the future from the Digital Services Act (DSA).[35] So far, obligations for online marketplaces are merely diligence obligations, while art 5(3) of the proposed DSA opens the door a crack for the introduction of direct liability, but only where the product or information in question appeared to originate from the online marketplace itself (see in this regard the ELI Model Rules on Online Platforms[36] and its more far-reaching approach in art 20 on liability of a platform operator with predominant influence). The ELI suggests considering whether the idea underlying art 20 of its Model Rules on Online Platforms can be made operational for the PLD and extended to non-contractual liability, at least for cases where an online marketplace enables end users to import products into the Union from a supplier established outside the Union.

3 The notion of ‘defect’

Guiding Principle 6 of the ELI Innovation Paper contends that the notion of ‘defect’ should be amended to reflect the particular features of digital products and products with digital elements. Currently, a product is defective under the PLD if it did not ‘provide the safety which a person is entitled to expect, taking all circumstances into account’.[37] Circumstances mentioned expressly in the PLD are: the presentation of the product, the uses to which the product could reasonably be expected to be put, and the time when the product was put into circulation (with the proviso that the mere fact that a ‘better product’ is subsequently put into circulation does not render earlier products defective[38]).

These factors work well with tangible products, as is the case under the PLD,[39] and with those which are supplied at a single, fixed point in time. However, neither products which rely on digital elements to perform their functions nor purely digital products fit this model. In particular, such products will require regular updates (whether to improve functionality, fix bugs or deal with security issues). This might render the proviso that the later availability of ‘better products’ is irrelevant no longer workable.

Furthermore, the PLD repeatedly points to a product being ‘put into circulation’. This made sense when the PLD was adopted and when it was possible to identify a single moment when this occurred. In the case of products with digital elements and digital products, this no longer holds. Continuous monitoring and updating, particularly of digital elements, mean that the responsibility of the producer can extend well beyond the point when the product was initially put into circulation.

4 Revisions to the notion of ‘damage’

At present, the PLD covers two types of damage: (i) death or personal injury; and (ii) damage to, or destruction of, an ‘item of property’ if that item was intended, and actually used, for private use/consumption. While we recommend maintaining this limitation in principle and leaving it to the Member States to provide for the indemnification of pure economic loss or standalone emotional harm, for example, certain adjustments to the delimitations of compensable harm under the PLD may nevertheless be necessary.

a Limitation to products used for private purposes?

The limitation to ‘items of property’ intended and actually used for private purposes no longer seems appropriate. It is increasingly difficult to maintain a dividing line between personal and professional use. Products are increasingly used for mixed purposes, just as the distinction between an individual acting in a professional or non-professional capacity is also no longer as clear as it once was. One can see this in the increased use of 3D-printing and of ‘hobbyists’ engaging in some commercial activity.[40] Digital technology and changes in the labour market increasingly blur the boundaries between professional and personal activities, whether that be in the fact that goods are used for ‘mixed purposes’ or the fact that individuals may be acting as ‘prosumers’. In other areas of EU law, this difficulty has already been taken into account; for instance, in dealing with ‘mixed purposes’, an individual would be acting as a consumer where the professional purpose is negligible,[41] or at least ‘so limited as not to be predominant’.[42]

b Damage to digital elements and data

Going beyond this general point of modernisation, the ELI Innovation Paper has argued that the notion of damage might be reviewed to include damage to digital elements and data. Today, damage might not only be caused to individuals and physical property, but also to digital items and data created by an individual, whether stored on a physical device or on a digital service (cloud). The growth of digital assets and tokens adds a further dimension.

A revision to the notion of ‘damage’ is required to ensure that a reformed product liability regime adequately covers the novel features of the digital era. In particular, loss of data should be brought within the scope of ‘damage’, as should damage to other digital content. This requires a careful definition of what exactly is recoverable for such damage, as this may be a loss of a mere copy of a still existing master copy or damage to the latter, in which case intellectual property issues as well as questions regarding the compensability of emotional harm (eg in the case of lost digital pictures) may arise. At least the latter should not be addressed by the PLD, however, but left to the Member States, similar to other instances of non-pecuniary loss.

c Non-pecuniary loss

Article 9 PLD already now foresees that it ‘shall be without prejudice to national provisions relating to non-material damage.’ This abstention from regulating non-pecuniary loss in the PLD should be preserved in light of substantial differences in the Member States, both with regard to the compensability of such damage as well as to the assessment of compensation. However, it should be made explicitly clear that the PLD regime only extends to consequential non-pecuniary losses such as pain and suffering triggered by bodily injury, and not to stand-alone immaterial harm such as purely emotional distress.

5 Burden of proof

The PLD requires the person who has suffered harm to show: (i) that the product in question was defective; and (ii) that this defect caused the harm suffered.

a Burden of proving defectiveness

Guiding Principle 8 of the ELI Innovation Paper has noted that this burden may become more difficult to overcome in the case of a defect in a product with a digital element or a purely digital product. If one takes a product with digital elements, a defect could be the result of a problem with the digital element or with the physical item. However, at least as long as the digital content is pre-installed and distributed together with the physical item, the manufacturer of the latter will undoubtedly be liable under the PLD as it stands, irrespective of whether the defectiveness of the product stems from its digital or non-digital components. After all, the victim only needs to prove the defectiveness of the finished product without specifying which part thereof was flawed, let alone what caused the defect.

Matters are more complex, though, in the case of an Internet of Things (IoT) system. Here, the interaction of multiple physical items and digital elements (both incorporated and stand-alone, but often from different producers) increases the difficulty for an individual to prove which element was defective, even though the separate components may have been designed by their producers to be combined and to interact (eg in the case of hardware parts which are designed according to specifications of an – often separate – developer of the operating system).

The ELI Innovation Paper has argued that, rather than burdening a claimant with the need to identify the precise element that was defective, it should suffice for an individual to establish that the combination of physical products and digital elements/products caused the damage and was therefore defective. However, this should only be the case if the producers of these digital and non-digital components are linked by contracts amongst themselves, as was argued by the NTF Report in its Key Findings 29 and 30, defining ‘commercial and technological units’. Key Finding 30 suggested the following aspects to be considered when identifying such a unit:

‘(a) any joint or coordinated marketing of the different elements;

(b) the degree of their technical interdependency and interoperation; and

(c) the degree of specificity or exclusivity of their combination.’

The ELI Innovation Paper has suggested that this could be adopted in the present context. Therefore, in the case of ‘commercial and technological units’, it would suffice if an individual showed that the unit as a whole was defective. All those participating by providing elements to such a unit could be held jointly and severally liable as suggested by Key Finding 29 of the NTF Report.

In addition to a definition of ‘commercial and technological units’ as suggested by the NTF Report,[43] art 4 PLD could be expanded by another sentence stating that ‘[I]n the case of a commercial and technological unit, the injured person shall be required to prove that the unit was defective and caused damage.’

A further challenge in the digital context is that a common feature of both products with digital elements and of IoT systems is a reliance on external data to determine how the product or system operates. Where such data is supplied from external sources, proving both defectiveness and a causal link with the injury or damage sustained becomes very difficult indeed. Externally-supplied data could cause a product or system to malfunction. The ELI Innovation Paper has argued that an individual should not be burdened with having to rule out the relevance of external data; instead, the producer should have the burden of proving that it was not the product/system itself that led to the injury or damage but that instead externally-supplied data did. In effect, therefore, this would mean introducing a new defence expanding art 7(b) PLD to that end, allowing the producer to evade liability where it can prove that external data rather than a defect attributable to its own sphere was responsible for the injury or damage and that the product was not reasonably expected to filter out or overcome such flawed data.

b Burden of proving causation

As has been discussed elsewhere,[44] Member States currently address questions of causation differently, starting already with procedural questions such as the standard of proof or the laws and practice of evidence. In areas of tort law where causation is not self-evident, to say the least – in particular where the technologies addressed by the consultation are involved – such diversity will necessarily impact upon the outcome of cases in the absence of any modifications aimed at approximating the various regimes.

This does not necessarily call for an all-encompassing interference with the existing rules of the various national regimes, however.

To begin with, certain existing procedural differences will not be within the reach of harmonisation in the first place. Sometimes, these national peculiarities may even come close to reversing the burden of proof, though.[45]

In product liability cases, the victim need not show what exactly within the product triggered his loss, let alone identify a particular stage of the design or manufacturing process that went wrong. However, he at least needs to prove that the product as a whole failed to meet reasonable safety expectations, ie was defective, and that this particular defect was the true cause of his loss. It is therefore not the mere involvement of the product that triggers liability, but a particular quality (or rather lack of quality) that counts. Defining industry (safety) standards ex ante could already help the victim along the way, if only by determining what exactly the expected outcome of a product’s use in practice should be. It may be obvious that a lawn mower is not supposed to cut off body parts of its user, but it may not be equally self-evident what to expect from the application of some AI algorithm.

One radical way to address problems of proving causation, though, is to merely shift the burden of proof to the opposing party, as suggested as one option (with variations) by the consultation in both of its sections. This, however, often amounts to determining the outcome of the case, or at least the answer to the causation question: whatever is difficult for the claimant to prove will pose similar challenges for the defendant required to disprove it. This instrument therefore has to be chosen cautiously, in particular in light of possible equality concerns, if other victims of similar harm do not benefit from comparable advantages. One (though presumably not an exclusive) justification for taking that step may be that the person onto whom the burden of proof is shifted has substantially more insight into the facts of the case, in particular into the processes linked to the application of the technology which ultimately appear to have resulted in harm.

If such a reversal of the burden of proving causation should be the instrument of choice, it would therefore have to be applied cautiously and with certain specific limitations, eg to technologies with distinct features. Less drastic measures could include requirements for the defendant to produce log files or similar data that originate from within its sphere or are at least in its control or expertise, which could then be used by the claimant to proceed in proving his case. Failure to submit such data by the defendant could also explicitly lead to a presumption that the data, if it had been produced, would have proven helpful for the claimant, ie support the latter’s claim.[46] This could be made even more effective if a duty to log such information were imposed, although the feasibility of such an obligation depends upon the particular aspects of the technology concerned. In cases of mass production, even releasing claims histories could already prove helpful to the victim in order to identify patterns of defectiveness.

6 Defences

Above, we argued that a new defence might be required for instances where the malfunction of a product was due to externally-supplied data. In addition, reforms are needed to several of the other defences currently found in the PLD, as discussed in the ELI Innovation Paper.

a Art 7(b) PLD

First, the defence in art 7(b) PLD allows a producer to show that the defect did not exist at the time the product was put into circulation or only came into being afterwards. As noted earlier, products with digital elements or digital products which are subject to regular updating do not fit the idea of a product being ‘put into circulation’. One option would be to amend the defence to clarify its application in the context of goods with digital elements and, if the revised regime were to extend to purely digital products, also to such products. As long as the original manufacturer supplies the updates, the defence should only exonerate the manufacturer if it can prove that the defect neither existed at the moment when the original product was put into circulation nor when the most recent update was provided.

Art 7(b) PLD should therefore be adjusted along the lines of:

‘The producer shall not be liable as a result of this Directive if he proves: ...

(b) that, having regard to the circumstances, it is probable that the defect which caused the damage did not exist at the time when the product was put into circulation or updated by him or by an affiliated provider, or that this defect came into being after such moment; ...’

b Art 7(e) PLD

If the development risk defence of art 7(e) PLD should be maintained at all, not only would the moment when the product was put into circulation have to be adjusted along the lines just mentioned, but the ‘state of scientific and technical knowledge’ would also have to be specified, considering the vast expansion of (particularly online) available information as compared to the late 1970s when the PLD was first drafted.

c Art 7(f) PLD

Article 7(f) PLD provides a defence for the manufacturer of a component where a defect is due to the design of the overall product into which the component has been fitted or to the instructions given by the manufacturer of the product. If purely digital products are recognised as falling within the scope of the PLD, their developer could also be held liable as a component manufacturer directly by the victim. In such case, the defence of art 7(f) PLD should also be available to this producer of a purely digital part of the finished product.

‘The producer shall not be liable as a result of this Directive if he proves:

(f) in the case of a manufacturer of a component or the developer of software incorporated into another product, that the defect is attributable to the design of the product in which the component has been fitted or the software installed, or to the instructions given by the manufacturer of that product into which the component was subsequently incorporated, irrespective of whether that product was distributed as finished or itself incorporated as a component into another product.’

7 Long-stop period of art 11 PLD

Perhaps the most problematic aspect of the PLD, certainly as regards pharmaceuticals, is the ten-year long-stop period of art 11. Some notorious examples of defective pharmaceuticals, such as Bendectin/DES, have shown that the side effects of these products can appear a very long time after victims have been exposed to them, and certainly after more than a period of ten years. It should be stressed, however, that the ten-year long-stop period is problematic not only in relation to pharmaceuticals. The Howald Moor case decided by the ECtHR[47] has shown that this comparatively short long-stop period potentially violates the right to a fair trial guaranteed by art 6 of the European Convention on Human Rights.[48]

The best solution would, therefore, likely be to remove the ten-year long-stop period from the PLD altogether, at least in the case of bodily injury. This would reflect the special status of bodily integrity as a protected interest in many European legal systems. It would also put an end to the current problematic situation where, in many Member States, a person who suffers bodily harm is less protected when his injuries were caused by a defective product and he sues for compensation on the basis of the PLD regime than when he incurred harm through another cause and/or can rely on another basis of liability. Should the PLD’s long-stop period be removed, national long-stop periods would instead apply to bodily injuries caused by defective products; these are, however, quite diverse.[49]

The ten-year period of art 11 PLD should be abolished altogether, though, irrespective of the type of harm. If it should still be preserved in cases not involving bodily harm, however, it would at least have to be converted from an extinction period (leaving not even a natural obligation) into a prescription period; the former is alien to the tort law systems of all Member States. Since such a rule would then rather belong to the provision on limitation in art 10 PLD, a new paragraph would have to be inserted there between what are now paras 1 and 2:

(1a) Member States shall provide in their legislation that damage within the meaning of Art 9 lit b can no longer be recovered after ten years from the day on which the producer put into circulation the actual product which caused the damage.

8 Recourse

The final guiding principle of the ELI Innovation Paper contends that a revised product liability system should require or introduce a recourse system in order to allocate the financial consequences of a successful claim by an individual to the party to whom responsibility for that loss is ultimately attributable. Where such allocation is not possible, a mechanism for sharing the financial burden proportionally amongst all the relevant parties might be needed instead.

The current PLD defers this matter to national law, but this can produce a wide range of variation in the extent to which a party held liable by an injured individual is able to seek recourse from other parties. The ELI Innovation Paper noted the potential impact of statutory limitation periods for seeking recourse and argued that a clear and consistent recourse system is required, which should be introduced in a revised product liability system. Such a system could operate on a default basis. Parties could make alternative arrangements, although these might be subject to some degree of control (cf the Late Payment Directive).

9 Combining strict liability for defective products with liability for failure to comply with obligations under Product Safety and Market Surveillance Law

The PLD focuses on strict liability for defective products, and this should certainly remain the focus also under a revised PLD. However, under national laws, several competing liability regimes have survived, including general fault liability based on the defendant’s negligence in causing death, personal injury or property damage and/or special fault liability based on the defendant’s failure to comply with obligations under product safety or market surveillance law. For instance, where a product was safe when made available on the market, but later, due to subsequent developments, begins to pose a risk to the health and safety of consumers, a producer would be under an obligation to continue to monitor such developments and to take appropriate corrective measures (resulting, in the worst case, in a full recall of the product).

So far, it has been left to national law to provide for liability in such cases, arguing either that failure to comply with the obligation amounted to negligence (and that this negligence has caused death, personal injury or property damage), or that violating provisions in product safety law, whose purpose it is to prevent such harm, may in itself give rise to liability where the relevant risk has materialised. While a revised PLD should certainly not derogate all potentially competing liability regimes under national law (and it should especially not derogate contractual liability), consideration should be given to whether liability for failure to comply with certain obligations under product safety and market surveillance law should be harmonised in order to create more of a level playing field within the Union.

III Liability for artificial intelligence

While at least some applications of AI technology should be covered by an updated PLD as argued above,[50] others will fall outside its scope, and certain losses will not be indemnified under the PLD regime.[51] Liability for such residual risks may nevertheless deserve special attention and justify at least some legislative intervention due to the peculiar features of such technologies highlighted in the following.

As with other risks at present, the question also remains whether someone other than the manufacturer may be held liable for certain risks for different reasons, which also triggers the question of recourse between multiple tortfeasors. The best examples (which would also translate into the application of AI technology) are motor vehicles whose keepers are strictly liable in some (but not all) Member States, irrespective of the question of whether the risk of the car that materialised can be traced back to a product defect (in which case the victim has the choice of suing either the keeper or the manufacturer). These regimes would invariably continue to apply with respect to fully autonomous vehicles as they are triggered by a mere involvement of the car in the accident irrespective of whether it was driven by a human or by AI.[52]

A Peculiar features of emerging digital technologies that impact on liability

In the face of extraordinary technological progress, the premise is that the law of liability should be able to cope with emerging digital technologies as well, provided that victims are not left uncompensated or undercompensated on the sole basis of the technology causing the damage, that incentives to prevent harm and minimise risks are properly allocated, and that victims can effectively access justice and claim damages under equal and fair conditions regardless of the technology employed. However, emerging technologies present certain distinctive features, each of which may be only gradual in nature, but whose dimension and combined effect result in disruption and may interfere with the effective achievement of traditional liability policy goals.

The following five distinctive features encapsulate the disruptive potential of AI in combination with other emerging technologies: complexity, increasing autonomy, opacity, openness, and vulnerability. These features would need special consideration when drafting a possible EU liability regime for risks not otherwise covered, eg by an updated PLD. Such common distinctive features[53] must be identified and compared with previous situations to which existing rules are well adapted, in order to assess the adequacy of existing solutions or the need for reform.[54]

Some key concepts underpinning traditional liability regimes could, in any case, be shaken by such disruptive features. That might render existing regimes insufficient or partially inadequate. The adequacy and completeness of liability regimes in the face of technological challenges are of extraordinary societal relevance. Should the liability system reveal insufficiencies, flaws and gaps in dealing with damage caused by emerging technologies, victims may end up totally or at least partially uncompensated. The social impact of a potential inadequacy of existing legal regimes to address new risks created by AI-driven technological ecosystems might then compromise the expected benefits and aggravate the social perception of risk, undermining the acceptance of emerging technologies.

1 Complexity

Emerging technologies, especially when integrated into sophisticated technological ecosystems, show a considerable level of complexity. Such complexity manifests itself in three layers: internal logical complexity, the plurality of participants and sources contributing to the operation of the system, and the ecosystem of connected objects (eg sensors, networks, software, data collectors, platforms).

Algorithms driving sophisticated autonomous systems imply a high level of complexity in design as well as in operation. That adds opacity to the internal processing of the autonomous system, conceals the criteria relevant to the decision making, and reduces the comprehensibility of the outcomes. Indeed, the opacity of algorithm/AI schemes, due to the complexity and the lack of transparency of the whole procedure, normally means that the addressee is unaware of the preconditions, the criteria, and the procedural aspects of the algorithmic decision.

Complexity also manifests itself externally. In the design, operation, and functioning of these ecosystems, a plurality of actors can participate or be involved in some way: software and app developers, algorithm designers, data providers, sensor manufacturers, system operators, producers of each device, part or component, DLT providers, monitoring service providers, etc.

Moreover, complexity also relates to the multiplicity of parts, components, devices and systems that make up a technological ecosystem – eg an autonomous car, a sophisticated surgical robot, a connected smart home system, or an algorithm-driven automated financial advisor.

From the combination of complexity and opacity, practical problems and legal challenges immediately arise.

(a) The issue of multiple actors. In all its facets, the increasingly high complexity embedded in applications of new technologies triggers an obvious practical problem with legal relevance: multiple actors could contribute to the causation of the damage. A plurality of actual or potential tortfeasors is certainly not a new problem for tort law, which indeed provides solutions.[55] However, in these cases, the multitude of players may act without prior coordination or planned intervention, their contribution may be occasional or spontaneous, and the participation of some players can be totally unknown or even unforeseen by the main operators (a data provider, a hacker, a non-interoperable system, an inexperienced user). Hence, under some circumstances, the flawed functioning, harmful outcomes, or damaging operation of the system can be provoked by a lack of interoperability among components, interaction with other unexpected components or software, incorrect data, or inexpert or inadequate use by the user. In such scenarios, the damage cannot easily be attributed to a specific component, a well-defined cause, or a single actor. The damage derives from the ecosystem as a whole.

(b) The issue of multiple causes. Far from the classical monocausal conception of causing harm, an increasingly complex AI-driven system (in combination with other emerging technologies) also reveals a plurality of possible causes. Frequently, the damage results from a conjunction of intertwined effective causes and has been collectively triggered by multiple actors. This situation is not unfamiliar to current legal systems either: rules dealing with damage caused by multiple causes are indeed provided for in all jurisdictions. Nonetheless, AI-driven systems add further intricacies to that well-known problem.

(c) The issue of successive causes. Unlike with traditional products, once a sophisticated technological product has been put into circulation by the original manufacturer, subsequent actors can intervene in its marketing, use, or upkeep without the manufacturer’s participation. Accordingly, in the production-use cycle of the product, subsequent activities, tasks, and causes can interfere with and contribute to the likelihood of damage being caused. That multi-layer process implies the overlapping and convergence of many sources of damage – software updates, personalisation options chosen by the end user, self-learning actions, data collection.

(d) The issue of opacity. Opacity adds further complexity thereto. In a context of low transparency and limited explainability (due in part to reasoning by association rather than a but-for approach), it is difficult to unveil the cause. Not surprisingly, the process of discovery and evidence becomes costly and complicated, and is not always feasible.

Accordingly, imposing logging duties on operators to facilitate evidence and introducing various solutions aimed at alleviating or shifting the victim’s burden of proof seem reasonable options to consider.

2 Increasing autonomy

The second challenge is linked to the level of autonomy and the machine-learning capabilities that algorithm/AI-driven systems, as intelligent agents, may have. The increasing autonomy of algorithm/AI-driven systems constitutes one of the most disruptive factors of the emerging digital technologies. Autonomy[56] is, nevertheless, a matter of degree. It must be determined at which point traditional solutions for the allocation of legal effects and the attribution of liability become inadequate and new solutions are needed.

Autonomy has one of the most perturbing impacts on classical liability regimes. The classical fault-based liability rules are inspired by an anthropocentric conception: concepts such as fault, conduct, intention or standard of care have been conceived, developed, applied, and interpreted essentially for and in relation to humans. To whom is liability attributed if a harmful outcome is not predetermined by the programming but is the result of an ‘autonomous decision’ of the AI system? How are notions of fault to be applied to the ‘conduct’ of autonomous systems? What is the standard of care against which to assess the operation of an AI-driven autonomous system? None of these questions is unanswerable. What is more, the answers need not be disruptive.[57] A continuity approach is a valid and legitimate option. But a debate is needed. The consequences of alternative policy decisions should be considered and duly pondered – eg preservation of current liability regimes, orientation towards strict liability models, mandatory insurance, extension of defective product liability regimes, formulation of standards, creation of sectoral compensation funds, legal recognition of electronic personhood. Effects on innovation, production costs, the acceptance of emerging technologies by the population, and the robustness of the system have to be assessed and factored into the policy equation.

3 Opacity

The complex set of instructions, criteria, weighting factors, data or alternative options an AI-driven system operates on is not normally visible (nor easily understandable) to the end user.[58] The criteria on which decisions are based are often unknown, and the design of the underlying process opaque. This lack of transparency exacerbates the complexity and the uncertainty in allocating liability. Results are produced by association between unforeseen (or unforeseeable) factors, which cannot be explained by deductive or even non-deductive inferences.

In many cases, therefore, mere transparency as to such elements would not ensure sufficient comprehension of the criteria guiding the decision-making process, the reasons for a malfunction, or the causes of the damage. In sum, the explainability of complex technological systems is limited, costly, and not always fully feasible.

4 Vulnerability

AI-driven technological ecosystems are technologically vulnerable. Vulnerability refers to two situations.

On the one hand, AI systems are heavily dependent upon data – collected data, test data, learning data, processed data, machine-generated data, user data, personalising data. Data determine the accuracy of outcomes, fuel decisions, feed the machine-learning process, and ensure the very operation of the system. Data dependency is a source of vulnerability: insufficient, inaccurate, or biased data compromise the performance of the AI system.

On the other hand, AI systems are exposed to cybersecurity attacks or breaches. In sophisticated AI systems driving complex technological ecosystems – autonomous drones, autonomous vehicles, smart home systems – the consequences of a cybersecurity breach can be immense.

The vulnerability feature signals further weak points of AI systems and, therefore, the magnitude of the exposure. Dramatic personal injury can be caused by a poorly performing surgical robot fed with wrong data or subject to a hacking attack. Likewise, the consequences of a cybersecurity breach disrupting the operation of a fleet of autonomous drones or autonomous vehicles can be catastrophic. Furthermore, the liability impact could also be aggravated by the multiplying effect of automation and virality: the magnitude of the harm caused by AI is amplified, and damage can easily go viral and propagate rapidly in a densely interconnected society.

5 Openness

Sophisticated technological ecosystems are not complete once put into circulation; they often need to interact with other systems or data sources in order to function properly. They therefore need to remain open by design and, unlike traditional products, evolve by incorporating updates, additions, and upgrades throughout their life cycle and after circulation. Updates and upgrades may be delivered inconsistently across the different interconnected devices. The respective producers may react asymmetrically in providing updates, releasing security patches or fixing vulnerabilities. And the proactive cooperation of the user or the operator may be required to complete the implementation or render it effective.

All these distinctive features are increasingly disruptive, which counsels against oversimplifying the analysis of new technologies. They are not simple, incremental evolutions of previous technologies. In some respects, they reach the ‘point of disruption’, inviting clarification, adjustment or reconsideration of existing concepts, rules, and methods. They may therefore be used as justifications for adjustments to liability regimes if such technologies contribute to causing harm.

B Main challenges

Independently of the product liability regime addressed above, the question arises whether there should be a specific liability regime for AI in light of the particular features of this technology just mentioned.

Specific harm or losses resulting from the use of AI may be related to the quality of the software (‘manufacturing defects’ or ‘design defects’), but they may also be the result of the interconnection with data available in the world, which, pooled together or used in a specific way, may produce an undesirable result and cause losses. Such losses could be considered the result of ‘interconnectivity defects’, given that they arise from the interconnection of the software. This latter type of defect might also result from the interconnection of one AI system with one or more other AI systems.

In the case of harm resulting directly or indirectly from the use of AI, three main challenges can be identified:

  1. Causation. Is it possible to identify a causal link between the interaction with an AI or even a Machine Learning (‘ML’) device and the loss resulting from such interaction? And, given that the AI may have been used in interaction with various other ML algorithms, is it possible to attribute a specific negative result to one specific AI?[59]

  2. The triggering factor for AI liability. The triggering factor of such liability could be the identification of a breach of a duty of care or an assessment of the specific risk of such AI. The issue is therefore to determine the optimal triggering factor given the specificities of AI, its capacity to learn (through a supervised, unsupervised or reinforced process) and its relative autonomy (in particular for ML). This will be dealt with below.

  3. Drawing the line to specific problems of AI applications subject to specific regimes. Noting that AI systems may cause damage that is potentially covered by special regimes other than product liability (or general extra-contractual regimes in certain jurisdictions), overlaps and interplays should be carefully considered. AI systems may cause harm related to or arising from privacy violations, infringements of the right to honour and/or to one’s image (reputational damage), discrimination, lost chances (recruitment, medical care, etc), purely economic losses, etc. Especially relevant are the digital risks of bias and the ensuing restricted access to services, digital content, markets, or infrastructures due to ‘defective’ AI systems.

C Causation

In addition to what has already been said with regard to proving causation in product liability cases,[60] it is important to note that the impact of causal uncertainty depends significantly upon the bases of potential claims. In jurisdictions where liability for certain technologies does not at present depend upon the individual fault of some wrongdoer, for example, it is rather the mere involvement, use or activation of such technology that triggers liability, which as such is not necessarily challenging to prove. After all, in a car accident, for example, the fact that a car was involved is obvious and can hardly be disputed by the defendant, and this does not depend upon the technology driving that car. If strict liability for some novel technology were introduced, it would typically also be linked to the mere impact of such technology upon the person or object harmed rather than to a flaw within it that the victim has to prove.

However, without extending the range of strict liabilities to such new technologies, proving some flawed conduct as the origin of the harm may pose a significant hurdle for the victim, particularly as the peculiar features of such technologies are known to present obstacles for an outsider seeking to identify some inherent problem within the processes involved. One way to alleviate that burden could be to specify ex ante the duties of care that are expected under the circumstances, thereby allowing the victim to proceed from that particular yardstick without going on a fishing expedition.

As already argued above, shifting the burden of proving causation often determines the outcome of the case, in light of the difficulties that both sides face. While the peculiar features of AI[61] may provide justification, one would still have to consider less drastic tools, including logging obligations coupled with a duty to disclose the information thereby collected. Key Finding 26 of the NTF Report suggested the following factors to consider when deciding where to place the burden of proving causation:

‘(a) the likelihood that the technology at least contributed to the harm;

(b) the likelihood that the harm was caused either by the technology or by some other cause within the same sphere;

(c) the risk of a known defect within the technology, even though its actual causal impact is not self-evident;

(d) the degree of ex-post traceability and intelligibility of processes within the technology that may have contributed to the cause (informational asymmetry);

(e) the degree of ex-post accessibility and comprehensibility of data collected and generated by the technology;

(f) the kind and degree of harm potentially and actually caused.’

D Possible liability regimes

If the suggestion to expand the notion of ‘product’ to software (and therefore also to AI) is accepted, there will still be cases that are not covered by the amended PLD regime, in particular if the losses concerned do not fall under the (perhaps amended) art 9 PLD.[62] Even for cases within the scope of the updated PLD regime, liability may also be established on other grounds under their respective conditions.[63]

A key trigger of any liability for AI is the autonomous capacity of the AI to produce results, but also to cause losses to third parties.[64] Given the absence of any legal independence (or e-personality), which we also deem unnecessary for tort law purposes,[65] a fault-based liability regime may prove illusory in cases involving certain AI applications, as it may leave certain victims without any compensation at all, or at least undercompensated, despite their being deemed worthy of protection.[66]

1 Types of liability regimes

One can generally imagine alternatives to, or at least variations of, liability regimes that depend on personal misconduct, which may be suitable for the risks triggered by AI systems. While strict liability for risk dispenses with the search for flawed human conduct altogether, liability for a presumed breach of an objective duty of care still proceeds from the idea of fault, but alleviates the position of the victim in pursuing that basis of liability. Vicarious liability likewise does not depend upon some personal wrongdoing of the defendant, but attributes risks on a different basis, which may also be worth considering in this context.

a Strict liability for risk

A strict liability regime for risk is usually introduced for devices or behaviours that create specific risks for society. Liability is then commonly channelled onto the keeper of the dangerous object, ie the person who benefits from and is in control of that object. The mere use of a given device may per se trigger a high, specifically identifiable risk of loss. ML/AI, however, is often not inherently dangerous, as its dangerousness depends on the area and the extent of its use. There might nevertheless be specific situations, or particular AI/ML applications, which are inherently dangerous by their mere use; for those, a strict liability regime might be envisaged. High-risk AI pursuant to art 3 Draft AI Act would, however, likely not per se qualify as ‘inherently dangerous’ so as to justify a far-reaching strict liability regime.

b Presumed breach of a duty of care

As for other autonomous entities that may trigger losses to be attributed to a natural person or a legal entity, a rebuttable presumption of a breach of an objective duty of care might strike the right balance between the two extreme alternative outcomes – no effective liability at all (if liability requires proof of reproachable human conduct) and excessive liability (as in a far-reaching strict liability regime which does not distinguish between various degrees of dangerousness).[67]

First, the black-box characteristic of AI/ML[68] may mean that it is often impossible for the victim to prove a specific breach of an objective duty of care by the natural person or legal entity on which the AI/ML depends. Such a person may, however, have breached some specific duties of care, eg may have neglected to adopt appropriate measures which would have allowed the losses to be avoided. Those duties of care may be linked to the programming of the AI/ML and to its design, but also to monitoring it or implementing safeguards as to its interconnectivity features.

Second, the liability regime should promote appropriate conduct without preventing the evolution of technologies. With a liability regime based on a presumed breach of a duty of care, designers or producers of such AI have an incentive to develop AI/ML with safeguards against potential harms, and to implement systems which can explain the processes that led to a given loss. In other words, designers or producers of such AI/ML may be in a better position to avoid the risks created by the use of AI; they are therefore the superior risk bearers, which may, in line with an economic analysis of law, justify imposing on them the burden of proving that the objective duties of care have been complied with.

Third, the advantage of such a liability regime based on a presumed breach of an objective duty of care is that it is sufficiently dynamic to adapt to the evolution of technology, which is very rapid in this field. Thus, to avoid being held liable, the natural person or legal entity would need to prove that it had done everything that could reasonably be expected given the state of knowledge and scientific development in the field at the time.

c Vicarious liability

A separate option to address the risks presented by AI is the potential expansion of the notion of vicarious liability, leaving the respective national regime of liability for others intact, but extending it (either directly or by way of analogy) to functionally equivalent situations where use is made of AI instead of a human auxiliary. We endorse the position of the NTF Report[69] as suggested by its Key Findings 18 and 19, which may complement strict liability as well as liability for a presumed breach of a duty of care or fault liability.

The scope and conditions for the application of vicarious liability vary from one country to another, as a result of the different ways national legal systems have developed and the resulting broader or narrower scope of application of strict liability they adopted. This is why a harmonised regime of AI liability should not disrupt existing legal systems in the Union more than is necessary and should not determine the details of when the act or omission of an auxiliary gives rise to liability on the part of the principal. However, the harmonised regime could provide that, where the use of a human auxiliary would give rise to the liability of a principal, the use of a digital technology tool instead should not allow the principal to avoid liability. Rather, it should give rise to such liability to the same extent.

As the law stands in many jurisdictions, the notion of vicarious liability requires the auxiliary to have misbehaved or performed badly. In the case of AI, this raises the question of the benchmarks against which such ‘conduct’ should be assessed. In line with Key Finding 19 of the NTF Report, we hold that the benchmark for assessing performance by autonomous technology should primarily be the benchmark accepted for human auxiliaries; but once autonomous technology outperforms human auxiliaries in terms of preventing harm, the benchmark should be determined by the performance of comparable technology available on the market.

2 A possible parallel with liability for animals

Liability of animal keepers is often based on a presumed breach of an objective duty of care. As with AI, the absence of sufficient objective care is not necessarily easy to prove against an animal keeper, as the reprehensible behaviour of the animal may result from many previous actions of its keeper (eg during training), which often cannot be reduced to one single act. In addition, animals have a certain autonomy of will and may therefore behave in unpredictable ways, outside of trained patterns. It is then difficult to prove a breach of a duty of care on the part of the animal keeper, but at the same time it is justified that the keeper should be held liable for losses caused by the animal under its control. Knowing the risk of such potential behaviour, the keeper is liable, as he should have taken all necessary measures to avoid any harm. The absence of diligent and objectively reasonable behaviour must therefore be presumed. In many European regimes, animal keepers have the possibility of proving that they took all reasonable measures to avoid the occurrence of the loss caused by the animal.[70] Of course, such proof is and should be more difficult to provide where the inherent risk of the animal is higher.

3 An adaptable regime

Since the use of AI does not necessarily pose a higher risk to society than other activities, liability for the use of AI could, under certain circumstances, be based on a presumed breach of an objective duty of care. We endorse the position of the NTF Report, however, that such presumptions should be applied cautiously and under conditions as suggested by its Key Finding 27, such as ‘if disproportionate difficulties and costs of establishing the relevant standard of care and of proving their violation justify it’.[71]

More generally, we confirm that no ‘one size fits all’ solution is suitable for ‘AI technology’ as such, since the applications of this technology are so diverse in practice and the risks these may bring about of such varying degrees and nature that a uniform liability regime for any application of AI seems overreaching.[72]

E Damage

As a general rule, we propose not to interfere with the definition of compensable harm in the Member States, in particular as regards non-pecuniary harm or pure economic loss. These areas are currently regulated so differently throughout the EU that any attempt to approximate the laws of the Member States in that regard in only very specific damage scenarios may lead to disruptions in light of the other areas of tort law where such adjustments would not (and could not) be made accordingly.

In particular, we strongly advise against the introduction of new hybrid definitions of damage such as ‘immaterial harm that results in a verifiable economic loss’ or any other terminology which will not easily fit into existing concepts of liability in the Member States without causing distortions of the overall understanding of compensable loss.

However, we would like to point out that AI technology may negatively affect interests of victims which fall outside the range of the types of harm traditionally addressed by tort law regimes, at least in some Member States, as already mentioned in the introduction above.[73] This includes, for example, the harm caused by biased recruitment software, which raises questions of loss of a chance (recognised in some, but not all, Member States), pure economic loss (which some Member States’ tort laws are reluctant to indemnify), or violations of personality rights other than the interest in one’s bodily integrity. This should be borne in mind when designing any specific liability regime for certain AI technologies, and we believe that such cases should be dealt with separately and therefore excluded from a more general tort law solution.


Note

This response by the European Law Institute (ELI) to the public consultation was adopted by the Council of the ELI and published online at <https://europeanlawinstitute.eu/fileadmin/user_upload/p_eli/Publications/Public_Consultation_on_Civil_Liability.pdf>. The document is reproduced as disseminated on said website.



