Navigating product liability in high-security sectors: Addressing AI-driven risks under German and European law


New technologies meet new liability frontiers: Rapid technological change and increasing regulatory complexity are reshaping the risk landscape for companies in the high-security sector. As digital components, networked systems and AI become more central to high-security products, failures can cause significant operational, economic and national-security impacts. These developments raise new questions about liability, particularly where autonomous systems limit human intervention. The first official draft of a new German Product Liability Act (ProdHaftG-E),1 published in September 2025 and intended to implement the revised Product Liability Directive (EU PLD),2 will, together with the EU Artificial Intelligence Act (EU AI Act),3 shape how companies manage product safety, digital innovation and risk allocation across the supply chain.

Liability risks and legal frameworks: current starting point

Traditionally, products in the high-security sector have been subject to strict regulation and close supervision, as they are used in high-risk settings where reliability, control and cybersecurity are essential. The sector covers not only defense and military systems, but also dual-use products4 deployed in both civilian and military settings, such as secure communications, cryptographic tools, surveillance technologies and advanced access control and cybersecurity systems, and, more broadly, any system whose failure could impact national security, public safety or essential services. The regulatory framework is shaped by established European and national rules, including the German Product Liability Act (ProdHaftG), the German Product Safety Act (ProdSG) and sector-specific requirements such as the German KRITIS-Regulation,5 the EU NIS 2 Directive6 and the EU Cybersecurity Act.7 Compliance with these regulations, together with general contractual and tort principles, is central to determining liability in the event of product failure or harm. The increasing integration of digital components and AI into high-security products, combined with complex supply chains, introduces new risks and uncertainties, especially regarding control, defectiveness, human oversight and supply chain accountability. At the same time, regulatory developments such as the EU AI Act and the revised EU PLD establish new standards and liability rules, requiring companies to adapt their compliance, documentation and risk management practices.

The evolving liability landscape

The recast EU PLD and the EU AI Act reshape the regulatory and liability environment, introducing significant adjustments and extensions to respond to emerging technologies and risks. Both frameworks address digitalization and the adoption of AI-enabled systems, but with different objectives and scopes. Product liability is extended beyond hardware to include software updates, data quality, algorithmic behavior and cybersecurity aspects, all of which can render a product defective even without physical faults. Consequently, high-security products must operate reliably in complex environments and maintain demonstrable control over AI-driven features. As supply chains become more interdependent, integrating third-party software and pretrained models adds further liability exposure, demanding heightened due diligence throughout development and deployment. Although defense contracts with government clients often limit liability due to mission-critical risks and procurement requirements, regulatory compliance remains crucial: such limitations may not override statutory liability under applicable product safety and liability laws, and suppliers may still face exposure under the EU PLD for third-party claims or supply chain risks.

Two diverging regimes: EU AI Act and EU PLD

The EU AI Act is a comprehensive regulatory framework, not a liability statute. It is designed to ensure that AI systems are trustworthy, safe and respect fundamental rights within the European Union. It introduces a risk-based approach, categorizing AI systems into minimal, limited, high and prohibited risk tiers, and imposes corresponding compliance obligations. The Act applies broadly, including to organizations outside the EU if their AI systems' outputs are used within the EU. It excludes AI systems used exclusively for military, defense or national security purposes,8 but dual-use systems and AI systems not used exclusively for such purposes remain in scope.

The EU PLD establishes a strict liability regime for defective products, protecting natural persons and property that is not used "exclusively for professional purposes".9 Governments and corporate buyers are not eligible to bring EU PLD claims as injured parties. Claims may nevertheless arise where civilians interact with the technology, where cybersecurity breaches affecting critical infrastructure harm individuals, or in dual-use scenarios in which security products are used by civilians. The EU PLD must be implemented by Member States by December 2026. The German draft bill of the new ProdHaftG-E largely mirrors the Directive, but the legislative process is at an early stage, and the final form and timing remain uncertain.

Key changes relevant for the high-security industry:

  • High-risk AI obligations: For companies operating in the high-security sector, the EU AI Act brings significant compliance obligations, especially if their AI systems are classified as high-risk. High-risk AI systems are subject to stringent requirements, including continuous risk management, robust data governance, technical documentation, event logging, transparency, human oversight and cybersecurity measures throughout the lifecycle of the AI system. Failure to comply can result in substantial financial penalties and reputational risks. An AI system is considered high-risk if it falls into specific categories outlined in the Act, such as use in critical infrastructure, essential services, law enforcement or border control. Additionally, AI systems that are safety components of products regulated under certain EU harmonization laws and that require third-party conformity assessment are also deemed high-risk. The classification is based on the intended purpose of the AI system, not solely on its technical characteristics or real-world risk level.10 For a simplified illustration of this two-route classification, see the sketch following this list.
  • Expanded "product" scope: Article 4 EU PLD broadens the definition of "product" to include software11 and digital manufacturing files, thus encompassing AI systems and digital components (standalone or embedded), such as those in defense platforms, surveillance systems or autonomous vehicles.
  • Defectiveness and lifecycle liability: Article 7 EU PLD expands the concept of defectiveness in several ways. Among other things, it now requires compliance with all relevant product safety requirements, including safety-relevant cybersecurity requirements. The reasonably foreseeable use of the product is now also covered, as well as the reasonably foreseeable effect on other products "including by means of inter-connection". Products can become defective after their initial release due to updates, retraining, self-learning or later integration,12 leading to ongoing liability.
  • Broader liability net: Article 8 EU PLD extends liability to entities that substantially modify a product, such as through upgrades or the integration of new (digital) components (e.g., defense platforms with AI-enabled decision-support tools or subcontractors adding analytics modules to existing platforms). Joint and several liability applies throughout the supply chain.13
  • New evidence disclosure (with presumption of defect as a consequence): Article 9 EU PLD enables courts, under certain low-threshold conditions, to order disclosure of relevant technical information, which is particularly significant for AI systems with limited explainability. Protection of confidential data or trade secrets is possible but subject to the court's discretion. If the court finds that the defendant did not fully comply with its disclosure obligation, the defectiveness of the product is presumed.
  • Further low-threshold presumptions in favor of claimants under Article 10 EU PLD: First, the defectiveness of a product is presumed if the claimant (only) demonstrates that the product did not comply with mandatory product safety requirements. Second, the causal link between defect and damage is presumed where the damage caused is of a kind typically consistent with the defect in question. Third, if, despite disclosure of evidence, the technical or scientific complexity of the case makes it excessively difficult for the claimant to prove defect and/or causation, the claimant need only show that it is likely that the product was defective and/or that there is a causal link. This third presumption is particularly relevant for the defense sector due to its high complexity and can shift the burden of proof regarding both the defect and the causal link to the manufacturer.
  • Expanded damage concept: Liability covers not just personal injury and property damage, but also destruction or corruption of non-professional data.14 This may arise, for instance, where software or AI defects corrupt or delete personal data of natural persons interacting with security systems, such as employees, visitors, travelers, or other civilian users of access-control, surveillance or critical infrastructure technologies. Compensation for immaterial losses is possible if provided under national law (as in Germany).
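
To make the classification logic in the first bullet above more tangible, the following minimal Python sketch encodes a first-pass internal screen reflecting the two routes described there (listed use-case categories and the safety-component route). It is purely illustrative: the category keywords, data fields and function names are simplified assumptions of ours rather than the Act's wording, and its output is a prompt for detailed legal review, not a legal classification.

from dataclasses import dataclass

# Simplified, non-exhaustive triggers loosely modeled on the EU AI Act's
# high-risk routes (Annex III-style use cases; safety components subject to
# third-party conformity assessment). Purely illustrative -- the legal
# classification turns on the Act's full text and guidance, not this list.
ANNEX_III_STYLE_CATEGORIES = {
    "critical_infrastructure",
    "essential_services",
    "law_enforcement",
    "border_control",
}

@dataclass
class AISystemProfile:
    name: str
    intended_purpose: str               # category keyword from an internal taxonomy
    is_safety_component: bool           # safety component of a regulated product?
    needs_third_party_assessment: bool  # third-party conformity assessment required?

def first_pass_high_risk_screen(profile: AISystemProfile) -> bool:
    """Flag systems for detailed legal review; NOT a legal determination."""
    # Route 1: intended purpose falls within a listed use-case category.
    if profile.intended_purpose in ANNEX_III_STYLE_CATEGORIES:
        return True
    # Route 2: safety component of a regulated product requiring
    # third-party conformity assessment.
    return profile.is_safety_component and profile.needs_third_party_assessment

if __name__ == "__main__":
    drone_module = AISystemProfile(
        name="perimeter-surveillance-classifier",  # hypothetical example system
        intended_purpose="critical_infrastructure",
        is_safety_component=False,
        needs_third_party_assessment=False,
    )
    print(first_pass_high_risk_screen(drone_module))  # True -> escalate to legal review

In practice, such a screen would feed an internal register of AI systems maintained by compliance teams, with every positive result escalated for a documented legal assessment.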

Key German law liability aspects

Liability for high-security and defense technologies under German law is primarily governed by contract law, general tort law, the ProdHaftG, and sector-specific safety requirements. Software and AI-related failures are a growing concern, especially when autonomous systems or complex supply chains cause security-critical malfunctions, such as misclassification by AI identification modules, vulnerabilities introduced by software updates, unpredictable drone behavior due to poor training data, or failures traceable to unverified third-party models. Liability in this area is likely to arise primarily from breaches of contract in B2B and B2G contexts. However, as outlined above, liability towards natural persons may also occur under the EU PLD/ProdHaftG or tort law.

Contractual liability

Contractual liability under Sections 280 et seq. of the German Civil Code (BGB) is central in high-security projects, given the detailed agreements outlining performance, integration duties and disclosure requirements among the relevant parties, such as contractors, suppliers, integrators and government entities. Liability typically arises from breaches of such express or ancillary contract terms. In German contract law, fault is presumed once a breach has been established, unless the contractor proves otherwise. This presumption makes it harder to avoid liability, especially when working with complex AI systems. While "fault" cannot be attributed to AI itself, liability attaches to the company based on organizational failings, lack of oversight or insufficient compliance measures. Compliance with applicable regulatory requirements can therefore become a key factor in determining whether a breach of duty has occurred or whether the contractor can exculpate itself. Contractors are generally liable for employees and agents (Section 278 BGB), including subcontractors and external providers, such as those involved in testing, training or deploying AI models. Recourse agreements and indemnification clauses can shift liability, but exclusions for intent or gross negligence are subject to strict legal limitations and must be drafted with care. Furthermore, product liability under the ProdHaftG cannot be waived by contract.

Tort liability

Tort liability under German law adds further exposure, particularly where third parties are affected. Section 823 (1) BGB (general tort liability) covers harm to protected interests (life, health, property, personality rights). This can become relevant where system failures cause injury or damage, such as an AI-supported identification system misclassifying an individual or an autonomous surveillance asset causing property damage. Section 823 (2) BGB applies where damage results from the breach of statutory duties intended to protect individuals. In the high-security context, this may include violations of cybersecurity obligations under the IT Security Act or the NIS 2 Directive, data protection duties where personal data are processed, safety norms incorporated into technical regulations, or the requirements of the EU AI Act. In tort, fault must be proven, but a breach of such rules may constitute negligence per se, lowering the evidentiary burden. Liability also extends to individuals performing tasks under company control (Section 831 BGB), with exoneration possible only if adequate selection, instruction and supervision are proven. Adherence to relevant regulatory standards is also essential in this context.

Risk mitigation and strategic recommendations

To address the evolving liability landscape and mitigate exposure, companies should implement a holistic risk management approach that combines compliance, operational and contractual measures:

  • Comprehensive Risk Assessment: Regularly map and assess liability risks across all products and systems, with a focus on digital and AI components, supply chain dependencies and dual-use scenarios.
  • Robust Documentation and Traceability: Ensure documentation of all stages of the AI system lifecycle, including development, testing, updates and AI-supported decision processes, to demonstrate compliance and facilitate defense in the event of claims or regulatory inquiries; a purely illustrative logging sketch follows this list.
  • Integrated Cybersecurity and AI Assurance: Embed cybersecurity and AI risk management into product safety frameworks. Continuously monitor vulnerabilities and ensure that updates, retraining and third-party integrations meet regulatory and contractual standards.
  • Contractual and Insurance Safeguards: Update contract templates and supply chain agreements to reflect new liability triggers and the documentation and compliance obligations that must be passed down to subcontractors and other parties within the supply chain. Review and maintain adequate insurance coverage (e.g., E&O, cyber liability) and ensure clear allocation of responsibilities and indemnities throughout the supply chain, within the limits of applicable law.
  • Internal Compliance and Training: Establish internal protocols for incident response, regulatory cooperation and evidence preservation. Train compliance, legal and engineering teams on new legal requirements, especially regarding documentation, cybersecurity and AI assurance.
  • Litigation and Process Strategy: Involve technical and legal experts early in disputes, particularly for complex AI-related claims, and anticipate court-ordered disclosure of sensitive technical information in future EU PLD claims, including training data, model details and logs. Proactively prepare to protect confidential information and trade secrets, as courts are required to balance disclosure obligations with the legitimate interests of all parties and may implement measures to preserve confidentiality.
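
To illustrate the documentation, traceability and evidence-preservation recommendations above, the following minimal Python sketch shows one possible way to keep a tamper-evident, append-only record of AI-supported decisions. All identifiers, field names and the hash-chaining scheme are illustrative assumptions, not a regulatory or contractual specification.

import hashlib
import json
from datetime import datetime, timezone

class DecisionAuditLog:
    """Append-only log of AI-supported decisions with hash chaining,
    so later tampering with earlier records is detectable."""

    def __init__(self) -> None:
        self._records: list[dict] = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, *, system_id: str, model_version: str,
               input_ref: str, output: str, operator: str) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "model_version": model_version,  # which model/update produced the output
            "input_ref": input_ref,          # pointer to stored input, not raw data
            "output": output,
            "operator": operator,            # human in the loop, if any
            "prev_hash": self._last_hash,    # link to the previous record
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self._records.append(entry)
        return entry

    def export(self) -> str:
        """Serialize the full log, e.g., for evidence preservation."""
        return json.dumps(self._records, indent=2)

if __name__ == "__main__":
    log = DecisionAuditLog()
    # Hypothetical access-control decision, for illustration only.
    log.record(system_id="access-control-gate-7", model_version="v2.3.1",
               input_ref="evidence-store/frame-123", output="entry denied",
               operator="guard-42")
    print(log.export())

Chaining each record's hash to its predecessor makes later alteration of earlier entries detectable, which supports the kind of verifiable documentation that disclosure requests under the new EU PLD rules may put to the test.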

1 See the draft bill of the new German Product Liability Act (ProdHaftG-E), available here.
2 Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products and repealing Council Directive 85/374/EEC, available here.
3 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), available here.
4 In this context it is also important for companies to assess whether their (AI) application falls within the scope of the EU Dual-Use Regulation (Regulation (EU) 2021/821 of the European Parliament and of the Council of 20 May 2021 setting up a Union regime for the control of exports, brokering, technical assistance, transit and transfer of dual-use items (recast), available here).
5 KRITIS-Regulation (Verordnung zur Bestimmung Kritischer Infrastrukturen nach dem BSI-Gesetz), available here.
6 Directive (EU) 2022/2555 of the European Parliament and of the Council of 14 December 2022 on measures for a high common level of cybersecurity across the Union, amending Regulation (EU) No 910/2014 and Directive (EU) 2018/1972, and repealing Directive (EU) 2016/1148 (NIS 2 Directive), available here.
7 Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act), available here.
8 Article 2 (3) EU AI Act.
9 Article 6 (1) (b) (iii) EU PLD.
10 For further guidance see the White & Case EU AI Act Handbook, in particular Chapters 6-8, available here.
11 Except for free and open-source software that is developed or supplied outside the course of a commercial activity, see Article 2 (2) EU PLD.
12 Article 7 (1), (2) (c) EU PLD; see also Recitals (19), (50) and (52).
13 Article 12 (1) EU PLD.
14 Article 6 (1) (c) EU PLD.

White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.

This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.

© 2025 White & Case LLP
