National security implications in ISDS vis-à-vis AI regulation

Alert | 9 min read

Technological advances create novel security risks, prompting States to adopt national security measures that restrict foreign investors in this space. As Artificial Intelligence (“AI”) becomes embedded in critical infrastructure, States are increasingly regulating AI systems. This note surveys recent cases where investors have challenged technology-related national security measures and considers how future Investor-State Dispute Settlement (“ISDS”) arbitral tribunals may address analogous AI-related issues.

ISDS and the Technology Sector

A State’s right to regulate foreign investment extends to the technology sector. For instance, several States have restricted Huawei’s participation in the deployment of 5G networks, including the Czech Republic (2018), Sweden (2020), the United Kingdom (2020 and 2022), and Poland (2025).1 Huawei threatened arbitration against most of these States, and proceedings against Sweden are underway (Huawei v. Sweden). In that case, Huawei seeks damages for alleged breaches of the bilateral investment treaty (“BIT”) between China and Sweden arising from its exclusion from Sweden’s planned 5G rollout.2 Over the last year, other investors have threatened arbitration in connection with technology-related disputes:

  • Wingtech and the Netherlands: In September 2025, the Dutch government invoked the Goods Availability Act against Wingtech’s Dutch subsidiary, Nexperia, alleging improper transfer of European production assets, financial resources, and intellectual property in breach of Dutch and European economic security rules. Local judicial proceedings remain ongoing, with an interim CEO appointed and most shares placed in a trust. Wingtech has issued a notice of dispute under the China-Netherlands BIT.3
  • Hikvision and Canada: In June 2025, Canada ordered Hikvision to cease all operations under the Investment Canada Act, citing national security concerns linked to Hikvision’s obligations under Chinese law to assist State security entities and the associated espionage risk. Hikvision has challenged the order before the Federal Court and threatened to bring a claim under the Canada-China BIT.4
  • Coupang’s investors and South Korea: Investors in US technology and online retail company Coupang have filed notices of dispute under the US-Korea Free Trade Agreement (KORUS), alleging that South Korea treated Coupang’s Korean subsidiary more harshly than comparable Korean and Chinese companies following a data breach.5

When reviewing national security restrictions on investments, arbitral tribunals have considered whether the applicable investment treaty allows the host State to limit an investor’s rights in order to protect its essential security interests. This issue arose in Devas v. India and Deutsche Telekom v. India, both of which concerned the annulment by Antrix (an entity wholly owned by India), at the direction of the Government of India on national security grounds, of an agreement with Devas concerning the lease of certain satellite spectrum. The two tribunals reached divergent conclusions: the Deutsche Telekom tribunal rejected India’s essential security defense, while the Devas tribunal reduced the damages awarded, concluding that the termination of the Devas-Antrix agreement was 60% motivated by India’s essential security concerns.6

National Security Risks arising from AI

As noted in our AI Watch: Global regulatory tracker, the AI regulatory landscape is evolving rapidly. In a future AI-related investment dispute, key factors for consideration will include the nature, purpose, and effect of the relevant AI regulation, as well as the language of the applicable investment treaty.

AI-derived risks that may prompt States to act on national security grounds, and which could affect foreign investors, include the following:

  • Data protection and export controls: These concerns arise from the storing, accessing and distribution of sensitive data by AI undertakings, including potential cross-border export of data and AI products.
    • AI undertakings must comply with the data protection obligations of the host State, such as those under the EU General Data Protection Regulation (“GDPR”), and may also face additional, stricter data storage and transparency requirements, as explained in our EU AI Act Handbook. Some EU Member States and the UK have also imposed fines on foreign AI companies that use biometric data from social media to provide law enforcement agencies with facial recognition tools.7
    • Foreign AI undertakings also remain subject to the national security and emergency laws of their home States, regardless of where they operate. They can be compelled to grant their home government access to data collected abroad, including personal data of foreign users in the host State. For instance, the US Clarifying Lawful Overseas Use of Data Act (“CLOUD Act”) requires all companies subject to US jurisdiction to preserve and disclose any communications and records in their possession, custody or control, while China’s National Intelligence Law mandates that all Chinese organizations cooperate with the State’s intelligence efforts.
    • States may also introduce legislation or impose regulations regarding AI. For instance, the US is strengthening its domestic AI infrastructure and has imposed stringent controls on exports of advanced computing chips used for AI to China (including Hong Kong and Macau) and other specified countries, as well as to entities with an ultimate parent located or headquartered in any of those countries.8 It remains to be seen how such restrictions may interact with the rights of foreign investors and with potential claims under applicable investment treaties.
    • Foreign AI undertakings are also subject to bulk sensitive data rules that States may implement. For instance, as explained in our previous alerts,9 the US Department of Justice has issued a final rule restricting bulk transfers of sensitive personal data – including genomic, biometric, health, financial, and geolocation data – to certain countries determined by the United States Government to be foreign “countries of concern,” such as China (including Hong Kong and Macau), Cuba, Iran, North Korea, Russia and Venezuela, as well as individuals and entities under their control. Intended to preserve US national security, this rule imposes compliance obligations on US persons and entities transacting with foreign AI undertakings, prohibiting or effectively limiting the volume and categories of data that can be shared across borders.
    • States may also prohibit or restrict outbound investment into entities engaged in frontier AI model training abroad, and this can impact AI undertakings operating in multiple jurisdictions. For instance, US person investors are subject to the relatively new Outbound Investment Program, which prohibits US persons from engaging in, or in some cases requires US persons to notify the US Department of the Treasury of, certain equity and debt transactions involving Chinese entities (including Hong Kong and Macau) that engage in specified activities related to the development of AI systems. Recent legislation, once implemented through regulations, will extend these prohibitions and notification requirements to additional countries of concern, including Cuba, Iran, North Korea, Russia, and Venezuela, as discussed previously.10
  • AI bias, model training, and harmful content: Large Language Models (“LLMs”) and other generative AI systems are trained on large datasets that may reflect multiple forms of bias. AI outputs in a host State can be shaped by political, cultural, and ideological assumptions embedded in the training data from an AI undertaking’s home State.11 This includes AI-generated content that is politically aligned with a particular State, even when users seek a neutral response. Where a host State determines that biased AI outputs threaten public order or national security, it may restrict or ban the relevant AI system entirely, which may curtail the operations of foreign AI undertakings.
  • Military use: While litigation proceedings between Anthropic and the US government have disclosed certain possible military uses of AI by States, such as mass domestic surveillance and lethal autonomous warfare,12 the full extent of potential applications of AI to advance military capabilities remains largely undisclosed to the public. Recent developments suggest that defense and intelligence State agencies may seek unrestricted access to advanced AI-integrated equipment and technologies beyond the scope of contractual agreements with foreign AI investors.13

Considerations for Investors

While States may have legitimate grounds to regulate AI systems on national security or public policy bases, foreign AI undertakings operating in regulated markets would be well advised to consider the protections available under applicable investment treaties. Depending on the treaty framework and the nature of the measures in question, the following protections may be relevant across a range of scenarios:

  • Fair and Equitable Treatment: Investment treaties commonly afford foreign investors protection against measures that are arbitrary, disproportionate, or lacking in transparency. This may be particularly pertinent where AI-related restrictions are imposed without public justification.
  • National Treatment: Treaty frameworks typically require extension of the same protection to foreign investors as the host State extends to domestic counterparts in like circumstances. This may be a relevant consideration where regulatory scrutiny is applied unevenly across domestic and foreign AI undertakings and can also touch upon issues of censorship and freedom of speech, under the umbrella of constitutional and human rights protections.
  • Expropriation: Both direct measures affecting foreign-owned AI infrastructure (e.g., outright seizure of local servers in host State territory) and indirect measures, i.e., where the investment is not seized but its value and economic viability are effectively destroyed, may also engage treaty protections relating to expropriation.

Conclusion

Stakeholders, including multinational AI undertakings and State entities, are advised to monitor developments closely as the global AI regulatory landscape continues to evolve. Arbitral developments in the technology sector, particularly cases involving national security concerns, will also shape how future tribunals address AI-related investment disputes.


Natalia Gracia Gómez (Stagiaire, White & Case, Paris) contributed to the development of this publication.

1 Jack Ballantyne, Huawei threatens Poland over cybersecurity law, Global Arbitration Review, 24 October 2025.
2 See Huawei Technologies Co., Ltd. v. Kingdom of Sweden, ICSID Case No. ARB/22/2, Request for Arbitration (7 January 2022).
3 Lisa Bohmer, China’s Wingtech reportedly submits notice of dispute in response to Dutch government’s intervention in semiconductor producer Nexperia, IAReporter, 23 December 2025.
4 Hristina Todorovic, Chinese surveillance camera manufacturer puts Canada on notice of treaty dispute, IA Reporter, 26 September 2025.
5 Hristina Todorovic, South Korea is served with additional notice of intent under KORUS over data breach measures, IA Reporter, 12 February 2026.
6 See also Global Telecom Holding S.A.E. v. Canada, ICSID Case No. ARB/16/16, Award, 27 March 2020 (where the tribunal showed significant deference to Canada’s national security decision under the Investment Canada Act prohibiting GTH, as a non-Canadian entity, from converting its non-voting shares into voting shares in its joint venture with a local partner).
7 Privacy International, Tribunal Confirms Clearview AI Bound by GDPR, 13 October 2025; GDPR, Article 9; Théodore Christakis & Pankaj Raj, DeepSeek One Year Later: Regulatory Storm, Global Surge, AI-Regulation, 29 January 2026.
8 See Code of Federal Regulations (C.F.R.), Title 15, § 744.23.
9 See F. Paul Pittman, Hope Anderson, David H. Lim, Yuhan Wang, DOJ issues final rule prohibiting and restricting transfers of bulk sensitive personal data, 27 February 2025; Jiang Liu, F. Paul Pittman, Hope Anderson, David H. Lim, Yuhan Wang, DOJ issues guidance on bulk sensitive data rules, 25 April 2025.
10 See Farhad Jalinous, Laura Black, Ryan Brady, David Jividen, Timothy Sensenig, Wes Hutson, Grace Hochstatter, NDAA for FY 2026 and the impending changes to the US Outbound Investment Security Program, 2 February 2026.
11 Marie Lamensch, Chinese AI Models and the High-Stakes Fight for AI Neutrality, Centre for International Governance Innovation, 14 January 2026; Sara Harrison, Study finds perceived political bias in popular AI models, Stanford Report, 21 May 2025.
12 See Anthropic PBC v. U.S. Department of War, No. 26-1208 (D.C. Cir.), 8 April 2026; UN Secretary-General Report A/80/78, Artificial intelligence in the military domain and its implications for international peace and security, June 2025; European Commission, Artificial Intelligence (AI) in Defence, December 2025.
13 See Andrea Shalal, Jeffrey Dastin & Ryan Patrick Jones, Trump directs US agencies to toss Anthropic’s AI as Pentagon calls startup a supply risk, Reuters, 27 February 2026.


White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.

This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.

© 2026 White & Case LLP
