Eye on AI: How Can Humans Stay in the Loop?

AI has surpassed human performance in many areas once thought too complex for any computer to master. As AI evolves, it will continue to surpass human abilities, and a central question will be what degree of autonomy we give it and how much human oversight we require. China implemented its interim measures for generative AI this August, the EU released draft language of its regulation, the "AI Act," and in the US the White House recently secured voluntary commitments from leading AI companies to promote safety, security, and trust in AI. None of these provides much guidance on the degree of human involvement required or recommended to oversee potential issues with AI.

The Chinese law and the EU's draft legislation impose requirements on providers of AI services, a category that spans the full range of companies, from small firms to those creating foundation models or large language models (LLMs), such as OpenAI and Google. The White House's voluntary commitments are with a growing list of large companies and developers of foundation models and LLMs. As yet, though, the measures, legislation, and commitments do not specifically address the role of human involvement or oversight, or how much of it may be desirable for achieving the regulatory goals being crafted.

Even without specific regulatory requirements for human involvement in AI decision making, companies that deploy AI will need to ensure compliance with these regulatory schemes. And all the existing traditional law of contract, tort, unfair competition, and intellectual property will, of course, apply. Absent an assignment of liability by regulation, the courts will have to allocate liability among providers, developers, and users under these traditional legal regimes.

Some leading technology companies have already identified the need for a new type of position: the AI ethics officer. AI ethics teams will likely need to be active across a wide variety of areas, and each company will need to tailor its team to address the issues specific to its industry and AI use cases. Commonalities exist, however, such as how to acquire and handle training data, or what degree of oversight and review to apply to supervised versus unsupervised machine learning. These AI ethics officers will also have to consider what courts might deem reasonable measures to address foreseeable risks from AI use in particular cases.

Moreover, the AI regulatory and legal landscape is certain to continue evolving, and the technology typically advances faster than regulators and courts can keep up. It may therefore be important for the team to have not only a solid understanding of current regulations and existing legal regimes, but also a vision of how regulations and court precedents will change and how the organization may have to adapt. Even if countries seek to harmonize AI regulations, organizations may find they need to comply with additional sets of industry-specific regulations as AI continues to expand around the world.

White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.

This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.

© 2023 White & Case LLP
