Insight
2017 Annual Review

Algorithms and bias: Outcomes may matter as much as intentions

New interpretations of the law could increase the risks of letting artificial intelligence make decisions in areas where liability for unfair practices exists

For decades, financial services companies have used algorithms to trade securities, predict financial markets, identify prospective employees and assess potential customers and borrowers.

Newer algorithms incorporating artificial intelligence (AI) seek to avoid the failures of the rigid, instructions-based models of the past (such as those linked to the 2010 “Flash Crash”) because they can “learn” and operate independently. But the use of AI also increases the potential for a program to act in ways its developer never intended, exposing its owner to legal, financial and reputational risk. And recently, that risk has grown.

Unfair lending claims historically required plaintiffs to prove that an institution intentionally treated a protected class of individuals less favorably than others. But recently the government and other plaintiffs have advanced “disparate impact” claims that focus on the effect, not the intention, of lending policies.

In Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc., a nonprofit organization sued the Texas agency that allocates federal low-income housing tax credits for allegedly perpetuating segregated housing patterns by allocating too few credits to housing in suburban neighborhoods relative to inner-city neighborhoods. The US Supreme Court held that a disparate impact theory of liability was available for claims under the Fair Housing Act (FHA), stating that plaintiffs need only show that a policy had a discriminatory impact on a protected class, and not that the discrimination was intentional.

The Court also imposed safeguards designed to protect defendants from being held liable for discriminatory effects that they themselves did not create, intentionally or otherwise.

To the extent that disparate impact claims remain prevalent, financial services companies may need to keep a closer watch on the actions and decisions of their algorithms, to ensure that the programs do not apply discriminatory policies the company never intended, policies that could expose it to both financial and reputational damage.
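
As one hedged illustration of what such monitoring might look like in practice, the sketch below computes a commonly used adverse impact (“four-fifths”) ratio over a hypothetical log of lending decisions, comparing each group’s approval rate with that of the most-favored group. The column names, the 0.8 threshold and the data are assumptions made for illustration only; they are not drawn from the cases discussed above, and a ratio below the threshold would be a prompt for review, not proof of liability.

```python
# Minimal sketch of an adverse impact ("four-fifths") check on a decision log.
# All names, thresholds and data below are illustrative assumptions.

from collections import defaultdict

def adverse_impact_ratios(decisions, group_key="group", approved_key="approved"):
    """Return each group's approval rate divided by the highest group's rate.

    `decisions` is an iterable of dicts such as {"group": "A", "approved": True}.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for record in decisions:
        group = record[group_key]
        totals[group] += 1
        if record[approved_key]:
            approvals[group] += 1

    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical decision log from a lending model: 60% approvals for
    # group A, 35% for group B.
    log = (
        [{"group": "A", "approved": True}] * 60
        + [{"group": "A", "approved": False}] * 40
        + [{"group": "B", "approved": True}] * 35
        + [{"group": "B", "approved": False}] * 65
    )
    for group, ratio in adverse_impact_ratios(log).items():
        flag = "  <- below 0.8, review for possible disparate impact" if ratio < 0.8 else ""
        print(f"group {group}: ratio {ratio:.2f}{flag}")
```

In practice, a check of this kind would be run continuously against production decision logs, alongside whatever business-justification analysis the Court’s safeguards contemplate.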
