Financial services find rewards—and risks—in artificial intelligence
To ride the wave of AI, financial services companies will have to navigate evolving standards, regulations and risk dynamics—particularly regarding data rights, algorithmic accountability and cybersecurity
Well positioned to leverage artificial intelligence (AI) technology, financial services institutions have already begun to incorporate AI in parts of their business, such as algorithmic trading. But to best integrate AI into their operations, the industry will have to address three core questions:
Who owns the data?
Consumers, technology companies, third-party data providers and regulators are all stakeholders with complicated and competing interests in who owns the data, what it's worth, how it can rightfully be used and for what purpose.
Who's responsible for AI decisions and actions?
As AI algorithms gain the ability to act independently, it becomes harder to assign responsibility to humans for the decisions and actions AI takes.
What are AI's implications for cybersecurity?
AI can open new vulnerabilities, particularly when it depends on interfaces within and across organizations that inadvertently create entry points for malicious actors.
To stay focused amid the welter of activity in this space, companies should follow three broad guidelines:
- Set out clear principles and document strategies and processes. Companies that articulate their objectives and show that they have made a concerted effort to comply with regulations and respect consumers can put themselves in a position of strength.
- Manage technology with technology. As data flows continue their exponential growth, automated tools will become increasingly important for managing technological complexity.
- Keep people front and center. Humans must oversee the machines they deploy to ensure that the choices algorithms make are sensible and align with social principles and regulatory rules.