FCA and Federal Reserve Express Concerns Over AI in Financial Services

Regulators at the Financial Conduct Authority (FCA) in the UK and the Board of Governors of the Federal Reserve System (Federal Reserve Board) in the US have raised alarm over the use of artificial intelligence (AI) in the financial services sector. This shared concern comes amid an increasingly collaborative global effort to address consumer protection risks in the adoption of AI technology.

In the wake of a joint statement by US Consumer Financial Protection Bureau (CFPB) Director Rohit Chopra and European Commission Commissioner for Justice and Consumer Protection Didier Reynders on July 17, 2023, senior officials from the FCA and the Federal Reserve Board have publicly articulated their apprehensions regarding AI deployment within financial services. This ongoing communication reflects a commitment to international cooperation in mitigating shared consumer protection risks.

Key Concerns Highlighted by the FCA

In a speech delivered in London, Nikhil Rathi, Chief Executive of the FCA, emphasized concerns about the use of AI and the significant role Big Tech companies play in controlling access to financial data. Rathi's remarks specifically addressed the operational risks Big Tech poses in payments, retail services, and financial infrastructure. He also warned of the potential for AI systems to introduce biases that affect consumer behavior. While acknowledging the potential benefits of partnering with Big Tech, Rathi expressed caution about the role of "critical third parties" and their access to sensitive data, such as biometrics and social media. In light of these concerns, Rathi announced the FCA's intention to regulate critical third parties, in collaboration with the Bank of England and the Prudential Regulation Authority, to ensure security and resilience.

Rathi further explored the complex trade-offs and risks that AI introduces in financial markets. He highlighted the risk that AI deployment could undermine market integrity, pricing, transparency, and fairness. He also pointed to increasing volatility in trading across markets, particularly in connection with fraud, cyberattacks, and identity fraud. While AI may offer solutions to these challenges, there is apprehension that it could also exacerbate existing problems. Addressing the need for transparency, Rathi stressed the importance of explainable AI models and the mitigation of data bias. At the same time, he emphasized AI's potential benefits in enhancing financial models, providing accurate information, personalizing services, and combating fraud and money laundering.

Rathi called for a coordinated global approach to AI regulation that encourages innovation while upholding trust and confidence in financial services. He highlighted the FCA's "Digital Sandbox," a platform that allows fintech companies to test new technologies using real transaction data and synthetic data. The Consumer Duty, effective July 31, 2023, requires firms to design products and services that deliver good consumer outcomes. The Senior Managers and Certification Regime (SM&CR) holds senior managers accountable for AI-related activities, and there have been calls for a bespoke SM&CR program for senior individuals managing AI systems, recognizing their increasing role in decision-making.

US Concerns Regarding AI in Mortgage Underwriting

Michael Barr, the Federal Reserve Board's Vice Chair for Supervision, echoed similar concerns in the US, particularly regarding AI's impact on mortgage origination and underwriting. Barr highlighted the potential for AI to perpetuate discriminatory practices and violate fair housing and equal credit opportunity laws. He cautioned against reliance on flawed or incomplete data, which could amplify existing biases. While acknowledging AI's potential to expand access to credit, Barr warned of the risk of steering underrepresented borrowers toward suboptimal financial products.

The alignment of regulatory concerns across the UK and US signals an ongoing commitment to international cooperation in addressing consumer protection risks in the AI-driven financial services landscape. As regulators intensify their focus on legal and compliance risks, companies deploying AI should prioritize bias mitigation, fair lending, and transparency in their models. The heightened scrutiny of AI's potential risks will raise the standard for AI deployment in the financial sector, compelling organizations to adopt comprehensive risk management strategies and to test and train their systems through resources such as digital sandboxes.