U.S. Department of the Treasury Releases Report on Managing AI Risks in Financial Sector
Today, the U.S. Department of the Treasury released a report on managing the artificial intelligence (AI)-specific cybersecurity risks facing the financial services sector. The report responds to Presidential Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
The report was prepared by the Treasury's Office of Cybersecurity and Critical Infrastructure Protection (OCCIP) under the direction of Under Secretary for Domestic Finance Nellie Liang. OCCIP, which carries out the Treasury Department's Sector Risk Management Agency responsibilities for the financial services sector, focused the document on AI-related operational risk, cybersecurity, and fraud challenges.
Liang emphasized the transformative role AI is playing in reshaping cybersecurity and fraud prevention in financial services, affirming the Biden Administration's commitment to harnessing emerging technologies while safeguarding against operational disruptions and threats to financial stability. She highlighted the report as a pivotal step in fostering a secure environment for financial institutions to leverage AI technologies effectively.
The report delineates several key areas of focus and recommendations to fortify the sector's resilience against AI-specific risks:
1. Addressing Capability Disparities: The report underscores the widening gap between large and small financial institutions in developing in-house AI systems. While larger entities possess the resources for AI development, smaller institutions face hurdles due to limited data access. Institutions that have already migrated to the cloud also hold an advantage in deploying AI systems, further widening the divide.
2. Bridging Data Gaps in Fraud Prevention: A significant challenge lies in the scarcity of data for training AI models, particularly in fraud prevention. The report highlights the disparity in data availability among firms, favoring larger institutions with extensive historical data. Smaller entities, lacking comparable resources and expertise, confront obstacles in building anti-fraud AI models.
3. Regulatory Coordination: Collaboration between financial institutions and regulators is crucial in navigating evolving AI landscapes. However, concerns persist regarding regulatory fragmentation across state, federal, and international jurisdictions. The report advocates for streamlined regulatory frameworks to foster effective oversight amidst rapid technological advancements.
4. Expanding AI Risk Management Framework: The report proposes the expansion of the National Institute of Standards and Technology (NIST) AI Risk Management Framework to encompass tailored guidelines pertinent to the financial services sector, enhancing governance and risk mitigation strategies.
5. Enhancing Data Supply Chain Mapping and Transparency: Emphasizing the importance of monitoring data supply chains, the report advocates for the development of best practices and standardized "nutrition labels" akin to food labeling, describing what data an AI model was trained on, where it came from, and how it is used (an illustrative sketch of such a label follows this list).
6. Promoting Explainability in AI Solutions: Transparency remains a critical challenge in understanding black-box AI systems. The report underscores the need for research and development on explainability solutions, alongside the adoption of best practices for using opaque AI systems (a brief illustration of one such technique also appears after this list).
7. Addressing Human Capital Gaps: The rapid evolution of AI underscores the urgent need for skilled professionals adept in AI development and risk management. The report recommends best practices and role-specific training to bridge the talent gap across various disciplines within financial institutions.
8. Establishing a Common AI Lexicon: Standardizing AI terminology is imperative for effective communication and regulatory clarity across stakeholders within the financial sector.
9. Advancing Digital Identity Solutions: The report advocates for cohesive standards in digital identity solutions to bolster cybersecurity measures while combatting fraud.
10. International Collaboration: Recognizing the global nature of AI risks, the report emphasizes ongoing engagement with international partners to foster cohesive regulatory approaches and risk mitigation strategies.
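The "nutrition label" concept in the data supply chain recommendation is easiest to picture as a concrete artifact published alongside a dataset or vendor-supplied model. The report does not prescribe a schema, so the following Python sketch is purely illustrative: the class name and every field in it are hypothetical stand-ins for the kind of metadata such a label might record.

```python
from dataclasses import dataclass, asdict, field
import json

# Hypothetical sketch of a data "nutrition label" for an AI training dataset.
# The field names are illustrative only; the Treasury report does not define a schema.
@dataclass
class DataNutritionLabel:
    dataset_name: str
    provider: str
    collection_period: str                                # e.g. "2019-2023"
    sources: list[str] = field(default_factory=list)      # where the data originated
    intended_use: str = ""                                # what the data may be used for
    contains_pii: bool = False                            # whether personal data is present
    known_gaps: list[str] = field(default_factory=list)   # known coverage or quality gaps

label = DataNutritionLabel(
    dataset_name="retail-fraud-transactions",
    provider="Example Data Vendor",
    collection_period="2019-2023",
    sources=["card-network feeds", "merchant chargeback reports"],
    intended_use="training fraud-detection models only",
    contains_pii=True,
    known_gaps=["limited coverage of smaller community institutions"],
)

# Publishing the label as machine-readable JSON lets institutions audit
# what data an AI system was trained on and how that data may be used.
print(json.dumps(asdict(label), indent=2))
```

A standardized, machine-readable label of this kind would give smaller institutions and their regulators a consistent way to trace the data supply chain behind vendor AI products.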
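Similarly, for readers less familiar with what "explainability solutions" can look like in practice, one common model-agnostic technique is permutation importance: shuffle one input feature at a time and measure how much a black-box model's performance degrades. The sketch below is a minimal, self-contained illustration using synthetic data and a stand-in model; the choice of technique and all names in it are the author's own, not the report's.

```python
import numpy as np

# Illustrative only: permutation importance is one widely used, model-agnostic
# explainability technique. The "opaque_model" below stands in for any
# black-box scoring function whose internals cannot be inspected directly.

rng = np.random.default_rng(0)

# Synthetic transaction features: amount, hour of day, account age (days)
X = rng.normal(size=(1000, 3))
# Synthetic fraud labels that depend mostly on the first feature
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 1.0).astype(int)

def opaque_model(features):
    """Stand-in for a trained black-box fraud model's predictions."""
    return (features[:, 0] > 1.0).astype(int)

def accuracy(pred, truth):
    return float((pred == truth).mean())

baseline = accuracy(opaque_model(X), y)

# Shuffle one feature at a time; a large drop in accuracy means the
# model relies heavily on that feature.
for name, col in [("amount", 0), ("hour_of_day", 1), ("account_age", 2)]:
    X_perm = X.copy()
    X_perm[:, col] = rng.permutation(X_perm[:, col])
    drop = baseline - accuracy(opaque_model(X_perm), y)
    print(f"{name}: importance = {drop:.3f}")
```

Techniques of this kind do not open the black box, but they give risk and compliance teams a defensible account of which inputs drive a model's decisions.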
Informed by in-depth interviews with 42 financial services and technology companies, the report provides a comprehensive overview of current AI use cases, best practices, and recommendations without imposing mandates on AI adoption.
Looking ahead, the Treasury commits to collaborative efforts with private sector entities, federal agencies, regulatory bodies, and international partners to address AI-related challenges comprehensively. While the report primarily focuses on operational risk, cybersecurity, and fraud, Treasury remains dedicated to exploring broader implications of AI, including its impact on consumers and marginalized communities.
The release of this report marks an important milestone in safeguarding the financial sector against AI-specific risks, signaling a proactive stance in navigating the evolving technological landscape while upholding principles of security and trustworthiness.