AI Takes Off in Sweden’s Financial Sector, but Risk Management Lags
AI is reshaping Sweden’s financial industry at a breakneck pace. From streamlining operations to analyzing mountains of data, generative AI tools like ChatGPT, Copilot, and Gemini are becoming indispensable for firms looking to gain an edge. But while the technology is rapidly becoming a fixture, the frameworks to manage the risks it brings are lagging dangerously behind.
That’s the key takeaway from a recent report by Finansinspektionen (FI), Sweden’s financial watchdog, which found that while AI is widely used, only a minority of firms have laid the groundwork to manage its risks effectively.
A Tale of Two Trends: Growing AI Use, Sparse Policies
FI’s Innovation Center surveyed 234 financial firms this fall to gauge their use of AI and assess how prepared they are for the challenges it poses. The results? A striking 84% of firms said their employees are already using generative AI tools. Yet, only 22% have AI systems in production or development, and another 46% are in experimental or pilot stages.
While the sector is clearly intrigued by AI’s potential, policies governing its use remain an afterthought. Among firms with AI systems in production, just 41% have formal policies in place. And when it comes to employee use of public AI tools, guidelines are even scarcer.
Marie Jesperson, who heads FI’s Innovation Center, made clear that this gap cannot persist for long:
“We place high demands on the financial sector to know what risks could arise as the use of AI begins to increase. Financial firms need to ensure that they have the competence to understand and manage the risks.”
Regulation Is Coming—Are Firms Ready?
Complicating matters further is the EU’s AI Regulation, which took effect in August 2024. While most provisions won’t kick in until 2026, the regulation already sets the tone: AI systems will face a risk-based classification, from banned “unacceptable risk” systems to highly regulated “high-risk” ones.
The stakes are high, and firms seem to know it. FI’s survey found that 91% of companies using AI have either begun preparing for the regulation or plan to do so soon. But aligning with the new rules won’t just be a matter of checking boxes—it’ll require a fundamental shift in how firms approach AI development and deployment.
What’s at Stake: Opportunities, Vulnerabilities, & Trust
The financial sector has much to gain from AI, but the risks of adopting it without proper safeguards are just as significant. Vulnerabilities in AI systems could expose firms to compliance breaches, operational failures, and reputational damage—potentially undoing the very benefits they’re chasing.
Jesperson emphasized the importance of striking the right balance:
“AI introduces considerable opportunities for the industry to streamline its work. Firms now need to establish policies and procedures to ensure that the systems do not become vulnerable in conjunction with this.”
The message from FI is clear: enthusiasm for AI must be matched by accountability. Firms that fail to act risk being blindsided by the very technology they’re embracing. With the AI Regulation looming and the technology evolving faster than ever, the financial sector has little room for complacency.
Yes, AI can revolutionize the industry. But without the right policies, expertise, and foresight, it’s a revolution that could just as easily unravel. As Jesperson and FI have made plain, the time to act is now—and the cost of inaction is only growing.
The GRC Report is your premier destination for the latest in governance, risk, and compliance news. As your reliable source for comprehensive coverage, we ensure you stay informed and ready to navigate the dynamic landscape of GRC. Beyond being a news source, the GRC Report represents a thriving community of professionals who, like you, are dedicated to GRC excellence. Explore our insightful articles and breaking news, and actively participate in the conversation to enhance your GRC journey.