Introduction
In May 2025, the Banque Centrale du Luxembourg (BCL) and the Commission de Surveillance du Secteur Financier (CSSF) published their second thematic review on the use of artificial intelligence in Luxembourg’s financial sector. Its scope was significantly expanded compared with the 2023 review, making it the most comprehensive regulatory snapshot to date. While the report offers valuable empirical insight into adoption trends, technological maturity, and regulatory preparedness, it leaves open important governance questions, particularly regarding board-level oversight.
This paper offers expert reflections on the report’s findings and outlines what we believe to be the key implications for independent directors serving on the boards of financial institutions in Luxembourg.
1. Board Oversight of AI: CSSF/BCL Findings
The CSSF and BCL observe that governance of AI remains heavily concentrated at the group level. According to the report:
- Only 24 percent of responding institutions had a digital strategy formally approved at the Luxembourg board level.
- Most AI investments were executed at the group level, with only limited engagement from local boards.
- AI activities are increasingly driven by centralized data science teams situated within group structures.
These findings indicate that many Luxembourg-based boards are not yet systematically involved in decisions related to the adoption, management, or oversight of AI within their legal entities.
Based on these observations, we believe that independent directors should actively elevate AI oversight to a formal board-level matter. While group-level capabilities can and should be leveraged, the Luxembourg legal entity board retains fiduciary and regulatory responsibility under local law.
We would recommend that independent directors:
- Require that the AI strategy applicable to the Luxembourg entity be formally reviewed, discussed, and approved at the Luxembourg board level.
- Ensure that AI adoption is explicitly integrated into the institution’s business plan, risk appetite, and operational governance.
- Document board-level discussions and decisions related to AI oversight, as evidence of sound governance practices for supervisors and stakeholders.
2. Governance Frameworks: Findings and Our Proposed Enhancements
The CSSF/BCL review highlights that:
- 43 percent of respondents have adopted a formal policy addressing AI.
- 60 percent of entities that allow employee access to generative AI tools (such as large language models) do not have any specific policy governing such usage.
Based on these findings, we would recommend that independent directors request management to:
- Develop a comprehensive AI governance framework addressing the full model lifecycle: design, validation, deployment, monitoring, update, and decommissioning (an illustrative lifecycle sketch follows this list).
- Incorporate controls addressing data privacy, confidentiality, explainability, security, ethical risks, vendor reliance, and employee usage.
- Establish formal incident management and escalation protocols specific to AI failures, bias events, or ethical breaches.
- Pay special attention to “shadow AI”, that is, unauthorized use of generative AI tools by staff.
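To make the lifecycle recommendation above more concrete, the sketch below shows what a minimal lifecycle register could look like. It is purely illustrative: the stage names mirror the lifecycle phases listed above, while the field names, roles, and sign-off rule are our own assumptions rather than anything prescribed by the CSSF/BCL or the AI Act.

```python
# Illustrative sketch only: a minimal, hypothetical lifecycle register for AI models.
# Stage names mirror the lifecycle phases discussed above; field names are assumptions.
from dataclasses import dataclass, field
from enum import Enum, auto


class LifecycleStage(Enum):
    DESIGN = auto()
    VALIDATION = auto()
    DEPLOYMENT = auto()
    MONITORING = auto()
    UPDATE = auto()
    DECOMMISSIONED = auto()


@dataclass
class ModelRecord:
    model_id: str
    owner: str                                    # accountable business owner
    stage: LifecycleStage = LifecycleStage.DESIGN
    signoffs: dict = field(default_factory=dict)  # stage -> approver name

    def advance(self, new_stage: LifecycleStage, approver: str) -> None:
        """Record an approver and move the model to the next lifecycle stage."""
        # Block deployment if no documented validation sign-off exists.
        if new_stage is LifecycleStage.DEPLOYMENT and LifecycleStage.VALIDATION not in self.signoffs:
            raise ValueError(f"{self.model_id}: validation sign-off required before deployment")
        self.signoffs[new_stage] = approver
        self.stage = new_stage


# Example usage with hypothetical names
record = ModelRecord(model_id="credit-scoring-v2", owner="Retail Credit")
record.advance(LifecycleStage.VALIDATION, approver="Model Risk")
record.advance(LifecycleStage.DEPLOYMENT, approver="CRO delegate")
```

The point of such a register is not the tooling itself but the audit trail it produces: each stage transition is tied to a named approver, which is precisely the kind of evidence supervisors expect.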
3. Regulatory Classification: CSSF/BCL Observations and Our Governance View
The CSSF/BCL report reveals significant inconsistency in how institutions classify AI use cases under the European Union (EU) Artificial Intelligence Act (AI Act):
- Only 5 percent of AI use cases were classified as “high risk”.
- Some use cases which the AI Act defines as high risk (e.g. credit scoring for natural persons) were not identified as such by respondents.
- Conversely, several use cases not listed as high risk in the AI Act were conservatively—but incorrectly—classified as high risk.
These inconsistencies suggest that many firms do not yet fully understand their regulatory obligations under the AI Act, whose requirements are being phased in, with most obligations for high-risk systems applying from August 2026.
In our expert view, independent directors should not assume management’s classifications are fully reliable without review. We would recommend that boards:
- Require a formal legal mapping of all AI use cases against the AI Act’s risk-based framework (an illustrative sketch of such a register follows this list).
- Request compliance and legal functions to validate all classifications.
- Establish clear internal procedures for periodic reassessment of classifications as models or regulations evolve.
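By way of illustration only, the sketch below shows how such a mapping might be recorded so that unvalidated or high-risk classifications surface automatically for board attention. The tier labels, example use cases, and dates are hypothetical; actual classification under the AI Act requires case-by-case legal and compliance analysis.

```python
# Illustrative sketch only: a simple register of AI use cases against proposed AI Act risk tiers.
# Tier labels and example classifications are placeholders; real classification requires
# legal and compliance validation for each use case.
from dataclasses import dataclass

RISK_TIERS = ("prohibited", "high", "limited", "minimal")


@dataclass
class UseCase:
    name: str
    proposed_tier: str       # one of RISK_TIERS
    legally_validated: bool  # has legal/compliance confirmed the tier?
    last_reviewed: str       # ISO date of last reassessment


register = [
    UseCase("Credit scoring for natural persons", "high",    legally_validated=True,  last_reviewed="2025-05-01"),
    UseCase("Customer-facing chatbot",            "limited",  legally_validated=False, last_reviewed="2025-03-15"),
    UseCase("Internal document search",           "minimal",  legally_validated=False, last_reviewed="2024-11-30"),
]

for u in register:
    assert u.proposed_tier in RISK_TIERS, f"unknown tier for {u.name}"

# Board-level escalation view: anything high risk or not yet validated by legal/compliance.
escalations = [u for u in register if u.proposed_tier == "high" or not u.legally_validated]
for u in escalations:
    print(f"ESCALATE: {u.name} (tier={u.proposed_tier}, validated={u.legally_validated})")
```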
4. Local Expertise and Group Dependencies: CSSF/BCL Findings and Our View
The CSSF/BCL report indicates that:
- 63 percent of institutions using AI rely on dedicated data science teams.
- 55 percent of these teams operate at group level; only 3 percent are located exclusively at the Luxembourg entity level.
This growing concentration of technical expertise outside the regulated entity risks creating knowledge asymmetries between local boards and the complex AI systems deployed within their legal perimeter.
We would recommend that independent directors:
- Require full transparency into all group-developed AI models deployed at the Luxembourg entity.
- Request periodic reporting at the board level on model ownership, validation results, deployment status, and monitoring outcomes.
- Consider whether local technical capabilities—either internal or through external advisory resources—are sufficient to enable proper board-level oversight.
5. Human Oversight of AI Decisions: CSSF/BCL Findings and Emerging Governance Standards
The CSSF/BCL report shows that 90 percent of AI use cases remain under human oversight. Nevertheless, certain high-stakes use cases—such as credit scoring—are beginning to operate autonomously.
Given this shift, we believe that independent directors should proactively oversee the institution’s approach to human oversight as automation intensifies. Specifically, we would recommend that boards:
- Require policies that clearly delineate which decision types may be fully automated versus those requiring mandatory human validation (a minimal routing sketch follows this list).
- Insist that any transition toward autonomous decision-making be reviewed at board level and, where appropriate, discussed with regulators.
- Monitor ongoing developments under the AI Act, which will impose specific obligations on human oversight from August 2026.
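As a purely illustrative sketch, the snippet below shows the kind of routing rule such a policy could translate into once approved: decisions on a board-approved "human review" list are never automated, and anything unclassified defaults to human validation. The decision types and confidence threshold are our own assumptions, not regulatory requirements.

```python
# Illustrative sketch only: a routing rule separating decisions that may be automated from
# those requiring mandatory human validation. Decision types and thresholds are assumptions.

# Hypothetical policy tables approved at board level.
REQUIRES_HUMAN_REVIEW = {"credit_decision", "account_closure", "fraud_block"}
FULLY_AUTOMATABLE = {"marketing_segmentation", "document_routing"}


def route_decision(decision_type: str, model_confidence: float) -> str:
    """Return 'auto' only when policy allows it and the model is sufficiently confident."""
    if decision_type in REQUIRES_HUMAN_REVIEW:
        return "human_review"
    if decision_type in FULLY_AUTOMATABLE and model_confidence >= 0.90:
        return "auto"
    # Anything unclassified or low-confidence defaults to human review.
    return "human_review"


print(route_decision("credit_decision", 0.97))    # human_review
print(route_decision("document_routing", 0.95))   # auto
print(route_decision("document_routing", 0.70))   # human_review
```

The key design choice for boards to probe is the default: in this sketch, automation is the exception that must be explicitly permitted, not the rule.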
6. Auditability, Explainability, and Monitoring: CSSF/BCL Findings and Our Recommendations
The CSSF/BCL reports that:
- 56 percent of use cases report good or very good auditability.
- 54 percent report good or very good explainability.
- Model performance monitoring is in place for 56 percent of use cases; however, monitoring is notably weaker for generative AI models.
In light of these findings, we would recommend that boards:
- Request a full inventory of AI models, including explainability assessments, auditability scoring, and monitoring protocols (an illustrative inventory sketch follows this list).
- Mandate periodic internal audit reviews of AI governance processes and controls.
- Escalate for board discussion any models classified as having low explainability or limited auditability.
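As an illustration of the inventory and escalation points above, the sketch below records a few hypothetical models with explainability, auditability, and monitoring attributes, and flags those that would warrant board discussion. The rating scales, field names, and model names are assumptions, not CSSF/BCL terminology.

```python
# Illustrative sketch only: a minimal model inventory with explainability, auditability,
# and monitoring attributes, plus the escalation filter recommended above.
inventory = [
    {"model_id": "credit-scoring-v2", "explainability": "high", "auditability": "good",
     "monitored": True},
    {"model_id": "genai-assistant", "explainability": "low", "auditability": "limited",
     "monitored": False},
]

# Escalate for board discussion: low explainability, limited auditability, or no monitoring.
for m in inventory:
    if m["explainability"] == "low" or m["auditability"] == "limited" or not m["monitored"]:
        print(f"Board escalation: {m['model_id']}")
```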
7. Ethical Risks, Security Threats, and Bias Management: CSSF/BCL Findings and Board Responsibilities
The CSSF/BCL report finds that:
- 54 percent of respondents have implemented AI-specific security measures addressing threats such as adversarial attacks and model poisoning.
- Only 45 percent have implemented bias prevention or detection across applicable use cases.
- Many institutions place significant reliance on model providers for managing bias, particularly in large language models.
We believe that ethical risks and security exposures arising from AI demand board-level attention. Independent directors should, in our view:
- Require comprehensive ethical risk assessments, including bias testing, across the full AI model inventory (a minimal bias-check sketch follows this list).
- Ensure that AI-specific security threats are addressed within the institution’s cybersecurity framework.
- Clarify contractual allocation of responsibility for bias detection and prevention between the institution and third-party model vendors.
- Confirm that compliance, risk, legal, data protection, and information security functions are directly involved in AI oversight structures.
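To illustrate what minimal bias testing can look like in practice, the sketch below applies one common check, the disparate impact (or "four-fifths") ratio, comparing approval rates across two groups. The data, group labels, and 0.8 threshold are illustrative assumptions; meaningful bias testing requires legal, statistical, and data protection input.

```python
# Illustrative sketch only: a disparate impact check comparing approval rates across groups.
# Data, group labels, and the 0.8 threshold are illustrative assumptions.

def approval_rate(decisions: list[bool]) -> float:
    """Share of positive outcomes in a list of decisions."""
    return sum(decisions) / len(decisions)

group_a = [True, True, False, True, True, True, False, True]     # reference group outcomes
group_b = [True, False, False, True, False, True, False, False]  # protected group outcomes

ratio = approval_rate(group_b) / approval_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias flag: escalate to the ethics / model risk committee")
```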
8. Vendor Dependencies and Third-Party Concentration: CSSF/BCL Observations and Our Governance Position
The CSSF/BCL finds that:
- 75 percent of generative AI use cases rely on commercial models.
- 38 percent of machine learning use cases involve significant reliance on third-party vendors for model development or data preparation.
Such vendor concentration poses systemic, operational, and regulatory risks. In our expert opinion, independent directors should:
- Request periodic third-party risk assessments covering key AI service providers.
- Ensure contracts with vendors address audit rights, intellectual property, data protection, model governance, versioning control, and service continuity.
- Require that management maintain contingency plans in case external AI providers become unavailable or non-compliant.
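Purely as an illustration of the contingency-planning point above, the sketch below shows a simple provider-fallback pattern: if the primary external model endpoint is unavailable, the call is logged as an incident and routed to an approved backup. The provider functions are hypothetical placeholders and do not reference any real vendor API.

```python
# Illustrative sketch only: a simple provider-fallback pattern of the kind a contingency
# plan might require. Provider functions are hypothetical placeholders, not real vendor APIs.

def call_primary_provider(prompt: str) -> str:
    # Placeholder for the institution's primary (external) model endpoint.
    raise ConnectionError("primary provider unavailable")

def call_backup_provider(prompt: str) -> str:
    # Placeholder for an approved backup: a second vendor or an internally hosted model.
    return f"[backup model] response to: {prompt}"

def resilient_completion(prompt: str) -> str:
    """Try the primary provider; fall back to the approved backup and log the incident."""
    try:
        return call_primary_provider(prompt)
    except ConnectionError:
        print("Incident logged: primary AI provider unavailable, switching to backup")
        return call_backup_provider(prompt)

print(resilient_completion("Summarise today's exception report"))
```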
Summary: Recommended Board-Level Oversight Agenda for AI
| Key Domain | Recommended Board-Level Action |
| --- | --- |
| AI Strategy | Require Luxembourg board approval of the AI strategy and its integration into business plans and risk appetite statements. |
| Governance Framework | Mandate comprehensive AI governance policies addressing the full model lifecycle and emerging ethical risks. |
| Regulatory Classification | Oversee validated mapping of all AI models against the EU AI Act risk classifications. |
| Model Inventory | Maintain full board visibility on model ownership, validation results, explainability, and monitoring status. |
| Auditability and Internal Audit | Instruct internal audit to periodically review AI governance controls and model risk management. |
| Ethical, Security, and Bias Controls | Oversee institution-wide ethical AI frameworks, security defenses, and bias detection capabilities. |
| Vendor Management | Direct third-party AI vendor risk assessments, contract governance, and contingency planning. |
Conclusion
The CSSF/BCL thematic review confirms that AI adoption is advancing rapidly within Luxembourg’s financial sector, but governance maturity remains uneven. In our expert view, independent directors must now treat AI as a core part of their fiduciary oversight role, not a peripheral technology issue.
While the CSSF/BCL report highlights areas of current practice, the supervisory expectations are clearly evolving. Boards that proactively strengthen their AI oversight — across strategy, risk, compliance, legal, and ethics — will not only reduce future supervisory risk, but also position themselves as credible stewards of responsible AI deployment as the EU AI Act comes into full application.
At Fund Guardian, my colleagues Dr. Angelina Pramova, CESGA®, Guillem Liarte, and I support firms in executing their AI and oversight strategies, offering tools, analytics, and expertise to accelerate implementation, reduce risk, and build long-term governance capability. Contact us here.
The full report can be accessed here.