Occasional Paper No 24

Managing explanations: how regulators can address AI explainability

by Fernando Pérez-Cruz, Jermy Prenio, Fernando Restoy and Jeffery Yong

September 2025

JEL classification: C60, G29, G38, O30

Keywords: artificial intelligence, machine learning, model risk management, risk governance

FSI Occasional Papers aim to contribute to international discussions on a wide range of topics of relevance to the financial industry and its regulation and supervision. The views expressed in them are solely those of the authors and do not necessarily reflect those of the BIS or the Basel-based standard-setting bodies.

This publication is available on the BIS website (www.bis.org). To contact the BIS Global Media and Public Relations team, please email media@bis.org. You can sign up for email alerts at www.bis.org/emailalerts.htm.

© Bank for International Settlements 2025. All rights reserved. Brief excerpts may be reproduced or translated provided the source is stated.

ISSN 1020-9999 (online)

Abstract

The increasing adoption of artificial intelligence (AI) by financial institutions is transforming their operations, risk management and customer interactions. Nevertheless, the limited explainability of complex AI models, particularly when used in critical business applications, poses significant challenges for financial institutions and regulators. Explainability, or the extent to which a model's output can be explained to a human, is essential for transparency, accountability, regulatory compliance and consumer trust. Yet complex AI models, such as deep learning and large language models (LLMs), are often difficult to explain. While existing explainability techniques can help shed light on complex AI models' behaviour, these techniques have notable limitations, including inaccuracy, instability and susceptibility to misleading explanations.

Limited model explainability makes managing model risks challenging. Global standard-setting bodies have issued model risk management (MRM) requirements, but these are mostly high-level. Only a few national financial authorities have issued specific guidance, and they tend to focus on models used for regulatory purposes. Many of these existing guidelines may not have been developed with advanced AI models in mind and do not explicitly mention the concept of model explainability. Rather, the concept is implicit in the provisions relating to governance, model development, documentation, validation, deployment, monitoring and independent review. It would be challenging for complex AI models to comply with these provisions, and the use of third-party AI models would exacerbate these challenges. As financial institutions expand their use of AI models to their critical business areas, it is imperative that financial authorities seek to foster sound MRM practices that are relevant in the context of AI. Ultimately, there may b