
Lessons from red teaming 100 generative AI products

Authored by: Microsoft AI Red Team

Authors
Blake Bullwinkel, Amanda Minnich, Shiven Chawla, Gary Lopez, Martin Pouliot, Whitney Maxwell, Joris de Gruyter, Katherine Pratt, Saphir Qi, Nina Chikanov, Roman Lutz, Raja Sekhar Rao Dheekonda, Bolor-Erdene Jagdagdorj, Eugenia Kim, Justin Song, Keegan Hines, Daniel Jones, Giorgio Severi, Richard Lundeen, Sam Vaughan, Victoria Westerhoff, Pete Bryan, Ram Shankar Siva Kumar, Yonatan Zunger, Chang Kawaguchi, Mark Russinovich

Table of contents
Abstract
Introduction
AI threat model ontology
Red teaming operations
Lesson 1: Understand what the system can do and where it is applied
Lesson 2: You don’t have to compute gradients to break an AI system
Case study #1: Jailbreaking a vision language model to generate hazardous content
Lesson 3: AI red teaming is not safety benchmarking
Case study #2: Assessing how an LLM could be used to automate scams
Lesson 4: Automation can help cover more of the risk landscape
Lesson 5: The human element of AI red teaming is crucial
Case study #3: Evaluating how a chatbot responds to a user in distress
Case study #4: Probing a text-to-image generator for gender bias
Lesson 6: Responsible AI harms are pervasive but difficult to measure
Lesson 7: LLMs amplify existing security risks and introduce new ones
Case study #5: SSRF in a video-processing GenAI application
Lesson 8: The work of securing AI systems will never be complete
Conclusion

Abstract
In recent years, AI red teaming has emerged as a practice for probing the safety and security of generative AI systems. Due to the nascency of the field, there are many open questions about how red teaming operations should be conducted. Based on our experience red teaming over 100 generative AI products at Microsoft, we present our internal threat model ontology and eight main lessons we have learned:
1. Understand what the system can do and where it is applied
2. You don’t have to compute gradients to break an AI system
3. AI red teaming is not safety benchmarking
4. Automation can help cover more of the risk landscape
5. The human element of AI red teaming is crucial
6. Responsible AI harms are pervasive but difficult to measure
7. Large language models (LLMs) amplify existing security risks and introduce new ones
8. The work of securing AI systems will never be complete
By sharing these insights alongside case studies from our operations, we offer practical recommendations aimed at aligning red teaming efforts with real-world risks. We also highlight aspects of AI red teaming that we believe are often misunderstood and discuss open questions for the field to consider.

Introduction
As generative AI (GenAI) systems are adopted across an increasing number of domains, AI red teaming has emerged as a central practice for assessing the safety and security of these systems.
