{"id":125899,"date":"2022-06-21T16:08:57","date_gmt":"2022-06-21T16:08:57","guid":{"rendered":"https:\/\/news.microsoft.com\/?p=446581"},"modified":"2022-06-21T16:08:57","modified_gmt":"2022-06-21T16:08:57","slug":"microsofts-framework-for-building-ai-systems-responsibly","status":"publish","type":"post","link":"https:\/\/sickgaming.net\/blog\/2022\/06\/21\/microsofts-framework-for-building-ai-systems-responsibly\/","title":{"rendered":"Microsoft\u2019s framework for building AI systems responsibly"},"content":{"rendered":"<p><a href=\"https:\/\/blogs.microsoft.com\/wp-content\/uploads\/prod\/sites\/5\/2022\/06\/1_Header.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-65274 size-large\" src=\"https:\/\/www.sickgaming.net\/blog\/wp-content\/uploads\/2022\/06\/microsofts-framework-for-building-ai-systems-responsibly.jpg\" alt=\"Responsible AI graphic\" width=\"995\" height=\"472\"><\/a><\/p>\n<p><span data-contrast=\"none\">Today we are sharing publicly <\/span><a href=\"https:\/\/blogs.microsoft.com\/wp-content\/uploads\/prod\/sites\/5\/2022\/06\/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf\" target=\"_blank\" rel=\"noopener\"><span data-contrast=\"none\">Microsoft\u2019s <\/span><span data-contrast=\"none\">Responsible AI Standard<\/span><\/a><span data-contrast=\"none\">, a framework to guide how we build AI systems<\/span><i><span data-contrast=\"none\">.<\/span><\/i><span data-contrast=\"none\"> It is an important step in our journey to develop better, more trustworthy AI. 
We are releasing our latest Responsible AI Standard to share what we have learned, invite feedback from others, and contribute to the discussion about building better norms and practices around AI.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><b><span data-contrast=\"auto\">Guiding product development towards more responsible outcomes<\/span><\/b><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\"><br \/><\/span><span data-contrast=\"auto\">AI systems are the product of many different decisions made by those who develop and deploy them. From system purpose to how people interact with AI systems, we need to proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.&nbsp;&nbsp;&nbsp;<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><span data-contrast=\"auto\">The Responsible AI Standard sets out our best thinking on <\/span><i><span data-contrast=\"auto\">how<\/span><\/i><span data-contrast=\"auto\"> we will build AI systems to uphold these values and earn society\u2019s trust. It <\/span><span data-contrast=\"none\">provides specific, actionable guidance for our teams that goes beyond the high-level principles that have dominated the AI landscape to date.&nbsp;<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><span data-contrast=\"none\">The Standard details concrete goals or outcomes that teams developing AI systems must strive to secure. 
These goals help break down a broad principle like \u2018accountability\u2019 into its key enablers, such as impact assessments, data governance, and human oversight. Each goal is then composed of a set of requirements, which are steps that teams must take to ensure that AI systems meet the goals throughout the system lifecycle. Finally, the Standard maps available tools and practices to specific requirements so that Microsoft\u2019s teams implementing it have resources to help them succeed.&nbsp;<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<figure id=\"attachment_65269\" aria-describedby=\"caption-attachment-65269\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.sickgaming.net\/blog\/wp-content\/uploads\/2022\/06\/microsofts-framework-for-building-ai-systems-responsibly-10.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-65269 size-large\" src=\"https:\/\/www.sickgaming.net\/blog\/wp-content\/uploads\/2022\/06\/microsofts-framework-for-building-ai-systems-responsibly-1.jpg\" alt=\"Core components of Microsoft\u2019s Responsible AI Standard graphic\" width=\"995\" height=\"560\"><\/a><figcaption id=\"caption-attachment-65269\" class=\"wp-caption-text\">The core components of Microsoft\u2019s Responsible AI Standard<\/figcaption><\/figure>\n<p><span data-contrast=\"auto\">The need for this type of practical guidance is growing. AI is becoming more and more a part of our lives, and yet, our laws are lagging behind. They have not caught up with AI\u2019s unique risks or society\u2019s needs. While we see signs that government action on AI is expanding, we also recognize our responsibility to act. 
We believe that we need to work towards ensuring AI systems are responsible <\/span><i><span data-contrast=\"auto\">by design<\/span><\/i><span data-contrast=\"auto\">.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><b><span data-contrast=\"none\">Refining our policy and learning from our product experiences<\/span><\/b><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\"><br \/><\/span><span data-contrast=\"none\">Over the course of a year, a multidisciplinary group of researchers, engineers, and policy experts crafted the second version of our Responsible AI Standard. It builds on <\/span><span data-contrast=\"auto\">our <\/span><a href=\"https:\/\/blogs.microsoft.com\/on-the-issues\/2021\/01\/19\/microsoft-responsible-ai-program\/\"><span data-contrast=\"none\">previous responsible AI efforts<\/span><\/a><span data-contrast=\"auto\">, <\/span><span data-contrast=\"none\">including the first version of the Standard that launched internally in the fall of 2019, as well as the latest research and some <\/span><span data-contrast=\"auto\">important lessons learned from our own product experiences.&nbsp;&nbsp;<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><i><span data-contrast=\"auto\">Fairness in Speech-to-Text Technology&nbsp;<\/span><\/i><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><span data-contrast=\"auto\">The potential of AI systems to exacerbate societal biases and inequities is one of the most widely recognized harms associated with these systems. 
In March 2020, an academic <\/span><a href=\"https:\/\/fairspeech.stanford.edu\/\" target=\"_blank\" rel=\"noopener\"><span data-contrast=\"none\">study<\/span><\/a><span data-contrast=\"auto\"> revealed that speech-to-text technology across the tech sector produced error rates for members of some Black and African American communities that were nearly double those for white users. We stepped back, considered the study\u2019s findings, and learned that our pre-release testing had not accounted satisfactorily for the rich diversity of speech across people with different backgrounds and from different regions. After the study was published, we engaged an expert sociolinguist to help us better understand this diversity and sought to expand our data collection efforts to narrow the performance gap in our speech-to-text technology. In the process, we found that we needed to grapple with challenging questions about how best to collect data from communities in a way that engages them appropriately and respectfully. We also learned the value of bringing experts into the process early, including to better understand factors that might account for variations in system performance.&nbsp;<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><span data-contrast=\"auto\">The Responsible AI Standard records the pattern we followed to improve our speech-to-text technology. 
As we continue to roll out the Standard across the company, we expect the Fairness Goals and Requirements identified in it will help us get ahead of potential fairness harms.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><i><span data-contrast=\"none\">Appropriate Use Controls for Custom Neural Voice and Facial Recognition<\/span><\/i><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><span data-contrast=\"none\">Azure AI\u2019s <\/span><a href=\"https:\/\/speech.microsoft.com\/customvoice\"><span data-contrast=\"none\">Custom Neural Voice<\/span><\/a><span data-contrast=\"none\"> is another innovative Microsoft speech technology that enables the creation of a synthetic voice that sounds nearly identical to the original source. AT&amp;T has brought this technology to life with an award-winning in-store <\/span><a href=\"https:\/\/blogs.microsoft.com\/ai-for-business\/custom-neural-voice-ga\/\"><span data-contrast=\"none\">Bugs Bunny<\/span><\/a> <span data-contrast=\"none\">experience, and<\/span> <a href=\"https:\/\/news.microsoft.com\/transform\/progressive-gives-voice-to-flos-chatbot-and-its-as-no-nonsense-and-reassuring-as-she-is\/\"><span data-contrast=\"none\">Progressive has brought Flo\u2019s voice<\/span><\/a> <span data-contrast=\"none\">to online customer interactions, among uses by many other customers<\/span><span data-contrast=\"none\">.<\/span><span data-contrast=\"none\"> This technology has exciting potential in education, accessibility, and entertainment, and yet it is also easy to imagine how it could be used to inappropriately impersonate speakers and deceive listeners.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><span data-contrast=\"none\">Our review of this technology through our Responsible AI 
program, including the Sensitive Uses review process required by the Responsible AI Standard, led us to adopt a layered control framework: we restricted customer access to the service, ensured acceptable use cases were proactively defined and communicated through a <\/span><a href=\"https:\/\/docs.microsoft.com\/en-us\/legal\/cognitive-services\/speech-service\/custom-neural-voice\/transparency-note-custom-neural-voice\"><span data-contrast=\"none\">Transparency Note<\/span><\/a><span data-contrast=\"none\"> and <\/span><a href=\"https:\/\/docs.microsoft.com\/en-us\/legal\/cognitive-services\/speech-service\/tts-code-of-conduct?context=%2Fazure%2Fcognitive-services%2Fspeech-service%2Fcontext%2Fcontext\"><span data-contrast=\"none\">Code of Conduct<\/span><\/a><span data-contrast=\"none\">, and established technical guardrails to help ensure the active participation of the speaker when creating a synthetic voice. Through these and other controls, we<\/span><span data-contrast=\"auto\"> helped protect against misuse, while maintaining beneficial uses of the technology.&nbsp;<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><span data-contrast=\"auto\">Building upon what we learned from Custom Neural Voice, we will apply similar controls to our facial recognition <\/span><a href=\"http:\/\/aka.ms\/AAh9oye\"><span data-contrast=\"none\">services<\/span><\/a><span data-contrast=\"auto\">. 
After a transition period for existing customers, we are limiting access to these services to managed customers and partners, narrowing the use cases to pre-defined acceptable ones, and leveraging technical controls engineered into the services.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><i><span data-contrast=\"auto\">Fit for Purpose and Azure Face Capabilities<\/span><\/i><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><span data-contrast=\"auto\">Finally, we recognize that for AI systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve. As part of our work to align our Azure Face service to the requirements of the Responsible AI Standard, we are also <\/span><a href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/responsible-ai-investments-and-safeguards-for-facial-recognition\/\"><span data-contrast=\"none\">retiring capabilities<\/span><\/a><span data-contrast=\"auto\"> that infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup.&nbsp;<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><span data-contrast=\"auto\">Taking emotional states as an example, we have decided we will not provide open-ended API access to technology that can scan people\u2019s faces and purport to infer their emotional states based on their facial expressions or movements. Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of \u201cemotions,\u201d the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability. 
We also decided that we need to carefully analyze <\/span><i><span data-contrast=\"auto\">all<\/span><\/i><span data-contrast=\"auto\"> AI systems that purport to infer people\u2019s emotional states, whether the systems use facial analysis or any other AI technology. The Fit for Purpose Goal and Requirements in the Responsible AI Standard now help us to make system-specific validity assessments upfront, and our Sensitive Uses process helps us provide nuanced guidance for high-impact use cases, grounded in science.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><span data-contrast=\"auto\">These real-world challenges informed the development of Microsoft\u2019s Responsible AI Standard and demonstrate its impact on the way we design, develop, and deploy AI systems.&nbsp;<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><span data-contrast=\"none\">For those wanting to dig into our approach further, we have also made available some key resources that support the Responsible AI Standard: our <\/span><a href=\"https:\/\/blogs.microsoft.com\/wp-content\/uploads\/prod\/sites\/5\/2022\/06\/Microsoft-RAI-Impact-Assessment-Template.pdf\" target=\"_blank\" rel=\"noopener\"><span data-contrast=\"none\">Impact Assessment template<\/span><\/a><span data-contrast=\"none\"> and <\/span><a href=\"https:\/\/blogs.microsoft.com\/wp-content\/uploads\/prod\/sites\/5\/2022\/06\/Microsoft-RAI-Impact-Assessment-Guide.pdf\" target=\"_blank\" rel=\"noopener\"><span data-contrast=\"none\">guide<\/span><\/a><span data-contrast=\"none\">, and a collection of Transparency Notes. 
Impact Assessments have proven valuable at Microsoft to ensure teams explore the impact of their AI system \u2013 including its stakeholders, intended benefits, and potential harms \u2013 in depth at the earliest design stages.<\/span><span data-contrast=\"auto\"> Transparency Notes are a new form of documentation in which we disclose to our customers the capabilities and limitations of our core building block technologies, so they have the knowledge necessary to make responsible deployment choices.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<figure id=\"attachment_65268\" aria-describedby=\"caption-attachment-65268\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.sickgaming.net\/blog\/wp-content\/uploads\/2022\/06\/microsofts-framework-for-building-ai-systems-responsibly-15.jpg\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-65268 size-large\" src=\"https:\/\/www.sickgaming.net\/blog\/wp-content\/uploads\/2022\/06\/microsofts-framework-for-building-ai-systems-responsibly-2.jpg\" alt=\"Core principles graphic\" width=\"995\" height=\"560\"><\/a><figcaption id=\"caption-attachment-65268\" class=\"wp-caption-text\">The Responsible AI Standard is grounded in our core principles<\/figcaption><\/figure>\n<p><b><span data-contrast=\"none\">A multidisciplinary, iterative journey<\/span><\/b><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\"><br \/><\/span><span data-contrast=\"none\">Our updated Responsible AI Standard reflects hundreds of inputs across Microsoft technologies, professions, and geographies. 
<\/span><span data-contrast=\"auto\">It is a significant step forward for our practice of responsible AI because it is much more actionable and concrete: it sets out practical approaches for identifying, measuring, and mitigating harms ahead of time, and requires teams to adopt controls to secure beneficial uses and guard against misuse. You can learn more about the development of the Standard in this <a href=\"https:\/\/www.youtube.com\/watch?v=lkIlsgrIMtU\" target=\"_blank\" rel=\"noopener\">video<\/a>.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><span data-contrast=\"none\">While our Standard is an important step in Microsoft\u2019s responsible AI journey, it is just one step. As we make progress with implementation, we expect to encounter challenges that require us to pause, reflect, and adjust. 
Our Standard will remain a living document, evolving to address new research, technologies, laws, and learnings from within and outside the company.&nbsp;<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><span data-contrast=\"none\">There is a rich and active global dialog about how to create principled and actionable norms to ensure organizations develop and deploy AI responsibly. We have benefited from this discussion and will continue to contribute to it. We believe that industry, academia, civil society, and government need to collaborate to advance the state-of-the-art and learn from one another. Together, we need to answer open research questions, close measurement gaps, and design new practices, patterns, resources, and tools.&nbsp;<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n<p><span data-contrast=\"none\">Better, more equitable futures will require new guardrails for AI. Microsoft\u2019s Responsible AI Standard is one contribution toward this goal, and we are engaging in the hard and necessary implementation work across the company. We\u2019re committed to being open, honest, and transparent in our efforts to make meaningful progress.<\/span><span data-ccp-props=\"{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:259}\">&nbsp;<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Today we are sharing publicly Microsoft\u2019s Responsible AI Standard, a framework to guide how we build AI systems. It is an important step in our journey to develop better, more trustworthy AI. 
We are releasing our latest Responsible AI Standard to share what we have learned, invite feedback from others, and contribute to the discussion [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":125900,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[49],"tags":[135,50],"class_list":["post-125899","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-microsoft-news","tag-artificial-intelligence","tag-recent-news"],"_links":{"self":[{"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/posts\/125899","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/comments?post=125899"}],"version-history":[{"count":0,"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/posts\/125899\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/media\/125900"}],"wp:attachment":[{"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/media?parent=125899"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/categories?post=125899"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sickgaming.net\/blog\/wp-json\/wp\/v2\/tags?post=125899"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}