As AI becomes more deeply embedded in our everyday lives, it is incumbent upon all of us to be thoughtful and responsible in how we apply it to benefit people and society. A principled approach to responsible AI will be essential for every organization as this technology matures. As technical and product leaders look to adopt responsible AI practices and tools, they face several challenges, including identifying the approach best suited to their organization, products, and market.
Today, at our Azure event, Put Responsible AI into Practice, we are pleased to share new resources and tools to support customers on this journey, including guidelines for product leaders co-developed by Microsoft and Boston Consulting Group (BCG). While these guidelines are separate from Microsoft’s own Responsible AI principles and processes, they are intended to provide guidance for responsible AI development through the product lifecycle. We are also introducing a new Responsible AI dashboard for data scientists and developers and offering a view into how customers like Novartis are putting responsible AI into action.
Introducing Ten Guidelines for Product Leaders to Implement AI Responsibly
Though the vast majority of people believe in the importance of responsible AI, many companies aren’t sure how to cross what is commonly referred to as the “Responsible AI Gap” between principles and tangible actions. In fact, many companies actually overestimate their responsible AI maturity, in part because they lack clarity on how to make their principles operational.
To help address this need, we partnered with BCG to develop “Ten Guidelines for Product Leaders to Implement AI Responsibly”—a new resource to help provide clear, actionable guidance for technical leaders to guide product teams as they assess, design, and validate responsible AI systems within their organizations.
“Ethical AI principles are necessary but not sufficient. Companies need to go further to create tangible changes in how AI products are designed and built,” says Steve Mills, Chief AI Ethics Officer, BCG GAMMA. “The asset we partnered with Microsoft to create will empower product leaders to guide their teams towards responsible development, proactively identifying and mitigating risks and threats.”
The ten guidelines are grouped into three phases:
1. Assess and prepare: Evaluate the product’s benefits, the technology, the potential risks, and the team.
2. Design, build, and document: Review the impacts, unique considerations, and the documentation practice.
3. Validate and support: Select the testing procedures and the support to ensure products work as intended.
With this new resource, we look forward to seeing more companies across industries embrace responsible AI within their own organizations.
Launching a new Responsible AI dashboard for data scientists and developers
Operationalizing ethical principles such as fairness and transparency within AI systems is one of the biggest hurdles to scaling AI, which is why our engineering teams have infused responsible AI capabilities into Azure AI services, like Azure Machine Learning. These capabilities are designed to help companies build their AI systems with fairness, privacy, security, and other responsible AI priorities.
Today, we’re excited to introduce the Responsible AI (RAI) dashboard to help data scientists and developers more easily understand, protect, and control AI data and models. This dashboard includes a collection of responsible AI capabilities such as interpretability, error analysis, counterfactual analysis, and causal inference. Now generally available in open source and running on Azure Machine Learning, the RAI dashboard brings together the most widely used responsible AI tools into a single workflow and visual canvas that makes it easy to identify, diagnose, and mitigate errors.
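To make two of these capabilities concrete, here is a toy sketch in plain Python of the ideas behind per-cohort error analysis and counterfactual search. This is not the dashboard's actual API; the loan-approval model, feature names, and data are invented purely for illustration.

```python
# Toy illustration of two Responsible AI dashboard ideas:
# per-cohort error analysis and counterfactual search.
# The model, features, and data are invented for illustration only.

def model(income, credit_years):
    """A stand-in loan-approval classifier: approve if score >= 100."""
    return 1 if income * 0.5 + credit_years * 10 >= 100 else 0

# Rows of (income, credit_years, true_label, cohort).
data = [
    (200, 1, 1, "young"),
    (160, 2, 1, "young"),
    (90, 1, 1, "young"),    # the model errs here: it predicts 0
    (120, 8, 1, "senior"),
    (60, 3, 0, "young"),
    (100, 6, 1, "senior"),
]

def error_rates(rows):
    """Error analysis: error rate per cohort shows where the model fails."""
    stats = {}
    for income, years, label, cohort in rows:
        err = model(income, years) != label
        n, e = stats.get(cohort, (0, 0))
        stats[cohort] = (n + 1, e + err)
    return {cohort: e / n for cohort, (n, e) in stats.items()}

def income_counterfactual(income, years, step=5, limit=1000):
    """Counterfactual search: smallest income increase that flips a rejection."""
    delta = 0
    while model(income + delta, years) == 0 and delta < limit:
        delta += step
    return delta

print(error_rates(data))            # errors concentrate in the "young" cohort
print(income_counterfactual(90, 1)) # extra income needed to flip the toy model
```

The dashboard automates analyses like these at scale: error analysis surfaces cohorts where a model underperforms, and counterfactuals show the minimal feature changes that would alter a prediction, which helps teams both debug models and explain outcomes to affected users.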