In a week that saw Microsoft launch its ChatGPT-powered ‘Copilot’ integration for Microsoft 365 and Teams with much fanfare, it has emerged through online news reports that the company dissolved its Ethics & Society team during its recent purge of 10,000 jobs.
Microsoft has maintained that AI responsibility and security are foremost on its agenda, even as it begins to unleash its OpenAI-built capabilities on the world via its products, developer APIs and supercomputing infrastructure.
The Ethics & Society team was responsible for identifying risks in products integrated with AI, working alongside the Office of Responsible AI and the AI advisory committee Aether, which are tasked with setting guidelines for AI training, implementation and use.
Microsoft has issued the following statement:
“Microsoft remains committed to developing and designing AI products and experiences safely and responsibly. As technology has evolved and strengthened, so has our investment, which has sometimes meant adjusting team structures to be more effective.
“For example, over the past six years, we have increased the number of people within our product teams dedicated to ensuring we adhere to our AI principles.
“We have also increased the scale and scope of our Office of Responsible AI, which provides cross-company support for things like reviewing sensitive use cases and advocating for policies that protect customers.”
AI Safeguards vs First to Market
The layoffs of such a key component of AI safeguarding come just as the race to market between Microsoft and Google appears to be hotting up, and as the tech industry and the global economy attempt to stave off an imminent meltdown.
Microsoft has also admitted that the new Copilot integration will make mistakes.
Jaime Teevan, Chief Scientist and Technical Fellow at Microsoft, responded:
“When the system gets things wrong, or has biases, or is misused, we have mitigations in place.”
Microsoft has always maintained that the ethics and security of AI use are a priority, and it has outlined six critical principles for itself, organisations, admins, developers and business consumers.
Microsoft’s Six Key Principles for More Responsible AI
The tech firm’s six principles for better AI use come under two subheadings: ‘Ethical’ and ‘Explainable’. It says, “These principles are essential to creating responsible and trustworthy AI as it moves into more mainstream products and services.”
Established in 2017, Aether (AI, Ethics, and Effects in Engineering and Research) is the committee advising on issues, technologies, processes and best practices related to responsible AI in engineering and research.
Aether has stated:
“We didn’t have a perfect responsible AI governance system on day one. Our governance system continues to evolve to this day. And that’s as it should be. A governance system should adapt to the changing nature of technology and the business.”
Aether reports to the Office of Responsible AI, which in turn reports to the Senior Leadership Team at Microsoft.
The Senior Leadership Team comprises executives and directs Microsoft on responsible AI, setting the company’s AI principles, values and commitment to human rights. The group decides on the most sensitive, cutting-edge and critical AI development and adoption issues. There is a Senior Leadership Team for the UK and, presumably, for every region.
It is safe to assume that the final decisions come from the board of directors and the top executives, including Chairman and CEO Satya Nadella, Vice Chair and President Brad Smith, Executive Vice President and Chief Financial Officer Amy Hood, and lead independent director John W. Thompson.
Ethical
- Accountability – “People should be accountable for AI systems”: This principle insists that those deploying AI should be accountable for how their systems behave. Microsoft has stated: “Organizations should consider establishing an internal review body that provides oversight, insights, and guidance about developing and deploying AI systems.”
- Inclusiveness – “AI systems should empower everyone and engage people”: This principle seeks to prevent anyone being excluded by barriers and to address the full range of human experience. The trust documentation states: “Where possible, speech-to-text, text-to-speech, and visual recognition technology should be used to empower people with hearing, visual, and other impairments.”
- Reliability and Safety – “AI systems should perform reliably and safely”: The documentation argues that for AI systems to be trusted, they must be reliable and secure. Rigorous testing and validation must be deployed to handle exceptional cases, and A/B testing and champion/challenger methods should be incorporated into the evaluation process. “An AI system’s performance can degrade over time, so a robust monitoring and model tracking process needs to be established to reactively and proactively measure the model’s performance and retrain it, as necessary, to modernise it.” A minimal sketch of such a check follows this list.
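To make the monitoring idea concrete, here is a minimal illustrative sketch in Python. The accuracy threshold, dataset and model are invented placeholders, not Microsoft tooling; a production system would wire a check like this into alerting and a retraining pipeline.

```python
# Minimal sketch of a proactive model-performance check (illustrative only).
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.85  # hypothetical retraining threshold; tune per use case

def needs_retraining(model, X_recent, y_recent) -> bool:
    """Score the model on freshly labelled data and flag degradation."""
    return accuracy_score(y_recent, model.predict(X_recent)) < ACCURACY_FLOOR

# Toy demonstration with random data and a trivial baseline model.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 4)), rng.integers(0, 2, 200)
model = DummyClassifier(strategy="most_frequent").fit(X, y)
print(needs_retraining(model, X, y))  # True here: baseline accuracy is ~0.5
```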
Explainable
This area covers the remit of data scientists and auditors of AI work, who should be able to explain to stakeholders how accurate a model is and how it reaches its outputs. At the same time, business leaders need to ensure models are transparent enough to be trusted.
To this end, Microsoft has developed or adopted several open-source tools: Fairlearn, an Azure Machine Learning integration that supports data scientists and developers; LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), which are explainers for black-box models; and the Explainable Boosting Machine for glass-box models.
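As a taste of how the black-box explainers work, here is a minimal SHAP sketch; the diabetes dataset and gradient-boosting model are arbitrary placeholder choices, not part of Microsoft’s own tooling.

```python
# Illustrative only: explaining a black-box tree ensemble with SHAP.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

# Placeholder data and model; any fitted estimator could stand in here.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.Explainer(model)  # auto-selects a fast tree explainer
shap_values = explainer(X)         # per-feature attribution per prediction

shap.plots.beeswarm(shap_values)   # global view of what drives the model
```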
Meanwhile, here are the three sub-areas against which Microsoft measures AI use under what it terms ‘explainability’:
- Fairness – “AI systems should treat all people fairly”: According to the tech firm: “Key checks and balances need to make sure that the system’s decisions don’t discriminate or run a gender, race, sexual orientation, or religion bias toward a group or individual.” To check this, there is a five-stage fairness checklist: envision, prototype, build, launch and evolve. A brief Fairlearn sketch follows this list.
- Transparency – “AI systems should be understandable”: The team must understand the data and algorithms used to train the model, the transformation logic applied to the data, the final model produced, and the associated assets. This emphasis on transparency encourages teams to examine the insights gained while training models.
- Privacy and Security – “AI systems should be secure and respect privacy”: This is an area many will keep returning to, especially where crucial business data and consumer or private data are involved. Microsoft states: “A data holder is obligated to protect the data in an AI system, and privacy and security are an integral part of this system.”
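Returning to the fairness checks mentioned above, here is a minimal Fairlearn sketch. The labels, predictions and group attribute are synthetic placeholders, so the numbers mean nothing beyond illustrating the API.

```python
# Illustrative only: auditing predictions per group with Fairlearn.
import numpy as np
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)     # synthetic ground-truth labels
y_pred = rng.integers(0, 2, 1000)     # a model's predictions (placeholder)
group = rng.choice(["A", "B"], 1000)  # e.g. a demographic attribute

# Accuracy broken down per group surfaces performance disparities.
mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
print(mf.by_group)

# Difference in selection rates between groups; 0 means parity.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```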
For further information on Microsoft’s guidance and principles on AI management, there is some excellent documentation aimed at AI designers, AI admins, officers and AI business consumers here.