Microsoft Introduces Azure AI Content Safety
Microsoft’s Azure AI Content Safety is an AI-powered content moderation platform designed to create safer online experiences. The new service, available through the Azure AI platform, relies on state-of-the-art AI models to detect toxic content across text and images.
Sarah Bird, Microsoft’s responsible AI lead, introduced Azure AI Content Safety at the tech giant’s annual Build conference. Built on the technology that powers the safety systems behind Microsoft’s Copilot, Bing chatbot, and GitHub’s code-generating tool, Azure AI Content Safety is offered as a standalone product. Prices start at $0.75 per 1,000 text records and $1.50 per 1,000 images.
When the tool detects something inappropriate, it flags it, classifies it (as sexist, racist, hateful, and so on), assigns it a severity score, and notifies moderators so they can take action. Beyond detecting harmful content in AI systems, Azure AI Content Safety can help moderate inappropriate content in online communities, forums, and gaming platforms.
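To make the flag-classify-score flow concrete, here is a minimal sketch of how a moderation backend might call the service over its REST interface and surface only the categories that cross a severity threshold. The endpoint path, API version string, response field names (`categoriesAnalysis`, `category`, `severity`), environment variable names, and the threshold of 2 are all assumptions for illustration and may differ across service versions; consult the Azure AI Content Safety documentation for the exact contract.

```python
import os
import requests

# Hypothetical configuration: an actual deployment would use the endpoint
# and key of its own Azure AI Content Safety resource.
ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
API_KEY = os.environ["CONTENT_SAFETY_KEY"]


def analyze_text(text: str) -> list[dict]:
    """Send one text record to the text-analysis endpoint and return the
    per-category severity results (field names assumed, see lead-in)."""
    response = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-10-01"},  # version string is an assumption
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response shape:
    # {"categoriesAnalysis": [{"category": "Hate", "severity": 0}, ...]}
    return response.json().get("categoriesAnalysis", [])


def flag_for_moderation(text: str, threshold: int = 2) -> list[dict]:
    """Keep only the categories whose severity meets the threshold,
    i.e. the items a human moderator would be notified about."""
    return [r for r in analyze_text(text) if r.get("severity", 0) >= threshold]


if __name__ == "__main__":
    for item in flag_for_moderation("some user-generated comment"):
        print(f"{item['category']}: severity {item['severity']}")
```

In a real integration, the threshold would likely be tuned per category and per community, since the severity a gaming forum tolerates may differ from what a children's platform allows.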
The language models that power Azure AI Content Safety can currently understand text in English, German, French, Spanish, Japanese, Portuguese, Chinese, and Italian.
Azure AI Content Safety is not the first service of its kind. Google offers Perspective, a similar product that uses machine learning to curb toxicity online.
Since ChatGPT’s integration into Microsoft Bing in early February, the AI has had its fair share of tantrums, and users have reported numerous cases in which Bing generated various forms of inappropriate content. Microsoft has gathered extensive feedback and, according to the company, used it to improve Azure AI Content Safety’s ability to keep toxic content in check.
Still, there is a fair amount of evidence that warrants skepticism about AI-powered toxicity detectors. A Penn State study, for instance, found that AI toxicity detection models are often biased against specific groups: content related to people with disabilities was frequently rated as negative or inappropriate by these models. Another study detected racial bias in Google’s Perspective tool.
These biases often stem from the people who create the labels added to the datasets used to train the AI systems. To address this challenge, Microsoft has teamed up with linguistics and fairness experts and added filters designed to better account for context.