Microsoft has launched a new AI-powered censorship tool to detect inappropriate content in text and images. Dubbed "Azure AI Content Safety," the tool has been trained to understand several languages, including English, French, Spanish, Italian, Portuguese, Chinese, and Japanese.
The tool assigns flagged content a severity score from one to 100, helping moderators prioritize which content to address.
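A severity score like this is typically used as a triage signal on the platform side. The sketch below is a hypothetical illustration of that idea, not Microsoft's actual API: the function name, thresholds, and data shape are all invented, assuming only that each flagged item carries a 1–100 severity score as described above.

```python
# Hypothetical triage step for flagged content, assuming a 1-100
# severity score per item as described in the article. All names and
# thresholds here are invented for illustration; this is not
# Microsoft's API.

REVIEW_THRESHOLD = 40   # route to human moderators at or above this score
REMOVE_THRESHOLD = 80   # auto-remove at or above this score

def triage(flagged_items):
    """Sort flagged items into actions by their severity score."""
    actions = {"allow": [], "review": [], "remove": []}
    for item in flagged_items:
        score = item["severity"]
        if score >= REMOVE_THRESHOLD:
            actions["remove"].append(item)
        elif score >= REVIEW_THRESHOLD:
            actions["review"].append(item)
        else:
            actions["allow"].append(item)
    return actions

flagged = [
    {"id": "post-1", "severity": 12},
    {"id": "post-2", "severity": 55},
    {"id": "post-3", "severity": 91},
]
result = triage(flagged)
```

In a real integration the thresholds would be tuned per platform and per content category rather than hard-coded.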
During a demonstration at Microsoft's Build conference, Microsoft's head of responsible AI, Sarah Bird, explained that Azure AI Content Safety is a commercialized version of the system powering the Bing chatbot and GitHub's AI-powered code generator Copilot. Pricing begins at $0.75 per 1,000 text records and $1.50 per 1,000 images.
The tool is aimed at developers, who can integrate it into their own platforms and services.
"We're now launching it as a product that third-party customers can use," Bird said in a statement.
In a statement to TechCrunch, a spokesperson for Microsoft said: "Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren't effectively taking into account context or able to work in multiple languages.
"New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed."
Microsoft claims that the new tool has been trained to understand context.