Using the Content Moderator API, we can add monitoring to user-generated content. The API is designed to flag, assess, and filter offensive and unwanted content.
We will quickly go through the key features of the moderation APIs in this section.
Note
A reference to the documentation for all APIs can be found at https://docs.microsoft.com/nb-no/azure/cognitive-services/content-moderator/api-reference.
The Image Moderation API allows you to moderate images for adult and racy content. It can also extract textual content and detect faces in images.
When using the API to evaluate adult and racy content, the API takes an image as input. Based on the image, it returns Boolean values indicating whether the image is classified as adult and whether it is classified as racy. Each classification also comes with a corresponding confidence score between 0 and 1. The Boolean values are set based on a set of default thresholds.
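As a minimal sketch of how such a response might be interpreted, the snippet below applies our own thresholds to the raw confidence scores rather than relying on the API's default Boolean flags. The field names follow the documented Evaluate response schema; the sample values are hypothetical, and in a real call the response would come back from your Azure endpoint.

```python
# Hypothetical sample of an Evaluate response from the Image Moderation API.
# The field names follow the documented response schema; the values here
# are made up for illustration.
sample_response = {
    "AdultClassificationScore": 0.02,
    "IsImageAdultClassified": False,
    "RacyClassificationScore": 0.68,
    "IsImageRacyClassified": True,
}

def classify_image(result, adult_threshold=0.5, racy_threshold=0.5):
    """Apply custom thresholds to the raw confidence scores instead of
    relying on the API's default Boolean flags."""
    return {
        "adult": result["AdultClassificationScore"] >= adult_threshold,
        "racy": result["RacyClassificationScore"] >= racy_threshold,
    }

print(classify_image(sample_response))
```

Adjusting the thresholds like this lets an application be stricter or more lenient than the API's defaults, depending on its audience.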
If the image contains any text, the API can extract it using optical character recognition (OCR).