ContentSafetyEvaluator Class
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
Initialize a content safety evaluator configured to evaluate content safety metrics for the question-answering (QA) scenario.
Constructor
ContentSafetyEvaluator(credential, azure_ai_project, *, violence_threshold: int = 3, sexual_threshold: int = 3, self_harm_threshold: int = 3, hate_unfairness_threshold: int = 3, **kwargs)
Parameters
| Name | Description |
|---|---|
| credential (Required) | The credential for connecting to the Azure AI project. |
| azure_ai_project (Required) | The Azure AI project, which can be either a string containing the project endpoint or an instance of AzureAIProject; it contains the subscription ID, resource group name, and project name. |
| violence_threshold | The threshold for violence evaluation. Default is 3. |
| sexual_threshold | The threshold for sexual content evaluation. Default is 3. |
| self_harm_threshold | The threshold for self-harm evaluation. Default is 3. |
| hate_unfairness_threshold | The threshold for hate/unfairness evaluation. Default is 3. |
| evaluate_query | Whether to also evaluate the query in addition to the response. Default is False. Passed as a keyword argument. |
| kwargs | Additional arguments to pass to the evaluator. |
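For instance, the optional evaluate_query flag documented above can be passed through as a keyword argument. The following is a minimal sketch; the project values are placeholders read from environment variables.

```python
import os
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import ContentSafetyEvaluator

# Sketch: score the query as well as the response for content safety.
azure_ai_project = {
    "subscription_id": os.environ.get("AZURE_SUBSCRIPTION_ID"),
    "resource_group_name": os.environ.get("AZURE_RESOURCE_GROUP_NAME"),
    "project_name": os.environ.get("AZURE_PROJECT_NAME"),
}
safety_eval = ContentSafetyEvaluator(
    azure_ai_project=azure_ai_project,
    credential=DefaultAzureCredential(),
    evaluate_query=True,  # also evaluate the query, not only the response
)
```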
Keyword-Only Parameters
| Name | Description |
|---|---|
| violence_threshold | Default value: 3 |
| sexual_threshold | Default value: 3 |
| self_harm_threshold | Default value: 3 |
| hate_unfairness_threshold | Default value: 3 |
Examples
Initialize a ContentSafetyEvaluator with per-category thresholds and call it with a query and response.
```python
import os
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import ContentSafetyEvaluator

azure_ai_project = {
    "subscription_id": os.environ.get("AZURE_SUBSCRIPTION_ID"),
    "resource_group_name": os.environ.get("AZURE_RESOURCE_GROUP_NAME"),
    "project_name": os.environ.get("AZURE_PROJECT_NAME"),
}
credential = DefaultAzureCredential()

# Initialize with the per-category thresholds documented in the constructor signature.
chat_eval = ContentSafetyEvaluator(
    azure_ai_project=azure_ai_project,
    credential=credential,
    violence_threshold=3,
    sexual_threshold=3,
    self_harm_threshold=3,
    hate_unfairness_threshold=3,
)
chat_eval(
    query="What is the capital of France?",
    response="Paris",
)
```
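Per the azure_ai_project parameter description, the project can also be supplied as a project endpoint string rather than a dictionary. The following is a minimal sketch; AZURE_AI_PROJECT_ENDPOINT is a hypothetical environment variable name used only for illustration.

```python
import os
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import ContentSafetyEvaluator

# Sketch: pass the project endpoint as a string instead of a project dictionary.
# The environment variable name below is a placeholder, not a documented setting.
endpoint_eval = ContentSafetyEvaluator(
    azure_ai_project=os.environ.get("AZURE_AI_PROJECT_ENDPOINT"),
    credential=DefaultAzureCredential(),
)
endpoint_eval(
    query="What is the capital of France?",
    response="Paris",
)
```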
Attributes
id
Evaluator identifier; experimental and to be used only with evaluation in the cloud.
id = 'azureai://built-in/evaluators/content_safety'
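A minimal sketch of reading the identifier, which this reference lists as a class-level attribute:

```python
from azure.ai.evaluation import ContentSafetyEvaluator

# Built-in evaluator identifier, used when referencing this evaluator
# in cloud evaluation (experimental).
print(ContentSafetyEvaluator.id)  # azureai://built-in/evaluators/content_safety
```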