CoherenceEvaluator Class

Evaluates the coherence score for a given query and response, or for a multi-turn conversation, and includes the reasoning behind the score.

The coherence measure assesses the ability of the language model to generate text that reads naturally, flows smoothly, and resembles human-like language in its responses. Use it when assessing the readability and user-friendliness of a model's generated responses in real-world applications.

Note

To align with our support of a diverse set of models, an output key without the gpt_ prefix has been added. To maintain backwards compatibility, the old key with the gpt_ prefix is still present in the output; however, it is recommended to use the new key moving forward, as the old key will be deprecated in the future.
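For example, a result dictionary contains both the new and the legacy key. The following is an illustrative sketch: score values are made up, and coherence_reason is an assumed key name for the reasoning output; the exact set of keys may vary by SDK version.

   # Illustrative evaluator output; values are made up.
   {
       "coherence": 4.0,            # new key (recommended)
       "gpt_coherence": 4.0,        # legacy key (to be deprecated)
       "coherence_reason": "...",   # assumed key carrying the reasoning
       "coherence_result": "pass",  # pass/fail against the threshold
       "coherence_threshold": 3,
   }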

Constructor

CoherenceEvaluator(model_config, *, threshold=3, credential=None, **kwargs)

Parameters

model_config (required)

Configuration for the Azure OpenAI model.

threshold (int, optional)

The threshold for the coherence evaluator. Default is 3.

credential (optional)

The credential for authenticating to Azure AI service. Default is None.
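
When no API key is supplied, the credential parameter can be used for keyless authentication. The following is a minimal sketch, assuming Entra ID (azure-identity) token authentication is enabled for the target resource:

   import os
   from azure.ai.evaluation import CoherenceEvaluator
   from azure.identity import DefaultAzureCredential

   # Keyless authentication: no api_key in the model configuration; the
   # signed-in identity must have access to the Azure OpenAI resource.
   model_config = {
       "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
       "azure_deployment": os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
   }
   coherence_evaluator = CoherenceEvaluator(
       model_config=model_config, credential=DefaultAzureCredential()
   )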

Keyword-Only Parameters

is_reasoning_model

If True, the evaluator will use reasoning model configuration (o1/o3 models). This will adjust parameters like max_completion_tokens and remove unsupported parameters. Default is False.

threshold

Default value: 3

credential

Default value: None
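
When the evaluation model is an o-series reasoning model, set is_reasoning_model when constructing the evaluator. A minimal sketch; the deployment name is a hypothetical example:

   import os
   from azure.ai.evaluation import CoherenceEvaluator

   model_config = {
       "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
       "api_key": os.environ.get("AZURE_OPENAI_KEY"),
       "azure_deployment": "o3-mini",  # hypothetical reasoning-model deployment
   }
   # is_reasoning_model=True switches to reasoning-model parameters
   # (e.g., max_completion_tokens) and drops unsupported settings.
   coherence_evaluator = CoherenceEvaluator(
       model_config=model_config, is_reasoning_model=True
   )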

Examples

Initialize with threshold and call a CoherenceEvaluator with a query and response.


   import os
   from azure.ai.evaluation import CoherenceEvaluator

   model_config = {
       "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
       "api_key": os.environ.get("AZURE_OPENAI_KEY"),
       "azure_deployment": os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
   }
   coherence_evaluator = CoherenceEvaluator(model_config=model_config, threshold=2)
   coherence_result = coherence_evaluator(
       query="What is the capital of France?", response="Paris is the capital of France."
   )
   print(
       f"Coherence Score: {coherence_result['coherence']}, Result: {coherence_result['coherence_result']}, Threshold: {coherence_result['coherence_threshold']}"
   )
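
The evaluator also accepts a multi-turn conversation instead of a single query and response. A minimal sketch, assuming the conversation is a dictionary with a messages list of role/content turns; the exact schema may vary by SDK version:

   conversation = {
       "messages": [
           {"role": "user", "content": "What is the capital of France?"},
           {"role": "assistant", "content": "Paris is the capital of France."},
           {"role": "user", "content": "What language is spoken there?"},
           {"role": "assistant", "content": "French is the official language of France."},
       ]
   }
   # The evaluator scores the conversation's turns and returns an aggregated result.
   conversation_result = coherence_evaluator(conversation=conversation)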

Attributes

id

Evaluator identifier. Experimental; intended for use only with evaluation in the cloud.

id = 'azureai://built-in/evaluators/coherence'