OllamaPromptExecutionSettings Class
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Prompt execution settings for the Ollama connector.
C#
[System.Text.Json.Serialization.JsonNumberHandling(System.Text.Json.Serialization.JsonNumberHandling.AllowReadingFromString)]
public sealed class OllamaPromptExecutionSettings : Microsoft.SemanticKernel.PromptExecutionSettings

F#
[<System.Text.Json.Serialization.JsonNumberHandling(System.Text.Json.Serialization.JsonNumberHandling.AllowReadingFromString)>]
type OllamaPromptExecutionSettings = class
    inherit PromptExecutionSettings

VB
Public NotInheritable Class OllamaPromptExecutionSettings
Inherits PromptExecutionSettings
- Inheritance: Object → PromptExecutionSettings → OllamaPromptExecutionSettings
- Attributes: JsonNumberHandlingAttribute
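Because of the JsonNumberHandling.AllowReadingFromString attribute shown above, numeric settings deserialize even when a JSON payload encodes them as strings, which is common when settings arrive from a prompt template. A minimal sketch; the snake_case JSON property names (temperature, top_k, num_predict) and the Microsoft.SemanticKernel.Connectors.Ollama namespace are assumptions, not confirmed by this page:

```csharp
using System;
using System.Text.Json;
using Microsoft.SemanticKernel.Connectors.Ollama; // assumed namespace for this class

// Numbers encoded as JSON strings still parse, thanks to
// JsonNumberHandling.AllowReadingFromString on the class.
// NOTE: the snake_case property names below are assumptions.
var json = """{ "temperature": "0.7", "top_k": "50", "num_predict": "256" }""";

var settings = JsonSerializer.Deserialize<OllamaPromptExecutionSettings>(json);
Console.WriteLine(settings?.Temperature); // 0.7
```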
Constructors
| Constructor | Description |
| --- | --- |
| OllamaPromptExecutionSettings() | Initializes a new instance of the OllamaPromptExecutionSettings class. |
Properties
| Property | Description |
| --- | --- |
| ExtensionData | Extra properties that may be included in the serialized execution settings. (Inherited from PromptExecutionSettings) |
| FunctionChoiceBehavior | Gets or sets the behavior defining how functions are chosen by the LLM and how they are invoked by AI connectors. (Inherited from PromptExecutionSettings) |
| IsFrozen | Gets a value that indicates whether the PromptExecutionSettings are currently modifiable. (Inherited from PromptExecutionSettings) |
| ModelId | Model identifier. This identifies the AI model these settings are configured for, e.g. gpt-4, gpt-3.5-turbo. (Inherited from PromptExecutionSettings) |
| NumPredict | Maximum number of output tokens. (Default: -1, infinite generation) |
| ServiceId | Service identifier. This identifies the service these settings are configured for, e.g. azure_openai_eastus, openai, ollama, huggingface. (Inherited from PromptExecutionSettings) |
| Stop | Sets the stop sequences to use. When this pattern is encountered, the LLM stops generating text and returns. Multiple stop patterns may be set by specifying multiple stop parameters in a Modelfile. |
| Temperature | The temperature of the model. Increasing the temperature makes the model answer more creatively. (Default: 0.8) |
| TopK | Reduces the probability of generating nonsense. A higher value (e.g. 100) gives more diverse answers, while a lower value (e.g. 10) is more conservative. (Default: 40) |
| TopP | Works together with top-k. A higher value (e.g. 0.95) leads to more diverse text, while a lower value (e.g. 0.5) generates more focused and conservative text. (Default: 0.9) |
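To illustrate how the Ollama-specific properties above combine, here is a hedged sketch of tuning generation and passing the settings to a prompt invocation. The AddOllamaChatCompletion builder call, model name, and endpoint are assumptions about the surrounding connector setup, not part of this page:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.Ollama;

var settings = new OllamaPromptExecutionSettings
{
    Temperature = 0.2f,                // below the 0.8 default for more deterministic output
    TopK = 10,                         // conservative sampling pool (default 40)
    TopP = 0.5f,                       // focused nucleus sampling; works together with TopK
    NumPredict = 256,                  // cap output tokens (default -1 = infinite generation)
    Stop = new List<string> { "\n\n" } // stop generating at the first blank line
};

// Hypothetical kernel wired to a local Ollama endpoint; details vary by setup.
var kernel = Kernel.CreateBuilder()
    .AddOllamaChatCompletion("llama3", new Uri("http://localhost:11434"))
    .Build();

var result = await kernel.InvokePromptAsync(
    "Summarize the release notes in one paragraph.",
    new KernelArguments(settings));
```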
Methods
| Method | Description |
| --- | --- |
| Clone() | Creates a new PromptExecutionSettings object that is a copy of the current instance. (Inherited from PromptExecutionSettings) |
| Freeze() | Makes the current PromptExecutionSettings unmodifiable and sets its IsFrozen property to true. (Inherited from PromptExecutionSettings) |
| FromExecutionSettings(PromptExecutionSettings) | Gets the Ollama specialization of the given execution settings. |
| ThrowIfFrozen() | Throws an InvalidOperationException if the PromptExecutionSettings are frozen. (Inherited from PromptExecutionSettings) |
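As a sketch of how these methods compose: FromExecutionSettings converts generic settings into the Ollama specialization, and Freeze then locks the instance, per the IsFrozen and ThrowIfFrozen descriptions above. The cast on Clone() assumes the override returns the derived type; treat the whole snippet as illustrative rather than definitive:

```csharp
using System;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.Ollama;

// Generic settings, e.g. parsed from a prompt template.
PromptExecutionSettings baseSettings = new() { ModelId = "llama3" };

// Get the Ollama specialization of those settings.
var ollamaSettings = OllamaPromptExecutionSettings.FromExecutionSettings(baseSettings);
ollamaSettings.Temperature = 0.8f;

// Freeze makes the instance read-only; further setter calls go through
// ThrowIfFrozen and raise InvalidOperationException.
ollamaSettings.Freeze();
Console.WriteLine(ollamaSettings.IsFrozen); // True

// Clone yields an unfrozen copy that can be modified again.
var editable = (OllamaPromptExecutionSettings)ollamaSettings.Clone();
editable.Temperature = 0.5f; // OK: the clone is not frozen
```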