Hi Vivek,
The behavior you are experiencing with Copilot Studio, where the agent provides inconsistent answers or hallucinates information, is a known challenge when working with large or complex documents in document-based AI agents. Copilot Studio agents combine document embeddings with generative AI to answer questions. If the agent cannot find an explicit answer in the uploaded content, it may generate a response that seems plausible but is not directly supported by the document, which leads to hallucinations and inconsistent results across users and repeated queries.
To reduce these issues, consider the following steps:
1. **Check Document Quality and Structure:** Ensure the uploaded HR policy document is well structured, clean, and clearly segmented. Long, unstructured documents make retrieval less consistent.
2. **Use Proper Chunking:** Split large documents into smaller chunks at ingestion. Copilot Studio agents perform better when each chunk contains self-contained context (see the chunking sketch after this list).
3. **Configure Retrieval Settings:** Use stricter retrieval or context limits so the agent only considers relevant sections of the document when answering questions (a threshold-filtering sketch follows below).
4. **Test with Ground-Truth Questions:** Validate the agent’s responses against specific questions with known answers, and adjust chunking or indexing if hallucinations persist (a small evaluation harness is sketched below).
5. **Review Agent Logs and Embeddings:** Check the agent’s monitoring and analytics to see which sections are being used for answers. Misaligned embeddings often lead to hallucinated output.
6. **Consider Fine-Tuning / Prompt Guidance:** If the bot needs very strict adherence to the uploaded document, provide explicit prompt instructions like: “Answer only using the content from the uploaded HR policy. If the answer is not present, respond: ‘The policy does not provide this information.’” (a grounded-prompt sketch follows below).
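For step 2: Copilot Studio chunks uploaded files internally and does not expose the chunk size, but you can get a similar effect by pre-segmenting the source document before upload. A minimal sketch of heading-based splitting, assuming the policy is available as plain text with markdown-style headings (the `max_chars` and `overlap` values are illustrative, not Copilot Studio settings):

```python
import re

def chunk_policy(text: str, max_chars: int = 1500, overlap: int = 200) -> list[str]:
    """Split a policy document into self-contained chunks.

    Splits on markdown-style headings first, so each chunk starts with
    its section title, then breaks oversized sections into overlapping
    windows so no piece loses its surrounding context entirely.
    """
    sections = re.split(r"(?=^#{1,3} )", text, flags=re.MULTILINE)
    chunks: list[str] = []
    for section in sections:
        section = section.strip()
        if not section:
            continue
        if len(section) <= max_chars:
            chunks.append(section)
            continue
        # Oversized section: slide a window with some overlap.
        start = 0
        while start < len(section):
            chunks.append(section[start:start + max_chars])
            start += max_chars - overlap
    return chunks
```

Splitting on headings keeps each chunk anchored to a single policy topic, which is what makes the retrieved context self-contained.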
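For step 3: in Copilot Studio these limits are configured through the agent’s generative AI settings rather than in code, but the underlying idea is a relevance threshold. A generic sketch, assuming you already have embedding vectors for the question and the chunks (`min_score` is an illustrative cutoff, not a Copilot Studio parameter):

```python
import numpy as np

def top_relevant_chunks(question_vec: np.ndarray,
                        chunk_vecs: np.ndarray,
                        chunks: list[str],
                        min_score: float = 0.75,
                        k: int = 3) -> list[tuple[str, float]]:
    """Return at most k chunks whose cosine similarity to the question
    clears min_score. An empty result means 'refuse rather than guess'."""
    q = question_vec / np.linalg.norm(question_vec)
    m = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    scores = m @ q
    best = np.argsort(scores)[::-1][:k]
    return [(chunks[i], float(scores[i])) for i in best if scores[i] >= min_score]
```

The key design choice is returning nothing when no chunk clears the threshold: feeding weak matches to the model is precisely what produces plausible-sounding hallucinations.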
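For step 4: a small evaluation harness makes “inconsistent answers” measurable. The sketch below assumes a hypothetical `ask_agent` callable standing in for however you query the bot (the test pane, or a published channel); the questions and answers are placeholders, and substring matching is a deliberately crude check:

```python
# Placeholder ground-truth pairs; replace with questions whose answers
# you have verified directly against the HR policy document.
GROUND_TRUTH = {
    "How many days of annual leave do full-time employees get?": "25 days",
    "What is the notice period for resignation?": "30 days",
}

def evaluate(ask_agent) -> float:
    """Run every ground-truth question and report the pass rate."""
    hits = 0
    for question, expected in GROUND_TRUTH.items():
        answer = ask_agent(question)  # hypothetical: returns the bot's reply text
        ok = expected.lower() in answer.lower()
        hits += ok
        print(f"{'PASS' if ok else 'FAIL'}: {question!r} -> {answer!r}")
    return hits / len(GROUND_TRUTH)
```

Run this after every chunking or indexing change so a dropping pass rate immediately flags the adjustment that made things worse; repeating each question several times also exposes the run-to-run inconsistency you described.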
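For step 6: in Copilot Studio itself the instruction simply goes into the agent’s instructions field, so no code is required there. If you are composing prompts yourself, the sketch below shows the same grounding pattern; the prompt layout is an assumption for illustration, not Copilot Studio’s internal format:

```python
GROUNDING_INSTRUCTIONS = (
    "Answer only using the content from the uploaded HR policy. "
    "If the answer is not present, respond: "
    "'The policy does not provide this information.'"
)

def build_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Assemble a grounded prompt: instruction, evidence, then question,
    so the model is steered toward the retrieved text rather than its
    prior knowledge."""
    context = "\n\n".join(retrieved_chunks)
    return (
        f"{GROUNDING_INSTRUCTIONS}\n\n"
        f"Policy excerpts:\n{context}\n\n"
        f"Question: {question}"
    )
```

Giving the model an explicit fallback sentence matters: without a sanctioned way to say “not in the document,” it tends to fill the gap with a plausible guess.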
Currently, hallucinations and inconsistent responses are expected behavior for generative AI agents when the answer is ambiguous or not explicitly documented. Following the above best practices can significantly improve reliability and consistency.
References:
- https://learn.microsoft.com/en-gb/answers/questions/5620401/microsoft-copilot-agent-struggle-with-hallucinatio
- https://learn.microsoft.com/en-us/microsoft-copilot-studio/guidance/
- https://learn.microsoft.com/en-us/microsoft-copilot-studio/fundamentals-what-is-copilot-studio
- https://learn.microsoft.com/en-us/microsoft-copilot-studio/nlu-boost-node
Thank you.
Karan Shewale.
If this response resolves your issue, please Accept the answer and, if helpful, click the “Upvote” button. Your feedback helps us improve and assist others more effectively.