If you use CasePilot or DocPilot regularly, you may have noticed that the same question asked at different times yields a similar but not identical answer. This is not a mistake; it is inherent to how modern AI systems work.
AI works with probabilities, not fixed rules
Traditional software follows fixed rules: input A always produces output B. AI language models, by contrast, generate answers from statistical patterns; each answer is the result of a probabilistic process, not a deterministic lookup.
In practice, this means CasePilot can answer the same question with different wording, structure, or emphasis and still be correct in substance.
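To make the contrast concrete, here is a minimal Python sketch. It is a toy illustration only, not CasePilot's actual implementation: the function names, example phrasings, and probabilities are all invented for this example.

```python
import random

def deterministic(x):
    # Traditional software: the same input always produces the same output.
    return x * 2

def probabilistic_answer():
    # Toy model of probabilistic generation (NOT CasePilot's real model):
    # several phrasings of the same correct answer, each with a probability.
    phrasings = {
        "The claim covers water damage.": 0.5,
        "Water damage is covered under this claim.": 0.3,
        "This claim includes coverage for water damage.": 0.2,
    }
    answers = list(phrasings.keys())
    weights = list(phrasings.values())
    # Sampling means the wording can differ from run to run,
    # while the underlying content stays the same.
    return random.choices(answers, weights=weights)[0]

print(deterministic(21))        # always 42
print(probabilistic_answer())   # wording may vary between runs
```

Running the script twice can print different sentences from probabilistic_answer(), even though each one is a correct statement of the same fact.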
Variation doesn’t mean unreliability
The fact that answers vary does not make them unreliable. Answers always stay within the scope of the case documents, much as an experienced claims adjuster would never phrase the same answer in exactly the same way twice, yet would always base it on the same facts.
Important: CasePilot does not invent information. Every statement is anchored in the case documents, and the source references under each answer show exactly which documents it is based on, so you can verify each statement directly.
What you can do
If you get an answer that doesn’t quite fit:
Ask the question again, phrased a bit differently
Use the Rephrase answer function (slider icon under the answer) to adjust format or length
Click on the sources to check the answer directly in the document
Give feedback using the thumbs-down button — this helps CasePilot learn to answer such questions better
💡 amaise automatically checks every answer for plausibility. If a statement cannot be clearly traced back to the cited sources, its source chip is shown in red: a signal to read that answer especially critically.
