# 05 — AI risks you must handle
Part 5 of 7 · ← What you can't do · Index · Next → Safe patterns
20 risks, grouped. One line + one insurance example each. Mitigations in article 6.
## Truth risks
- Hallucination — AI invents facts. "Your policy covers cyber."
- Outdated data — old rates, discontinued products.
- Unchecked numbers — "€500 deductible" with no source.
- Fabricated citations — invented Code des Assurances articles.
- Silent partial answer — backend failed, AI filled the gap.
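Some of these checks can be partly automated before an answer ships. A minimal sketch, in Python, of one such check (the function name and regex are illustrative, not from any particular library): flag money amounts in a draft answer that appear in none of the retrieved sources, which catches the "unchecked numbers" case.

```python
import re

def unsupported_amounts(answer: str, sources: list[str]) -> list[str]:
    """Return euro amounts in the answer that appear in none of the sources."""
    amounts = re.findall(r"€\s?\d+(?:[.,]\d+)*", answer)
    joined = " ".join(sources)
    return [a for a in amounts if a not in joined]

# "€500" has no backing source here, so it gets flagged for review.
flagged = unsupported_amounts("Your deductible is €500.", ["Policy text without figures."])
```

A flagged amount does not prove the answer is wrong; it only means no retrieved source backs the number, which is exactly when a human or a stricter pipeline should look.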
## Behaviour risks
- Advice leaking — "You should take…" slips out of a Q&A.
- Overconfident framing — "typically covers" → "covers".
- Overeager AI — volunteers unsolicited claim-filing advice.
- Estimate vs quote confusion — user thinks it's binding.
- Banned phrases — "guaranteed returns", "100%".
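The last three risks are surface-level enough that a phrase filter catches the worst cases. A toy sketch; the lists here are illustrative, real ones would come from compliance review.

```python
# Hypothetical phrase lists; a real deployment sources these from compliance.
BANNED = ["guaranteed returns", "100%"]
ADVICE_MARKERS = ["you should take", "i recommend"]

def flag_phrases(answer: str) -> list[str]:
    """Return every banned or advice-leaking phrase found in the answer."""
    text = answer.lower()
    return [p for p in BANNED + ADVICE_MARKERS if p in text]
```

Substring matching is crude (it misses paraphrases like "typically covers" hardening into "covers"), so this is a floor, not a mitigation; those come in article 6.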
## User-classification risks
- Misclassifying — "freelance dev" read as "IT company, 20 employees".
- Jurisdiction mixing — German rules applied to a French user.
- Translation error — FR *franchise* (deductible) ≠ EN "franchise".
- Bias — different answer based on a name.
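The *franchise* false friend is a lookup problem, not a translation problem: a vetted per-jurisdiction glossary beats literal translation. A toy sketch with illustrative names and entries:

```python
# Hypothetical vetted glossary keyed by (language, term): the same surface
# form can mean different things (FR "franchise" = deductible, not a
# licensed business).
GLOSSARY = {
    ("fr", "franchise"): "deductible",
    ("en", "franchise"): "franchise (licence to operate)",
}

def translate_term(lang: str, term: str) -> str:
    """Return the vetted rendering, or fail loudly instead of guessing."""
    try:
        return GLOSSARY[(lang, term.lower())]
    except KeyError:
        raise ValueError(f"no vetted translation for {term!r} in {lang!r}")
```

The design choice worth copying is the failure mode: a missing entry raises rather than falling back to a literal translation, for the same reason French rules should never silently apply to a German user.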
## System risks
- Prompt injection — crafted user input overrides the system's instructions.
- Data leakage — PII echoed back or written to logs.
- Sensitive data in logs — health info in dashboards.
- No audit trail — dispute months later, no record.
- Model drift — provider updates, outputs change silently.
- Load failure — campaign day, everything breaks.
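Three of these (data leakage, sensitive logs, no audit trail) can share one choke point: every exchange passes through a single logging helper that redacts before writing and records the model version so silent drift is traceable. A minimal sketch; the helper name and the e-mail-only redaction are illustrative, a real redactor covers far more PII.

```python
import json
import re
import time

# Only catches e-mail addresses; real PII redaction needs much broader rules.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def audit_record(user_msg: str, answer: str, model: str) -> str:
    """One JSON line per exchange: redact before anything hits the log,
    and keep the model version so later output changes can be traced."""
    def redact(s: str) -> str:
        return EMAIL.sub("[email]", s)
    return json.dumps({
        "ts": time.time(),
        "model": model,
        "user": redact(user_msg),
        "answer": redact(answer),
    })
```

Appending these lines to an append-only store gives a dispute months later something to stand on, and diffing outputs across `model` values makes provider-side drift visible instead of silent.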
Five patterns cover most of this → next page.