phidea
Plain series · page 5 / 7

# 05 — AI risks you must handle


20 risks, grouped. One line + one insurance example each. Mitigations in article 6.

Truth risks

  1. Hallucination — AI invents facts. "Your policy covers cyber."
  2. Outdated data — old rates, discontinued products.
  3. Unchecked numbers — "€500 deductible" with no source.
  4. Fabricated citations — invented Code des Assurances articles.
  5. Silent partial answer — backend failed, AI filled the gap.
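The truth risks share one detection shape: never let a number reach the user without a supporting source passage. A minimal sketch (the helper and regex are hypothetical; a real matcher would normalise currencies and formats rather than compare raw strings):

```python
import re

def unsourced_numbers(answer: str, sources: list[str]) -> list[str]:
    """Return numeric claims in the answer that appear in no source passage."""
    joined = " ".join(sources)
    # Match amounts like "€500", "1.5", "20%" (simplified on purpose).
    found = re.findall(r"€?\d+(?:[.,]\d+)*%?", answer)
    return [n for n in found if n not in joined]

# A €500 deductible with no supporting passage gets flagged.
flags = unsourced_numbers(
    "Your deductible is €500.",
    ["The policy document mentions a €300 deductible."],
)
```

An empty result means every number is grounded; anything else blocks the draft.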

Behaviour risks

  1. Advice leaking — "You should take…" slips out of a Q&A.
  2. Overconfident framing — "typically covers" → "covers".
  3. Overeager AI — volunteers unsolicited claim-filing advice.
  4. Estimate vs quote confusion — user thinks it's binding.
  5. Banned phrases — "guaranteed returns", "100%".
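Behaviour risks are the easiest to gate mechanically. A minimal banned-phrase check (the phrase list here is an assumption; a real compliance list is longer and locale-specific):

```python
# Assumed phrase list for illustration only.
BANNED = ("guaranteed returns", "100%", "you should take")

def violations(draft: str) -> list[str]:
    """Return every banned phrase the draft contains, case-insensitively."""
    low = draft.lower()
    return [p for p in BANNED if p in low]
```

A non-empty result means the draft is blocked or rewritten before it reaches the user, which also catches advice leaking ("you should take…") as a side effect.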

User-classification risks

  1. Misclassifying — "freelance dev" read as "IT company, 20 employees".
  2. Jurisdiction mixing — German rules applied to a French user.
  3. Translation error — FR "franchise" (deductible) ≠ EN "franchise" (business licence).
  4. Bias — different answer based on a name.
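Jurisdiction mixing in particular can be made fail-loud rather than fail-silent: look up rules strictly by the user's country and raise instead of defaulting across borders. A sketch, assuming a per-user country field (the rule table is illustrative):

```python
# Illustrative rule table; a real system would hold structured rule sets.
RULES = {
    "FR": "French deductible rules (Code des Assurances)",
    "DE": "German deductible rules (VVG)",
}

def rules_for(country: str) -> str:
    """Fail loudly instead of silently applying another jurisdiction's rules."""
    if country not in RULES:
        raise ValueError(f"no rule set for jurisdiction {country!r}")
    return RULES[country]
```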

System risks

  1. Prompt injection — user manipulates the model.
  2. Data leakage — PII echoed back or written to logs.
  3. Sensitive data in logs — health info in dashboards.
  4. No audit trail — dispute months later, no record.
  5. Model drift — provider updates, outputs change silently.
  6. Load failure — campaign day, everything breaks.
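Two of the system risks, data leakage and the missing audit trail, meet in the logging layer: record every exchange, but redact PII before it is written. A sketch of one audit record (function name and fields are assumptions; the email regex is deliberately simplified):

```python
import hashlib
import json
import re
import time

def audit_record(user_id: str, prompt: str, answer: str) -> str:
    """Timestamped audit entry: hash the user id, redact obvious PII,
    so a dispute months later has a record without logging raw data."""
    def redact(text: str) -> str:
        # Simplified: real redaction also covers names, IBANs, health terms.
        return re.sub(r"[\w.+-]+@[\w-]+\.\w+", "[email]", text)
    return json.dumps({
        "ts": time.time(),
        "user": hashlib.sha256(user_id.encode()).hexdigest()[:16],
        "prompt": redact(prompt),
        "answer": redact(answer),
    })
```

Hashing the user id keeps records linkable for a dispute without putting the identity itself in the dashboard.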

Five patterns cover most of this → next page.