Plain series · page 3 / 7
03 — An app in insurance
The app architecture is the same as any other ChatGPT App. What changes is what you are allowed to say, and what the app must never be able to say by accident.
Your app lives under US rules
Insurance in the US is regulated state by state. Your app has to live under all of these at once:
- State producer licensing. In every US state, recommending a specific policy to a specific person is a licensed activity. Your app is not licensed. Design so it never crosses that line by accident.
- State DOI advertising and solicitation rules. Every state has rules about how insurance is advertised. Material misrepresentation of coverage is a deceptive-trade-practice violation in most of them; some cases are criminal.
- NAIC Model Bulletin on AI (2023). Adopted in about 25 state insurance departments as of 2026. Carriers and brokers must govern AI in consumer-facing interactions: documented policies, testing, monitoring, consumer redress.
- Gramm-Leach-Bliley Act (GLBA). Protects non-public personal information (NPI) in financial services, including insurance. Different framework than HIPAA; health information in an insurance context can trigger both.
- Record retention. Varies by state — typically 5 to 10 years for producer records. Your app's conversation logs may need to meet the same standard.
The risk that kills: false statements about coverage
The liability that matters is not a data leak. It is this:
The user asks "does this policy cover X?" Your app confidently says yes for an X that is not covered. The user relies on it, files a claim later, gets denied. Then their lawyer reads the transcript.
That single failure mode can produce, stacked:
- Deceptive-trade-practice claims under state UDAP statutes.
- Misrepresentation actions against the producer license behind the app, not against the app itself.
- E&O claims against the producing agency or carrier.
- State DOI complaints with regulatory investigation and fines.
- Bad-faith claim-handling exposure if the carrier later denies the underlying claim.
An LLM that is never wrong does not exist. The work is not making the app infallible. The work is architecting so this failure mode cannot happen at all.
Guardrails, starting here
Three things to internalize before you design the app:
- The app narrates. Your data owns the facts. The LLM never sources coverage terms, limits, or exclusions. Those come from your policy forms or product database and are quoted verbatim, with citations.
- Every coverage answer cites a source. If your tool cannot return a source (form number, product code, endorsement), the app should decline to answer — not improvise.
- Anything advice-shaped escalates to a licensed producer. "Should I buy this?" is not a question the app answers.
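One way to make the cite-or-decline rule hold is to enforce it structurally: the tool that answers coverage questions returns either a verbatim excerpt with its source identifier or an explicit decline, never free text for the LLM to embellish. A minimal sketch, assuming a hypothetical in-memory stand-in for your policy-forms database (the names `POLICY_FORMS`, `CoverageAnswer`, and `answer_coverage_question` are illustrative, not part of any SDK):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for your policy forms / product database.
# Keys are (product_code, topic); values are (verbatim_text, form_number).
POLICY_FORMS = {
    ("HO-3", "water backup"): (
        "Water backup coverage applies only if Endorsement HO 04 95 is attached.",
        "HO 04 95",
    ),
}

@dataclass
class CoverageAnswer:
    answered: bool
    text: str
    citation: Optional[str] = None  # form number / endorsement; set whenever answered

def answer_coverage_question(product_code: str, topic: str) -> CoverageAnswer:
    """Cite-or-decline: quote source material verbatim, or refuse to answer."""
    hit = POLICY_FORMS.get((product_code, topic))
    if hit is None:
        # No source document found -> decline rather than let the model improvise.
        return CoverageAnswer(
            answered=False,
            text="I can't confirm that from the policy forms. "
                 "A licensed producer can review this with you.",
        )
    text, form_number = hit
    return CoverageAnswer(answered=True, text=text, citation=form_number)
```

The LLM's role is narration around the returned `CoverageAnswer`; the coverage facts and the citation only ever come from the lookup, so a topic missing from your data produces a decline instead of a confident guess.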
Chapter 6 expands these into the five safe patterns that cover ~80% of the 20 AI risks listed in Chapter 5.
In plain terms
- An app can inform — describe, compare, explain — with citations to your own source material.
- An app cannot advise — recommend a specific product to a specific person — without the licensed producer in the loop.
That one line drives every other design decision.
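In code, the inform/advise line can become a routing rule: anything advice-shaped is escalated to a licensed producer before the model responds. A sketch under the assumption of a simple pattern screen — the patterns below are illustrative, and a production app would pair a trained classifier with something like this as a backstop:

```python
import re

# Hypothetical patterns that mark a request as advice-shaped.
ADVICE_PATTERNS = [
    r"\bshould i (buy|get|choose|switch)\b",
    r"\bwhich (policy|plan|coverage) is (best|right) for me\b",
    r"\brecommend\b",
]

def route(user_message: str) -> str:
    """Return 'escalate' for advice-shaped questions, 'inform' otherwise."""
    lowered = user_message.lower()
    if any(re.search(p, lowered) for p in ADVICE_PATTERNS):
        return "escalate"   # hand off to a licensed producer
    return "inform"         # describe / compare / explain, with citations
```

The design choice is that escalation happens before generation: an advice-shaped message never reaches the answering path at all, so there is no transcript in which the app recommended a product.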
Before you build anything, spend 30 minutes with whoever owns compliance at your carrier, brokerage, or MGA. Bring the "risk that kills" framing. Half an hour now = a month of rework saved later.