As AI becomes more deeply embedded in the insurance industry, the question is no longer "Can AI do this?" but rather "Can we trust it when it does?" That shift in thinking is why explainable AI (XAI) is becoming a key priority across underwriting, claims, pricing, and customer interaction workflows.
This need for trust was the central theme of a recent roundtable we hosted with Swiss Re, Sompo, Swiss InsurTech Hub, and CAPCO. Participants from across the industry emphasized that AI systems must do more than perform well. They need to be transparent, able to show their logic, and smart enough to ask for human help when they’re not confident.
Why Confidence Scoring and Transparency Matter
In regulated industries like insurance, low-confidence outputs can introduce real risk. CAPCO highlighted how, during KYC processes, mismatches in ownership data or ambiguous name variations can be automatically flagged for human review. This kind of system doesn’t just make decisions; it also knows when to pause and escalate.
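As a rough illustration of that pattern, the sketch below auto-clears a check only when the model is confident and otherwise escalates it to an analyst. The KycCheckResult record, field names, and threshold are invented for the example and don't reflect any particular vendor's API.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # illustrative cut-off; tune to your risk appetite


@dataclass
class KycCheckResult:
    entity_id: str
    match_score: float   # model's confidence that ownership records match
    rationale: str       # short explanation of what was compared


def triage(result: KycCheckResult) -> str:
    """Decide whether a KYC check can be auto-cleared or needs a human."""
    if result.match_score >= CONFIDENCE_THRESHOLD:
        return "auto_clear"
    # Low confidence: pause and escalate instead of guessing.
    return "escalate_to_analyst"


result = KycCheckResult("entity-042", match_score=0.62,
                        rationale="Name variant 'J. Smith AG' vs 'Smith J. AG'")
print(triage(result))  # -> escalate_to_analyst
```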
That principle matters more than ever. A 2024 McKinsey survey found that 40% of AI leaders cited explainability as a top concern in deploying generative AI.
Audit trails are another non-negotiable. By logging the data used, decisions made, and human interventions along the way, AI systems provide the kind of traceability regulators and internal risk teams increasingly expect.
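A minimal sketch of what one such audit record might capture, assuming a simple JSON-lines log; the field names and values are illustrative only:

```python
import json
from datetime import datetime, timezone


def log_decision(case_id, inputs, model_version, decision, confidence, human_review=None):
    """Append one traceable record: what data was used, what was decided, and by whom."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "inputs": inputs,                 # data the model saw
        "model_version": model_version,   # which model produced the output
        "decision": decision,
        "confidence": confidence,
        "human_review": human_review,     # None if fully automated
    }
    with open("audit_trail.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")


log_decision(
    case_id="claim-1187",
    inputs={"payout": 48_000, "policy_id": "P-3391"},
    model_version="claims-triage-v2.3",
    decision="flag_for_review",
    confidence=0.58,
    human_review={"reviewer": "j.doe", "outcome": "approved"},
)
```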
Human-in-the-Loop as Standard, Not Optional
Participants agreed: in insurance, the human-in-the-loop (HITL) model isn’t a best practice—it’s a baseline requirement. AI should assist experts, not replace them. Systems need to flag uncertainty and route edge cases to humans who can make judgment calls, particularly when outcomes impact customers, finances, or compliance.
This mirrors Starmind’s approach: when confidence is low, queries are routed directly to relevant experts. It’s a design choice that keeps humans at the heart of decision-making while still benefiting from the speed and scale of AI.
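The routing pattern itself is straightforward. The sketch below is a generic illustration with placeholder topics, addresses, and a made-up threshold, not Starmind's actual implementation:

```python
# Illustrative routing table: topic -> internal experts (addresses are placeholders).
EXPERTS = {
    "underwriting": ["underwriter.lead@example.com"],
    "claims": ["claims.specialist@example.com"],
}


def answer_or_route(query_topic: str, answer: str, confidence: float,
                    threshold: float = 0.75):
    """Return the AI answer when confident; otherwise hand the query to a person."""
    if confidence >= threshold:
        return {"status": "answered", "answer": answer}
    experts = EXPERTS.get(query_topic, ["knowledge.team@example.com"])
    return {"status": "routed", "routed_to": experts,
            "note": "Low confidence; awaiting expert validation."}


print(answer_or_route("claims", "Likely covered under clause 4.2", confidence=0.55))
```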
XAI in Insurance: A Growing Focus
A review in the journal Risks highlighted some of the most promising ways insurers are starting to make AI more explainable. For example, in claims processing, rule-based techniques help clarify which specific criteria triggered a claim to be flagged (see the sketch after this list), such as:
- Unusually high payout amounts
- Mismatched policy numbers
- Inconsistencies in reported information
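A minimal sketch of such rule-based flagging, with illustrative thresholds and field names, might look like this: each triggered rule contributes a human-readable reason, so a reviewer can see exactly why the claim was held back.

```python
def flag_claim(claim: dict, policy_db: set, payout_limit: float = 50_000) -> list:
    """Return human-readable reasons a claim was flagged, one per triggered rule."""
    reasons = []
    if claim["payout"] > payout_limit:
        reasons.append(f"Payout {claim['payout']} exceeds limit {payout_limit}")
    if claim["policy_id"] not in policy_db:
        reasons.append(f"Policy number {claim['policy_id']} not found")
    if claim["reported_date"] < claim["incident_date"]:
        reasons.append("Report date precedes incident date (inconsistent information)")
    return reasons


claim = {"payout": 72_000, "policy_id": "P-0000",
         "incident_date": "2024-03-02", "reported_date": "2024-02-28"}
print(flag_claim(claim, policy_db={"P-3391", "P-1187"}))
# -> all three rules fire, each with an explanation a human can act on
```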
In underwriting, some models are designed to show which customer attributes most influenced a decision, such as income, claim history, or coverage level. This helps underwriters, auditors, and even customers understand the why behind a decision, whether it's a policy approval, rejection, or rate assignment.
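One simple way to surface those drivers is to report each attribute's contribution to the model's score. The sketch below uses a toy logistic model with made-up weights and standardized inputs purely for illustration; production systems typically rely on dedicated attribution methods rather than hand-coded weights.

```python
import math

# Toy underwriting model: weights are invented for the example.
WEIGHTS = {"income": 0.8, "prior_claims": -1.2, "coverage_level": -0.4}
BIAS = 0.3


def score_with_attributions(applicant: dict):
    """Score an application and show how much each attribute pushed the decision."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-logit))
    # Sort attributes by absolute influence so an underwriter sees the main drivers first.
    drivers = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return probability, drivers


prob, drivers = score_with_attributions(
    {"income": 1.5, "prior_claims": 2.0, "coverage_level": 1.0}  # standardized inputs
)
print(f"approval probability: {prob:.2f}")
for name, contribution in drivers:
    print(f"  {name}: {contribution:+.2f}")
```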
These approaches help break down the black box. Instead of just outputting a decision, the AI provides a rationale that a human can understand and act on. That’s essential for regulators, risk officers, and business leaders who need visibility into how decisions are made.
The study underscores the insurance sector’s need for trust and transparency, noting that XAI not only improves model interpretability but also plays a key role in communication, auditability, and customer-facing decisions.
It's Not Just Technical, It's Cultural
The biggest barriers to trustworthy AI aren’t always technical. Knowledge silos, unclear ownership of data, and limited incentives for cross-team collaboration all slow down adoption. That’s why many roundtable participants emphasized knowledge infrastructure: even the best AI systems underperform if they don’t have access to well-organized, human-verified institutional knowledge.
Explainability as Strategy
According to McKinsey, trust in AI "must be supported by strong pillars" including explainability, governance, information security, and human-centricity. Their recommendation: build cross-functional XAI teams with data scientists, compliance leads, domain experts, and designers all involved early.
When done right, explainability isn’t just risk mitigation. It’s a foundation for wider adoption, better user confidence, and smarter AI implementation.
Where Starmind Fits In
Explainability in AI isn’t just about surfacing logic. It’s about surfacing people. Starmind enables explainable AI by making the right human context available at the right moment.
At the center of this is the Starmind Knowledge Engine, a platform that continuously maps expertise and organizational language based on how people actually work. It builds a living model of who knows what and how that knowledge evolves. This provides the foundation for explainability by grounding AI decisions in real, human expertise.
With this engine in place, Starmind offers:
- Knowledge Suite: A structured environment to ask, answer, and reuse internal questions. This builds a verified knowledge base aligned with your company’s real workflows.
- Expert Finder: Connects users to the right person in real time to validate AI-generated outputs or provide missing context, especially when confidence scores are low.
- StarGPT: Combines internal Q&A and contextual insights with GPT-4 via Retrieval-Augmented Generation (RAG), generating responses grounded in trusted enterprise knowledge (the general RAG pattern is sketched after this list).
- API Integrations: Plug Starmind’s human intelligence layer into your copilots, assistants, and AI platforms to enrich search, validation, and decision support systems with real-time, explainable input.
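For readers curious about the RAG pattern referenced above, here is a deliberately simplified sketch: naive keyword retrieval over a toy knowledge base, followed by a prompt grounded in the retrieved, human-verified answers. It is illustrative only, not Starmind's implementation; a real system would use embeddings and a proper vector store.

```python
def retrieve(question: str, knowledge_base: list[dict], top_k: int = 3) -> list[dict]:
    """Naive keyword retrieval over verified internal Q&A (real systems use embeddings)."""
    terms = set(question.lower().split())
    scored = [(len(terms & set(doc["text"].lower().split())), doc) for doc in knowledge_base]
    return [doc for score, doc in sorted(scored, key=lambda s: s[0], reverse=True)[:top_k] if score]


def build_prompt(question: str, documents: list[dict]) -> str:
    """Ground the generation step in retrieved, human-verified answers."""
    context = "\n".join(f"- {d['text']} (source: {d['source']})" for d in documents)
    return (f"Answer using only the context below and cite sources.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")


knowledge_base = [
    {"text": "Claims above 50k require senior adjuster sign-off.", "source": "claims-faq-12"},
    {"text": "KYC mismatches are escalated to the compliance desk.", "source": "kyc-qa-07"},
]
prompt = build_prompt("Who signs off claims above 50k?",
                      retrieve("sign-off for claims above 50k", knowledge_base))
print(prompt)  # pass this grounded prompt to your LLM of choice
```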
By connecting human knowledge to automated systems, Starmind ensures that when AI makes a recommendation, your team understands why (and knows who to ask when it doesn’t). The result:
- Faster access to internal knowledge
- More accurate answers, fewer repeated questions
- Higher confidence in AI-generated content
- Resilience through better knowledge continuity
- Stronger collaboration across silos and regions
Trust in AI doesn’t emerge from a single dashboard or model update. It comes from thoughtful design, continuous validation, and making sure that when AI isn't sure, people are still in the loop.
Ready to Make Your AI Explainable?
If your organization is exploring AI in underwriting, claims, or compliance but struggling to ensure transparency, trust, and human oversight, Starmind can help. Let’s talk about how our Knowledge Engine and enterprise-ready tools can strengthen your AI strategy.