Life insurance has always been careful with new technology. That caution has helped protect customers and preserve trust. But 2025 is not business as usual. Artificial intelligence is already embedded in underwriting, claims, service, and risk management.
For executives, the challenge is clear. AI is shaping the future of the sector, but its impact so far is uneven. Adoption is broad, but customers are not yet feeling the benefits. The firms that move beyond pilots and take scale, trust, and learning seriously will lead. Those that hesitate risk falling behind.
This article sets out where adoption stands, what AI is doing well, what holds it back, and the practical steps executives and boards should take now.
Where AI adoption stands in 2025
- 69% of insurers say they have deployed AI in some form, but only 36% of customers report better digital experiences.
- 62% of executives believe the main advantage is in high-volume tasks like claims and customer service.
- Only 50% of customers trust insurers to provide accurate, personalised quotes.
- A Conning survey in 2025 found that 90% of insurers are exploring generative AI, with 55% already in early or full adoption.
- A BCG study showed only 7% of insurers have scaled AI across the enterprise. Most remain stuck in pilots.
- The NAIC found 84% of health insurers use AI or ML, and 92% have governance principles in place. Life carriers will need to reach a similar level of discipline.
- Swiss Re reported that AI models in life underwriting now identify non-disclosed smokers with over 95% accuracy, allowing targeted testing on 1 in 15 applicants instead of 1 in 3.
- In Morgan Stanley’s AI Adopter survey, insurance firms increased their participation from 48% to 71% in just six months, with early adopters seeing stronger returns.
- PwC’s Insurance Banana Skins 2025 ranked AI as the second most pressing risk, just behind cyber.
The picture is consistent. AI is no longer optional, but adoption is shallow. Customers are not yet convinced, and the real advantage is only beginning to emerge.
Where AI is already delivering
AI is producing meaningful results in life insurance today. These results may not yet be at full scale, but they show what is possible.
Underwriting
This is where AI has made the clearest impact. Carriers are moving from rule-based decision trees toward predictive models that can combine multiple data sources. Electronic health records, prescription histories, credit-related datasets, and wearables are now part of the underwriting conversation.
For executives, the Swiss Re case study is the most practical signal. Their models spot non-disclosed smokers with more than 95% accuracy. Instead of testing a third of applicants, carriers need to test only one in fifteen. The result is lower costs, a faster process for the customer, and better risk selection.
The broader shift is about speed and precision. Faster triage reduces cycle times. More precise risk scoring means less leakage and more consistent pricing. This is the kind of improvement that can move the customer trust dial if communicated well.
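To make the economics concrete, here is a rough back-of-the-envelope sketch. The testing ratios come from the Swiss Re figures cited above; the applicant volume and per-test cost are purely illustrative assumptions, not figures from the case study.

```python
# Illustrative arithmetic only: the 1-in-3 vs 1-in-15 ratios come from the
# Swiss Re figures above; applicant volume and per-test cost are assumptions.
applicants = 100_000          # assumed annual applicant volume
cost_per_test = 50.0          # assumed cost of one lab/fluid test, in USD

blanket_tests = applicants / 3     # legacy approach: test roughly 1 in 3
targeted_tests = applicants / 15   # model-driven approach: test roughly 1 in 15

savings = (blanket_tests - targeted_tests) * cost_per_test
print(f"Tests avoided per year: {blanket_tests - targeted_tests:,.0f}")
print(f"Indicative annual saving: ${savings:,.0f}")
```

Even with conservative assumptions, the saving compounds across every applicant cohort, before counting the faster decisions and better risk selection the article describes.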
Claims
AI is changing claims from a reactive process to one that is both faster and more targeted. Triage systems can score claims for complexity and fraud risk as soon as they arrive. Routine cases are fast-tracked, complex ones are escalated, and fraud indicators are flagged for review.
The impact is twofold. First, operational efficiency — adjusters spend more time where human judgment is needed. Second, customer experience — customers see faster resolution, which matters in a stressful moment.
Some carriers are experimenting with proactive claims, where early signals of a potential claim are flagged, and customers are contacted before a formal claim is filed. If adopted safely, this could change the dynamic of claims entirely, from reaction to prevention.
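The routing logic behind triage can be illustrated in a few lines. The sketch below is conceptual only: the scoring functions are toy stand-ins for trained models, and the fields and thresholds are assumptions, not any carrier's actual rules.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float
    policy_age_days: int
    prior_claims: int

def complexity_score(claim: Claim) -> float:
    """Toy stand-in for a trained model: larger and earlier claims score higher."""
    score = min(claim.amount / 100_000, 1.0)
    if claim.policy_age_days < 730:   # assumed contestability window
        score += 0.3
    return min(score, 1.0)

def fraud_score(claim: Claim) -> float:
    """Toy stand-in for a fraud model: repeat claims and very early claims score higher."""
    early_penalty = 0.4 if claim.policy_age_days < 365 else 0.0
    return min(0.2 * claim.prior_claims + early_penalty, 1.0)

def triage(claim: Claim) -> str:
    """Route a claim using illustrative thresholds."""
    if fraud_score(claim) > 0.6:
        return "flag_for_investigation"
    if complexity_score(claim) > 0.5:
        return "escalate_to_adjuster"
    return "fast_track"

# Example: a small, routine claim on a mature policy is fast-tracked.
print(triage(Claim(amount=20_000, policy_age_days=3_000, prior_claims=0)))
```

In production, the scores would come from models trained on historical claims, but the principle is the same: routine cases flow straight through, and human judgment is reserved for the cases that need it.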
Customer engagement
Digital service tools are already common. Chatbots, virtual assistants, and AI-driven helpdesks can answer routine queries. But the research is clear: customers still want people when stakes are high.
While 41% of executives report using AI for customer support, 59% of customers expect to speak to a person in a crisis. Only 10% are comfortable being left with a chatbot when a human is not available. The lesson is simple: use AI to make service faster and more accurate, but never remove the human option when it matters most.
Risk management and fraud detection
Fraud has long been a cost pressure. AI is already scanning datasets to flag suspicious behaviour patterns that manual processes miss. Mortality forecasting is also improving, allowing carriers to manage capital more effectively. These improvements are often invisible to customers but vital for balance sheet resilience.
What is holding the industry back
The benefits are real, yet most firms are stuck at pilot stage. The reasons are not primarily technical. They are leadership and organisational challenges.
Governance and oversight
AI is harder to govern than earlier technologies. Traditional model risk frameworks are not enough when models learn from dynamic datasets or when agentic AI tools act with more autonomy. Regulations vary across regions, which complicates global carriers’ ability to standardise.
Health insurers are ahead, with 92% already reporting formal governance principles. Life insurers will need to catch up fast. Regulators are moving, and boards will be expected to show clear oversight structures.
Data quality and infrastructure
Without strong data foundations, scale is impossible. Many carriers still rely on siloed legacy systems. Data is inconsistent, incomplete, or inaccessible. Executives know this, which is why more than half say they are investing in data quality improvements over the next three years. But the problem is not only technology; it is also ownership. Without clear accountability for data quality, projects stall or models deliver unreliable outcomes.
Talent and fluency
The AI talent gap is well known, but in life insurance it takes on a sharper edge. Underwriters, claims handlers, and distribution leaders are expected to use AI tools safely, yet only 2% of insurers report that nearly all staff are AI fluent. Upskilling is not a nice-to-have. It is a necessity for adoption to stick.
Change management
Executives admit this is the weakest muscle. Leading people through change is harder than buying software. If employees are unclear about why AI is being used, or if they do not trust the outputs, they will revert to old ways of working. Without visible leadership and clear communication, the technology alone will not deliver.
Four imperatives for leaders
The sector does not need more pilots. It needs scale, trust, and clear direction. For executives, four imperatives stand out.
1. Fix data quality and access
Data is the foundation. Leaders should prioritise:
- Establishing ownership for key data sources.
- Setting quality standards and service levels.
- Investing in integration to make data accessible across departments.
- Building privacy and compliance into processes from the start.
Without this, no amount of investment in AI will matter.
2. Build clear governance
Governance is not a box-ticking exercise. It is the structure that will decide whether regulators, boards, and customers trust how AI is used.
Practical steps include:
- Forming AI councils that include business, risk, finance, and technology leaders.
- Creating model inventories and clear approval cadences.
- Requiring audit trails and incident playbooks.
- Running scenario rehearsals to prepare for regulatory questions.
Health insurers already show what good looks like. Life carriers will be expected to meet the same standard.
3. Invest in fluency across roles
Executives should make fluency a core part of talent strategy. That means:
- Role-specific training for underwriters, claims handlers, and distribution staff.
- Teaching leaders how to ask the right questions about fairness, drift, and reliability.
- Building a culture where staff understand that AI is a tool, not a threat.
The goal is not to make everyone a data scientist. It is to ensure that everyone who uses AI tools can use them responsibly and confidently.
4. Tie adoption to outcomes customers can feel
Pilots with no link to business outcomes are wasted effort. Leaders should demand that every AI initiative ties directly to measurable goals. That could mean:
- Faster cycle times in underwriting or claims.
- Improved accuracy in risk selection.
- Reduced fraud leakage.
- Higher customer satisfaction scores.
The most powerful outcome will be customer trust. If customers feel the process is fairer, simpler, and faster, they will notice. If not, the investment will not deliver.
The role of the board
Boards cannot treat AI as a back-office project. It is a strategic shift that affects growth, risk, culture, and customer trust. Oversight must go beyond approving budgets or monitoring compliance.
Strategic direction
Boards should ask whether AI is being applied in ways that align with the company's long-term goals. Are we using it only to trim costs, or are we also using it to improve customer trust, distribution strength, and underwriting quality? Strategic alignment is what separates firms that treat AI as a cost-saving tool from those that use it as a source of competitive edge.
Risk and governance
AI introduces new forms of risk — bias, explainability gaps, and operational failures when models drift. Boards need clear visibility of how management is governing these risks. That means understanding the firm’s model inventory, oversight cadence, and escalation paths. The board does not need to inspect every model, but it must be able to show regulators that it has asked the right questions and tested management’s answers.
Talent and culture
AI is only as effective as the people who use it. Boards should ask whether investment in training is real and sustained. Do underwriters and claims teams understand how to use the tools safely? Are leaders equipping people with the confidence to challenge outputs when needed? Culture matters here — a workforce that feels threatened by AI will resist adoption, while one that sees it as support will move faster.
Customer trust
Boards must insist on customer metrics, not just efficiency metrics. Faster claims handling or more accurate underwriting means little if customers believe the process is unfair or opaque. Trust indicators should be tracked alongside financial outcomes.
Measuring progress
Finally, boards should expect clarity on progress. How many projects have moved beyond pilots? How are we learning from failures? Are we scaling what works across regions and products? The reality is that only 7% of insurers have achieved scale. Boards must keep asking why and ensure their firms are not left behind.
Conclusion
AI is already changing life insurance. Adoption is widespread, but scale is still rare, and customer trust has not yet caught up with the pace of change. Regulators are paying close attention, and only a handful of firms have managed to move from pilots to production at scale. For executives, the priorities are clear: strengthen data foundations, establish governance that works in practice, build fluency across teams, and focus on outcomes that customers can see and feel.
This moment is not about chasing the newest tool. It is about building trust and creating long-term advantage. The leaders who approach AI with focus and discipline will define the standard for the industry in the years ahead, while those who hesitate will be left explaining why their pilots never turned into progress. At Eliot Partnership, we see how leadership choices shape these outcomes every day. AI is not a future concern; it is today’s leadership test.