If you’ve spent even a few years in claims or fraud investigation, you know one thing for certain: fraud evolves with technology. Today, the newest tool in the fraudster’s kit is generative AI.
Across multiple markets and product lines, internal investigations teams and fraud units are reporting a striking trend. An estimated 20–30% of insurance claims now include some form of AI-altered media, according to a Shift Technology report: manipulated images, fabricated repair invoices, synthetic documents, or even AI-generated videos meant to support fraudulent claims.
For insurers, this isn’t just another fraud tactic. Photorealistic images represent a structural shift in how fraud is executed. The tools required to generate convincing fake evidence are now cheap, accessible, and improving at a rapid pace.
But the good news is that the same technological wave enabling fraud can also help insurers fight back. Here is a closer look at what’s happening—and how the industry is responding.
Traditionally, P&C claims fraud relied on staged accidents, exaggerated damages, or forged paperwork. Those tactics still exist, but AI has dramatically lowered the barrier to creating convincing supporting evidence.
“42% of U.S. insurance carriers report AI and digital tools being exploited for fraud. Nearly half flag claims tied to AI-generated documents.”
— TrueScreen, March 2026
Today, a large percentage of auto and property claims start with photo estimating apps or digital FNOL submissions. Fraudsters can now use generative AI tools to:
Modify accident photos to exaggerate damage
Create fake repair estimates or invoices
Alter timestamps or location metadata
Produce entirely fabricated images or videos of incidents
In everyday life, we are bombarded with images and videos that look real but are AI-generated. It takes time to spot the signs that they are fake, and often they slip past us entirely; when it comes to fraudulent claims, the stakes are far higher.
Consider how AI deepfakes reshaped insurance fraud in 2025, a year in which scammers became alarmingly good at faking everything from voices to crash scenes.
According to the Coalition Against Insurance Fraud, non-health insurance fraud costs the U.S. more than $40 billion each year, affecting auto, property, and other P&C lines. That cost ultimately flows back to consumers, adding an estimated $400 to $700 annually to the average household’s insurance premiums.
In one scheme uncovered in April 2025, fraudsters sourced photos of salvaged vehicles from online auction sites such as Copart, then used generative AI to insert real license plates and fabricate collision damage for auto claims. Zurich Insurance's SIU ran a forensic analysis that revealed metadata timestamps predating the claimed accidents by years, along with pixel-level anomalies from AI editing. The claims were denied outright, exposing a fraud ring before any money moved.
Scammers in 2025 bombarded claims hotlines with deepfake audio impersonating policyholders or witnesses on auto and liability claims, using stolen SSNs and cloned voices (sourced from social media clips) to pass knowledge-based authentication and request payout redirects. West Coast carriers deployed liveness detection software that flagged unnatural vocal cadence and spectral patterns unique to AI synthesis. No disbursements occurred; the attacks were blocked in real time. TruthScan detailed this tactic as a growing P&C threat, showing that voice biometrics with anomaly scoring can protect high-volume phone verification without slowing legitimate claims.
In one of the first documented P&C claim slip-ups, a Midwest property carrier approved an $85K homeowners claim based on a remote video walkthrough showing "severe storm damage" to a roof and interior: water stains, missing shingles, the full visual package. A post-payout audit using advanced frame analysis uncovered it as a deepfake that stitched real footage with AI-generated damage overlays; the audit caught the inconsistent lighting typical of the consumer tools that proliferated in 2025. Verisk's ClaimSearch trends flagged similar image and video fraud patterns that year, highlighting why initial remote inspections need embedded forensics from day one.
These examples trace the 2025 learning curve: early detection gaps let a handful of fraudulent claims through, but by mid-year, integrated AI detection had reversed the trend, reportedly catching 95% of attempts.
The real shift in fraud detection isn’t just better tools; it’s where and how those tools are being used.
A few years ago, fraud detection largely sat downstream. Claims would be processed, and only suspicious ones would make their way to Special Investigation Units (SIU). That model doesn’t hold anymore. When AI can generate or alter claim evidence in seconds, detection has to happen at the point of entry, not after the fact.
At the core platform level, especially at FNOL, insurers are now running real-time verification on every piece of submitted evidence.
Rather than operating as separate investigative tools, fraud detection capabilities are increasingly embedded directly within claims management systems through APIs. When policyholders upload images or documents through a mobile claims portal, those files can automatically pass through media analysis, metadata verification, and anomaly detection engines in seconds.
AI-based image forensics models (typically CNNs and vision transformers) don’t assess what the image shows; they detect how it was constructed, flagging inconsistencies in pixel structure, lighting gradients, and texture patterns that often result from generative AI edits.
In parallel, metadata parsing engines extract and validate EXIF data. They identify whether a file has been edited, detect mismatches between device signatures and claim narratives, and flag abnormal timestamp sequences. Techniques like error level analysis (ELA) and noise mapping are used to isolate localized edits, especially when only part of an image has been manipulated.
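The timestamp and software checks described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual rule set: the tag names follow common EXIF conventions, the `exif` dict is assumed to be already extracted (e.g. by a tool like exiftool), and the list of editing tools is illustrative.

```python
from datetime import date, datetime

def metadata_flags(exif, claim_date):
    """Flag EXIF anomalies against the claim narrative (illustrative checks)."""
    flags = []
    fmt = "%Y:%m:%d %H:%M:%S"  # standard EXIF timestamp layout
    original = exif.get("DateTimeOriginal")
    modified = exif.get("ModifyDate")
    if original and datetime.strptime(original, fmt).date() < claim_date:
        flags.append("capture_predates_incident")   # photo older than the claim
    if original and modified and modified != original:
        flags.append("edited_after_capture")        # file touched post-capture
    software = (exif.get("Software") or "").lower()
    if any(tool in software for tool in ("photoshop", "gimp", "firefly")):
        flags.append("editing_software_signature")  # known editor left its mark
    return flags

# A file whose timestamps predate the claimed loss and show later edits:
suspect = metadata_flags(
    {"DateTimeOriginal": "2023:05:01 09:12:00",
     "ModifyDate": "2025:04:02 18:40:00",
     "Software": "Adobe Photoshop 26.0"},
    date(2025, 4, 1),
)
```

A real pipeline would layer many more signals on top (GPS consistency, device-model mismatches, ELA maps), but even this skeleton shows why metadata alone can sink a claim, as in the salvage-auction case above.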
These signals are then combined using machine learning fraud scoring models embedded directly within the claims system. Instead of static rules, these models (often gradient boosting or deep learning classifiers) evaluate multiple weak signals together. A slightly altered image, combined with metadata inconsistencies and estimate deviations, can elevate the overall fraud risk score.
For the adjuster, this appears as a simple risk indicator within their workflow, but underneath, it’s a multi-layered, real-time evaluation happening at intake.
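The multi-signal scoring idea can be sketched with a simple logistic combination. The signal names and weights below are hypothetical (a production system would learn them, e.g. via gradient boosting, rather than hand-set them); the point is only that several individually weak signals compound into a high score.

```python
import math

def fraud_risk_score(signals, weights, bias=-3.0):
    """Combine weak fraud signals into a 0-1 risk score (logistic sketch)."""
    z = bias + sum(weights[name] * value for name, value in signals.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hand-set illustrative weights; a real model would be trained on labeled claims.
WEIGHTS = {
    "image_edit_score": 2.5,    # output of an image-forensics model, 0-1
    "metadata_mismatch": 1.8,   # 1 if EXIF/device signature conflicts
    "estimate_deviation": 1.2,  # normalized deviation from typical repair cost
}

clean = fraud_risk_score(
    {"image_edit_score": 0.1, "metadata_mismatch": 0, "estimate_deviation": 0.2},
    WEIGHTS,
)
suspect = fraud_risk_score(
    {"image_edit_score": 0.7, "metadata_mismatch": 1, "estimate_deviation": 0.9},
    WEIGHTS,
)
```

Here `clean` lands well below any review threshold while `suspect` lands well above it, even though no single signal on its own is conclusive. That is the practical argument for scoring signals jointly rather than with per-signal rules.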
Beyond the core system, insurers are deploying more advanced detection capabilities designed to identify patterns that only emerge at scale.
Technologies like perceptual hashing (pHash) and image embedding models allow insurers to detect near-duplicate images—even when they’ve been resized or subtly altered using AI. This is critical for identifying reused or slightly modified images across multiple claims.
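A toy version of the near-duplicate idea, using a difference hash rather than a full DCT-based pHash: assume the image has already been downscaled to a small grayscale grid (libraries such as ImageHash handle that resizing step in practice).

```python
def dhash(pixels):
    """Difference hash: one bit per left-vs-right neighbor comparison."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left < right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes (lower = more similar)."""
    return bin(a ^ b).count("1")

# A uniform brightness shift, a trivial manipulation, preserves the neighbor
# ordering and therefore leaves the hash unchanged:
img      = [[52, 80, 61, 90], [33, 70, 68, 20], [95, 40, 55, 75]]
brighter = [[p + 40 for p in row] for row in img]
flipped  = [list(reversed(row)) for row in img]  # a genuinely different image
```

Because perceptually similar images hash to nearby values, an insurer can index every submitted photo's hash and cheaply query for reused evidence across thousands of claims, which is exactly where salvage-photo recycling schemes get caught.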
At the same time, graph analytics engines map relationships across claims, linking shared devices, vendors, and submission behaviors. This is how insurers are increasingly uncovering coordinated fraud by identifying networks, not just individual anomalies.
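The linking step behind that network view can be illustrated with a union-find over shared attributes. Claim IDs and attribute values below are made up; real graph engines also weight edges and score whole communities rather than just grouping them.

```python
from collections import defaultdict

def fraud_rings(claims):
    """Group claims that share any device, vendor, or payee attribute.

    `claims` maps claim_id -> set of attribute strings. Uses union-find:
    two claims sharing any attribute end up in the same component.
    """
    parent = {cid: cid for cid in claims}

    def find(x):                      # path-halving find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    owner = {}                        # attribute value -> first claim seen
    for cid, attrs in claims.items():
        for attr in attrs:
            if attr in owner:
                parent[find(cid)] = find(owner[attr])  # union the two claims
            else:
                owner[attr] = cid

    rings = defaultdict(set)
    for cid in claims:
        rings[find(cid)].add(cid)
    return [ring for ring in rings.values() if len(ring) > 1]

claims = {
    "CLM-101": {"device:abc123", "shop:FastFix"},
    "CLM-102": {"device:abc123"},      # same phone as CLM-101
    "CLM-103": {"shop:OtherShop"},     # unrelated claim
}
```

Here `fraud_rings(claims)` links CLM-101 and CLM-102 through the shared device while leaving CLM-103 alone, which is the core of how coordinated rings surface as networks rather than isolated anomalies.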
Document AI models add another layer, analyzing invoices and repair estimates for structural and linguistic patterns typical of AI-generated or templated documents.
All of these capabilities are connected through API-driven integrations. Files are automatically routed to external detection engines, and outputs—fraud scores, anomaly flags, similarity matches—are returned in real time and embedded back into the claims workflow.
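In code, that fan-out-and-merge pattern might look like the sketch below. The engine names are stubs standing in for external HTTP services, and the max-score merge policy is one simple choice among several; no specific vendor API is implied.

```python
def route_evidence(file_meta, engines):
    """Send one submitted file to every detection engine and merge results.

    `engines` maps a name to any callable returning
    {"score": float, "flags": [str, ...]}; in production each callable
    would wrap an HTTP call to an external detection service.
    """
    merged = {"file": file_meta["name"], "flags": [], "scores": {}}
    for name, engine in engines.items():
        out = engine(file_meta)
        merged["scores"][name] = out.get("score", 0.0)
        merged["flags"].extend(out.get("flags", []))
    # Overall risk: the most alarmed engine wins (a simple merge policy).
    merged["risk"] = max(merged["scores"].values(), default=0.0)
    return merged

# Stub engines standing in for real services:
stubs = {
    "image_forensics": lambda f: {"score": 0.82, "flags": ["ai_edit_suspected"]},
    "metadata_check":  lambda f: {"score": 0.40, "flags": []},
}
report = route_evidence({"name": "photo_001.jpg"}, stubs)
```

The value of this shape is that the claims system only ever sees the merged report, so new detection engines can be added behind the API boundary without touching the adjuster-facing workflow.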
What makes this effective is that detection systems are now trained on generative AI outputs themselves. Using adversarial training, models learn the statistical fingerprints left behind by synthetic media and continuously evolve as new manipulation techniques emerge.
This is the shift: fraud may be powered by AI but so is detection. And in modern P&C claims environments, that battle is already happening in real time, at scale, and directly inside the claims process.