Deepfake Threat Index
The quarterly measure of how dangerous the environment is for your digital identity right now. Scored 0–100 across confirmed, sourced incidents.
The deepfake conversation is noisy and fragmented. People feel the threat but cannot quantify it. The Deepfake Threat Index translates a fast-moving, technically complex problem into a single, trackable quarterly score. Not a collection of scary headlines. A benchmark: the kind of number journalists cite, insurers reference, and individuals use to understand their risk.
Financial theft, political manipulation, and national security exposure score Critical (3). Personal harm and reputational damage score High (2). Moderate harm with no lasting measurable effect scores Moderate (1).
Global incidents with millions exposed score 3. National mainstream coverage scores 2. Regional exposure scores 1. Contained incidents with small group exposure score 0.
Viral content trending across platforms before takedown scores 2. Regionally noticed with some press scores 1. Contained incidents with slow spread score 0. Speed matters because spread precedes damage.
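The rubric above can be read as three scored dimensions per incident. As a minimal sketch, the snippet below encodes them as lookup tables and sums them; the dimension names (severity, reach, virality) and the simple-sum aggregation are illustrative assumptions, since the brief does not publish how per-incident scores roll up into the 0–100 index.

```python
# Hypothetical encoding of the DFTI per-incident rubric described above.
# Dimension names and the sum aggregation are assumptions for illustration;
# the published brief does not specify its full 0-100 aggregation formula.

SEVERITY = {
    "financial_theft": 3, "political_manipulation": 3, "national_security": 3,
    "personal_harm": 2, "reputational_damage": 2,
    "moderate": 1,
}

REACH = {
    "global_millions": 3, "national_mainstream": 2,
    "regional": 1, "contained": 0,
}

VIRALITY = {
    "viral_cross_platform": 2, "regional_press": 1, "contained_slow": 0,
}

def incident_score(severity: str, reach: str, virality: str) -> int:
    """Sum the three rubric dimensions for one incident (max 3 + 3 + 2 = 8)."""
    return SEVERITY[severity] + REACH[reach] + VIRALITY[virality]

# Example: a global financial-theft deepfake that went viral before takedown.
print(incident_score("financial_theft", "global_millions", "viral_cross_platform"))  # 8
```

A contained, slow-spreading incident with only moderate harm would bottom out at 1 under the same assumed sum, which matches the intuition that spread precedes damage.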
Four confirmed incidents. All sourced to named outlets including CNN, NBC News, CNBC, Reuters, and the California Attorney General's official press release. No unverified reports. No speculation.
xAI's Grok generated 4.4 million images in 9 days, 1.8 million of them sexualized depictions of real women. Children were targeted. California's AG launched a formal investigation. 35 state AGs sent a coordinated demand letter. Multiple countries banned the platform. Three lawsuits were filed, including a class action on behalf of minors.
A deepfake of Nvidia CEO Jensen Huang attracted 95,000 simultaneous viewers during Nvidia's real keynote, which drew only 12,000. YouTube's algorithm surfaced the fraudulent stream above the verified broadcast. Viewers were directed to scan a QR code and send cryptocurrency to a scam wallet.
The National Republican Senatorial Committee posted an 85-second AI-generated video of Texas Senate candidate James Talarico. UC Berkeley forensics professor Hany Farid examined it: "hyper-realistic and I don't think most people would immediately know it is fake." No federal law prohibits it. At least 5 confirmed deepfake incidents across the 2026 midterms.
An attacker used AI-generated video to impersonate a trusted business partner during live calls with UXLINK. The deepfake was convincing enough to gain system-level access. The attacker then took control of a smart contract, minted billions of tokens, and drained the treasury. Forensic analysis confirmed the deepfake attack vector.
The complete DFTI Q1 2026 Intelligence Brief includes deep dives on each incident, the full scoring breakdown, quarterly trend analysis, a benchmarking data table, and a practical toolkit for enterprises and individuals. Free. No strings.
Q1 2026 | Free Intelligence Brief
Published quarterly by Social Impressions Inc.