Emerging Threat Alert
Palo Alto Networks Unit 42 researchers have documented a new attack technique where seemingly benign webpages use client-side API calls to legitimate LLM services to generate malicious JavaScript in real-time. This leaves behind no static, detectable payload [1].
The phishing landscape has fundamentally changed. Where attackers once had to carefully craft malicious pages and host them on suspicious domains, they can now weaponize generative AI to create convincing phishing content on the fly, assembled piece by piece in your browser using legitimate AI services.
According to threat intelligence analysis, 82.6% of phishing emails now use some form of AI-generated content, with over 90% of polymorphic attacks leveraging large language models [2]. The result: phishing attacks that are nearly impossible to detect using traditional signature-based security tools.
This article examines how these LLM-powered runtime assembly attacks work, why they're so effective at evading detection, and what your business can do to defend against this new generation of AI-powered threats.
How LLM-Powered Runtime Assembly Attacks Work
Traditional phishing attacks require pre-built malicious content. Attackers create a fake login page, host it somewhere, and hope it evades security scanners long enough to capture credentials. Security tools can detect these pages by analyzing their code, checking URLs against blocklists, or identifying known malicious patterns.
Runtime assembly attacks flip this model entirely. Instead of delivering pre-built malicious content, attackers deliver what appears to be a harmless webpage containing hidden prompts. When the victim's browser loads the page, it makes API calls to legitimate AI services that generate the malicious code in real-time [1].
The Attack Workflow (Unit 42 PoC)
Palo Alto Networks Unit 42 researchers documented a proof-of-concept attack that proceeds in four stages:
1. The victim opens a webpage that appears benign: it contains no malicious code, only embedded prompts.
2. On page load, client-side script sends those prompts to a legitimate LLM API.
3. The LLM returns freshly generated phishing JavaScript, unique to that visit.
4. The browser assembles and executes the generated code, rendering the credential-harvesting page.
The key innovation is that the malicious content never exists until the moment of execution. There's no malicious file to scan, no suspicious URL to block, and no known signature to match. The attack leverages trusted AI infrastructure to deliver the payload.
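One defensive consequence: because the payload arrives via calls to legitimate AI services, the presence of client-side LLM API traffic on an otherwise ordinary page is itself a useful signal. The sketch below illustrates the idea with a simple static check (the host list and sample page are illustrative, not a complete or authoritative feed):

```python
import re

# Illustrative list of LLM API hosts; a real deployment would maintain
# a curated, regularly updated feed and pair this with runtime telemetry.
LLM_API_HOSTS = [
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
]

def flag_llm_api_calls(page_source: str) -> list[str]:
    """Return the LLM API hosts referenced by a page's client-side code.

    A benign-looking page that calls an LLM API directly from the
    visitor's browser is unusual for most business sites and worth review.
    """
    return [host for host in LLM_API_HOSTS
            if re.search(re.escape(host), page_source)]

sample = '<script>fetch("https://api.openai.com/v1/chat/completions")</script>'
print(flag_llm_api_calls(sample))  # ['api.openai.com']
```

A check like this cannot block the attack on its own, but it narrows review to pages exhibiting the runtime-assembly pattern.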
Why Traditional Security Fails
This attack technique is particularly dangerous because it defeats multiple layers of traditional security:
1. Signature-Based Detection
Traditional antivirus and email filters rely on known malicious signatures. Because the LLM generates syntactically unique code for each visit, there are no static signatures to match. The non-deterministic output of AI models provides inherent polymorphism [1].
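To see why hash- and signature-based matching fails, consider two functionally identical scripts that differ only in an identifier name, as an LLM might emit on successive visits (a minimal sketch; the snippets are illustrative):

```python
import hashlib

# Two behaviorally identical scripts; only a variable name differs,
# as non-deterministic LLM output would produce across visits.
variant_a = 'const u = prompt("Password:"); fetch("/c?p=" + u);'
variant_b = 'const cred = prompt("Password:"); fetch("/c?p=" + cred);'

h_a = hashlib.sha256(variant_a.encode()).hexdigest()
h_b = hashlib.sha256(variant_b.encode()).hexdigest()

# Identical behavior, yet no shared signature for a scanner to match.
print(h_a == h_b)  # False
```

Every visit yields a different digest, so a blocklist of known-bad hashes never accumulates coverage.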
2. URL Filtering and Blocklists
The initial webpage appears completely benign. The malicious content is delivered from trusted LLM API domains (OpenAI, Anthropic, Google), which can't be blocked without breaking legitimate business applications.
3. Sandbox Analysis
Some advanced attacks use context-aware payloads that behave benignly when scanned by security bots but deploy phishing content only when accessed by human users [4]. The page can detect automated analysis and respond differently.
4. Static Code Analysis
Because the malicious JavaScript is generated at runtime and assembled in the browser, static analysis of the original webpage reveals nothing suspicious. Unit 42 noted that their identified samples "had not been observed on VirusTotal" despite being functional phishing attacks [5].
Blob URIs: Another Evasion Technique
Attackers are also using blob URIs to construct phishing pages locally within the victim's browser. This means there's no actual URL for traditional filters to block until the page renders. Combined with LLM-generated content, this creates nearly undetectable attacks [4].
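Pages assembled locally this way still have to call the browser APIs that create blob URIs. A heuristic scan for those calls can surface candidates for review (a sketch; the pattern list is illustrative, and these APIs also have many legitimate uses, so matches are signals, not verdicts):

```python
import re

# Heuristic patterns for local page construction via blob URIs.
BLOB_PATTERNS = [
    r"URL\.createObjectURL\s*\(",
    r"new\s+Blob\s*\(",
]

def blob_uri_signals(script: str) -> int:
    """Count blob-construction calls in client-side code."""
    return sum(len(re.findall(p, script)) for p in BLOB_PATTERNS)

sample = 'const u = URL.createObjectURL(new Blob([html], {type: "text/html"}));'
print(blob_uri_signals(sample))  # 2
```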
The Scale of AI-Powered Phishing
The threat is not theoretical. AI-generated phishing has become the dominant attack vector:
- 1 malicious email every 42 seconds: The Cofense Phishing Defense Center tracked this rate in 2024, with many being polymorphic attacks [3]
- 1,265% surge: Phishing attacks linked to generative AI trends have exploded [2]
- 54% click-through rate: AI-generated phishing emails achieved this rate compared to just 12% for traditional campaigns in research studies [6]
- $17.4 billion: Global financial losses from phishing in 2024, a 45% year-over-year increase [2]
- 97% of security professionals: Fear their organization will face an AI-driven incident [7]
Real-World AI Attack Examples
$25 Million Deepfake Video Conference
In February 2024, a finance worker at engineering firm Arup transferred $25 million to fraudsters after attending what appeared to be a legitimate video conference with the company's CFO and leadership team. Every participant except the victim was an AI-generated deepfake created using publicly available footage [8].
AI Voice Clone Targets LastPass
In April 2024, a LastPass employee was targeted by an AI voice-cloning scam that convincingly impersonated CEO Karim Toubba. The employee fortunately recognized the attempt and didn't fall for the attack [8].
Developer-Targeted Spear Phishing
In September 2025, attackers used an AI-written spear phishing email to target a developer at a major software company. The email referenced specific GitHub commits and used the developer's preferred coding terminology, leading to credential theft and hijacking of NPM packages with billions of weekly downloads [4].
How to Protect Your Business
Defending against AI-powered phishing requires adapting your security strategy to address threats that traditional tools miss.
1. Deploy Browser-Based Runtime Protection
According to Unit 42, "The most effective defense against this new class of threat is runtime behavioral analysis that can detect and block malicious activity at the point of execution, directly within the browser" [1]. Solutions like browser isolation and real-time JavaScript analysis can catch attacks that static analysis misses.
2. Implement AI-Trained Detection
Fight AI with AI. Palo Alto Networks retrained their malicious JavaScript classifier on "tens of thousands of LLM-rewritten samples" and now detects "thousands of new phishing and malware webpages per week" [5]. Look for security solutions that specifically address AI-generated threats.
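The features that Palo Alto Networks' production classifier actually uses are not public; the toy sketch below only illustrates the general approach of scoring behavior-level signals (string entropy, dynamic-execution calls) rather than exact byte signatures:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy in bits per character; obfuscated or encoded
    payloads tend to score higher than hand-written code."""
    counts = Counter(s)
    total = len(s)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Illustrative tokens associated with dynamic code assembly.
SUSPICIOUS_TOKENS = ["eval(", "atob(", "document.write(", "createObjectURL"]

def js_features(script: str) -> dict:
    """Toy feature vector of the kind a malicious-JS classifier might
    consume; real systems use far richer lexical and runtime features."""
    return {
        "length": len(script),
        "entropy": round(shannon_entropy(script), 2),
        "suspicious_tokens": sum(script.count(t) for t in SUSPICIOUS_TOKENS),
    }

print(js_features('eval(atob("ZmV0Y2go..."));'))
```

Because these features describe what the code does rather than how it is spelled, they survive the identifier-level churn that defeats hash matching.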
3. Strengthen Email Security
While email authentication (SPF, DKIM, DMARC) helps, 89% of malicious emails bypassed these methods in recent studies [4]. Layer additional protections:
- AI-powered email analysis that detects linguistic patterns
- Behavioral analysis that flags unusual sender patterns
- Link rewriting and time-of-click analysis
- Attachment sandboxing with anti-evasion capabilities
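Link rewriting with time-of-click analysis deserves a quick illustration: every URL in an inbound message is wrapped through an inspection gateway, so the destination is re-evaluated when the user clicks, not only when the message was delivered. A minimal sketch (the `safelinks.example.com` gateway is hypothetical):

```python
import re
from urllib.parse import quote

# Hypothetical inspection gateway that re-scans the target at click time.
GATEWAY = "https://safelinks.example.com/check?url="

URL_RE = re.compile(r'https?://[^\s"<>]+')

def rewrite_links(email_body: str) -> str:
    """Wrap every URL so it is evaluated at time of click rather than
    only at delivery time, when the page may still look benign."""
    return URL_RE.sub(lambda m: GATEWAY + quote(m.group(0), safe=""), email_body)

msg = "Reset here: https://login.example.net/reset"
print(rewrite_links(msg))
```

This matters against runtime-assembly attacks because the landing page can be harmless at delivery and weaponized later; click-time evaluation closes that window.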
4. Update Security Awareness Training
Traditional phishing training focused on obvious red flags like poor grammar and suspicious URLs. AI-generated attacks don't have these tells. Update training to emphasize:
New Training Focus Areas:
- Verify requests through out-of-band channels (call back on a known number)
- Be suspicious of any urgent financial or credential requests, even if well-written
- Question unexpected communications, even from "known" contacts
- Report suspicious activity immediately, even if uncertain
- Understand that AI can now perfectly mimic writing styles and voices
5. Implement Zero Trust Principles
Assume breach. Even if an employee's credentials are stolen through an AI-powered attack:
- MFA everywhere: Require multi-factor authentication for all applications, preferring phishing-resistant methods such as FIDO2 security keys or passkeys, since attackers increasingly bypass SMS codes and push approvals
- Least privilege access: Limit what any single compromised account can access
- Continuous verification: Don't trust sessions indefinitely
- Network segmentation: Contain potential breaches
6. Monitor for Credential Exposure
Deploy dark web monitoring and credential exposure alerting. If employee credentials appear in breaches or are sold on criminal forums, you can reset them before attackers use them.
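Exposure checks can be done without ever sending a password or its full hash off-network. The Have I Been Pwned "Pwned Passwords" range API uses a k-anonymity model: you send only the first five characters of the SHA-1 hash and compare the returned suffixes locally. The sketch below shows just the local hashing step (the network call itself is omitted):

```python
import hashlib

def pwned_range_query(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash into the 5-character prefix sent to
    the Pwned Passwords range API and the 35-character suffix compared
    locally. The full hash never leaves your network (k-anonymity)."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

def is_exposed(password: str, returned_suffixes: set[str]) -> bool:
    # returned_suffixes would come from:
    #   GET https://api.pwnedpasswords.com/range/<prefix>
    _, suffix = pwned_range_query(password)
    return suffix in returned_suffixes

prefix, suffix = pwned_range_query("password123")
print(prefix, is_exposed("password123", {suffix}))
```

Running a check like this against employee password changes catches reuse of already-breached credentials before attackers can.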
Is Your Business Protected Against AI Threats?
Take our free IT Security Assessment to evaluate your defenses against AI-powered phishing and other emerging threats.
Get Your Free Assessment
CISA Guidance on AI Threats
CISA has released multiple guidance documents addressing AI-related cybersecurity threats:
- AI Cybersecurity Collaboration Playbook: Provides guidance for sharing AI-related threat information through the Joint Cyber Defense Collaborative (JCDC) [9]
- Joint Guidance on Deploying AI Systems Securely: Co-authored with FBI, NSA, and Five Eyes partners, this guidance addresses mitigations for known AI vulnerabilities [10]
- AI Data Security Guidance (May 2025): Outlines best practices for managing data security risks in AI systems, including data supply chain risks and maliciously modified data [11]
CISA's AI resource page provides ongoing updates on AI-related threats and defensive guidance.
Key Takeaways
Summary for IT Leaders:
- Attackers now use GenAI to create phishing pages in real-time, in the victim's browser
- Traditional signature-based detection is ineffective against polymorphic AI attacks
- 82.6% of phishing emails now use AI-generated content
- Runtime behavioral analysis in the browser is the most effective defense
- AI-powered detection must fight AI-powered attacks
- Security awareness training must evolve beyond "look for typos"
- Zero Trust principles help contain damage when attacks succeed
The AI Arms Race
We're in the early stages of an AI-powered security arms race. Attackers are using generative AI to create more convincing, more evasive, and more scalable attacks. Defenders must respond with AI-powered detection that can identify malicious behavior even when the code itself constantly changes.
For small and medium businesses, this means working with security partners who are investing in AI-powered defense capabilities. The days of "set and forget" security are over. Protecting your business now requires adaptive defenses that evolve as quickly as the threats.
At LocalEdgeIT, we help Denver businesses implement layered security strategies that address both traditional and AI-powered threats. From endpoint protection with behavioral analysis to security awareness training that addresses modern attack techniques, our team can help you stay ahead of evolving threats.
Ready to strengthen your defenses? Take our free IT Security Assessment to identify gaps in your current protection, or contact us to discuss your security needs.
Sources & Additional Resources
1. The Next Frontier of Runtime Assembly Attacks: Leveraging LLMs to Generate Phishing JavaScript in Real Time - Palo Alto Networks Unit 42, 2024
   https://unit42.paloaltonetworks.com/real-time-malicious-javascript-through-llms/
   Primary research on LLM-powered runtime assembly attacks.
2. AI-Generated Phishing: The Top Enterprise Threat of 2026 - StrongestLayer, 2025
   https://www.strongestlayer.com/blog/ai-generated-phishing-enterprise-threat
   Enterprise threat analysis and statistics.
3. Polymorphic Phishing Attacks Flood Inboxes - Cofense/Help Net Security, May 2025
   https://www.helpnetsecurity.com/2025/05/16/polymorphic-phishing-attacks-cofense/
   Cofense Phishing Defense Center statistics.
4. Phishing Trends in 2026: The Rise of AI, MFA Exploits and Polymorphic Attacks - Managed Services Journal, 2026
   https://managedservicesjournal.com/articles/phishing-trends-in-2026-the-rise-of-ai-mfa-exploits-and-polymorphic-attacks/
   Industry analysis of emerging phishing techniques.
5. Now You See Me, Now You Don't: Using LLMs to Obfuscate Malicious JavaScript - Palo Alto Networks Unit 42
   https://unit42.paloaltonetworks.com/using-llms-obfuscate-malicious-javascript/
   Research on LLM-based malware obfuscation.
6. Generative AI Makes Social Engineering More Dangerous - IBM, 2024
   https://www.ibm.com/think/insights/generative-ai-social-engineering
   IBM research comparing AI vs. human phishing effectiveness.
7. AI Cyber Attack Statistics 2025 - Tech Advisors
   https://tech-adv.com/blog/ai-cyber-attack-statistics/
   Compiled statistics on AI-powered attacks.
8. How Phishing Attacks Are Evolving With AI And Deepfakes In 2025 - Kelser Corp
   https://www.kelsercorp.com/blog/how-phishing-attacks-evolved-ai-2025
   Real-world case studies of AI-powered attacks.
9. AI Cybersecurity Collaboration Playbook - CISA
   https://www.cisa.gov/resources-tools/resources/ai-cybersecurity-collaboration-playbook
   Official CISA guidance on AI threat information sharing.
10. Joint Guidance on Deploying AI Systems Securely - CISA/FBI/NSA, April 2024
    https://www.cisa.gov/news-events/alerts/2024/04/15/joint-guidance-deploying-ai-systems-securely
    Multi-agency guidance on AI security.
11. AI Data Security Guidance - CISA/NSA, May 2025
    https://media.defense.gov/2025/May/22/2003720601/-1/-1/0/CSI_AI_DATA_SECURITY.PDF
    Joint guidance on AI data security risks.