Ami Luttwak, chief technologist at cybersecurity firm Wiz, recently detailed how the rapid enterprise adoption of artificial intelligence is fundamentally changing cyberattacks. By integrating AI, companies are inadvertently creating new opportunities for malicious actors and expanding their corporate attack surface.
Luttwak describes cybersecurity as a “mind game,” a dynamic interplay where any new technology wave presents new opportunities for attackers. The current proliferation of AI introduces novel vulnerabilities that security professionals must race to understand.
AI integration creates new vulnerabilities
Enterprises are embedding AI into their workflows through techniques like “vibe coding” (using natural language prompts to generate code), deploying autonomous AI agents, and adopting new AI-powered tools. While these boost productivity, each new model or tool represents a potential entry point for attackers.
The speed of AI-driven development is a primary driver of risk. The ability to ship code faster can lead to shortcuts, causing developers to overlook critical security steps like rigorous code reviews and secure configuration. Tests conducted by Wiz on applications built using vibe coding revealed a recurring problem: insecure implementation of authentication systems.
“Vibe coding agents do what you say, and if you didn’t tell them to build it in the most secure way, it won’t,” Luttwak explained. Because AI models lack inherent security consciousness, they produce functional code based on the prompt, which is often structurally insecure unless the developer explicitly specifies detailed security requirements.
Attackers are also using AI
Attackers are adopting the same technologies with equal enthusiasm. They are using vibe coding to generate malware, employing prompt-based techniques for phishing attacks, and deploying their own AI agents to automate exploits.
The offensive use of AI is becoming more direct, with attackers now using prompts to attack an organization’s own AI systems. By manipulating the prompts fed into a company’s internal chatbot or AI agent, an attacker can trick the system into executing destructive commands or exfiltrating sensitive data.
Supply-chain attacks and real-world examples
This dynamic creates a dangerous form of supply-chain attack. Third-party AI tools often require broad access to corporate data to function. If an attacker compromises one of these services, they can inherit its extensive permissions to pivot deep into the client’s infrastructure.
- Drift breach: Attackers breached Drift, a startup selling AI chatbots, exposing the Salesforce data of enterprise customers. They used stolen digital keys to impersonate the company’s AI chatbot, which had legitimate access to customer environments. Luttwak confirmed that the attack code itself was created using vibe coding.
- s1ingularity attack: This attack targeted the Nx build system used by JavaScript developers. Attackers injected malware designed to detect and hijack AI-powered developer tools like Claude and Gemini, using them to autonomously scan compromised systems for valuable data and credentials.
Despite AI adoption in the enterprise being in its early stages—estimated at around 1%—Wiz is already observing AI-implicated attacks “every week that impact thousands of enterprise customers.”
Advice for startups and enterprises
Luttwak cautions enterprises against entrusting critical data to new, small SaaS companies that may not have mature security practices. He argues that startups must operate as secure organizations from their inception.
“From day one, you need to think about security and compliance,” he advised, recommending that startups hire a CISO even if they have only five employees. Establishing secure processes and achieving compliance like SOC2 is far more manageable for a small team than retrofitting it later.
He also emphasized the importance of architecture, advising AI startups to design systems that allow customer data to remain in the customer’s environment, significantly mitigating the risk of data exfiltration.