Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Whitepapers
  • AI toolsNEW
  • Newsletter
  • + More
    • Glossary
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
  • AI
  • Tech
  • Cybersecurity
  • Finance
  • DeFi & Blockchain
  • Startups
  • Gaming
Dataconomy
  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Whitepapers
  • AI toolsNEW
  • Newsletter
  • + More
    • Glossary
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
Subscribe
No Result
View All Result
Dataconomy
No Result
View All Result

Wiz: AI vibe coding leads to insecure authentication

As AI speeds up coding and tool deployment, companies risk insecure systems and expanded attack surfaces.

byAytun Çelebi
September 29, 2025
in Cybersecurity
Home News Cybersecurity
Share on FacebookShare on TwitterShare on LinkedInShare on WhatsAppShare on e-mail

Ami Luttwak, chief technologist at cybersecurity firm Wiz, recently detailed how the rapid enterprise adoption of artificial intelligence is fundamentally changing cyberattacks. By integrating AI, companies are inadvertently creating new opportunities for malicious actors and expanding their corporate attack surface.

Luttwak describes cybersecurity as a “mind game,” a dynamic interplay where any new technology wave presents new opportunities for attackers. The current proliferation of AI introduces novel vulnerabilities that security professionals must race to understand.

AI integration creates new vulnerabilities

Enterprises are embedding AI into their workflows through techniques like “vibe coding” (using natural language prompts to generate code), deploying autonomous AI agents, and adopting new AI-powered tools. While these boost productivity, each new model or tool represents a potential entry point for attackers.

Stay Ahead of the Curve!

Don't miss out on the latest insights, trends, and analysis in the world of data, technology, and startups. Subscribe to our newsletter and get exclusive content delivered straight to your inbox.

The speed of AI-driven development is a primary driver of risk. The ability to ship code faster can lead to shortcuts, causing developers to overlook critical security steps like rigorous code reviews and secure configuration. Tests conducted by Wiz on applications built using vibe coding revealed a recurring problem: insecure implementation of authentication systems.

“Vibe coding agents do what you say, and if you didn’t tell them to build it in the most secure way, it won’t,” Luttwak explained. Because AI models lack inherent security consciousness, they produce functional code based on the prompt, which is often structurally insecure unless the developer explicitly specifies detailed security requirements.

Attackers are also using AI

Attackers are adopting the same technologies with equal enthusiasm. They are using vibe coding to generate malware, employing prompt-based techniques for phishing attacks, and deploying their own AI agents to automate exploits.

The offensive use of AI is becoming more direct, with attackers now using prompts to attack an organization’s own AI systems. By manipulating the prompts fed into a company’s internal chatbot or AI agent, an attacker can trick the system into executing destructive commands or exfiltrating sensitive data.

Supply-chain attacks and real-world examples

This dynamic creates a dangerous form of supply-chain attack. Third-party AI tools often require broad access to corporate data to function. If an attacker compromises one of these services, they can inherit its extensive permissions to pivot deep into the client’s infrastructure.

  • Drift breach: Attackers breached Drift, a startup selling AI chatbots, exposing the Salesforce data of enterprise customers. They used stolen digital keys to impersonate the company’s AI chatbot, which had legitimate access to customer environments. Luttwak confirmed that the attack code itself was created using vibe coding.
  • s1ingularity attack: This attack targeted the Nx build system used by JavaScript developers. Attackers injected malware designed to detect and hijack AI-powered developer tools like Claude and Gemini, using them to autonomously scan compromised systems for valuable data and credentials.

Despite AI adoption in the enterprise being in its early stages—estimated at around 1%—Wiz is already observing AI-implicated attacks “every week that impact thousands of enterprise customers.”

Advice for startups and enterprises

Luttwak cautions enterprises against entrusting critical data to new, small SaaS companies that may not have mature security practices. He argues that startups must operate as secure organizations from their inception.

“From day one, you need to think about security and compliance,” he advised, recommending that startups hire a CISO even if they have only five employees. Establishing secure processes and achieving compliance like SOC2 is far more manageable for a small team than retrofitting it later.

He also emphasized the importance of architecture, advising AI startups to design systems that allow customer data to remain in the customer’s environment, significantly mitigating the risk of data exfiltration.


Featured image credit

Tags: AI vibe codingFeatured

Related Posts

AWS outage disrupts Fortnite and Steam

AWS outage disrupts Fortnite and Steam

December 25, 2025
Aflac data breach affected 22.65M customers

Aflac data breach affected 22.65M customers

December 24, 2025
Nissan data breach is real and you might be affected

Nissan data breach is real and you might be affected

December 23, 2025
Spotify data breach: 86 million audio files leaked online

Spotify data breach: 86 million audio files leaked online

December 22, 2025
Google-featured VPN extension harvested and sold ChatGPT and Claude conversations

Google-featured VPN extension harvested and sold ChatGPT and Claude conversations

December 19, 2025
Cisco tells customers to wipe and rebuild hacked appliances

Cisco tells customers to wipe and rebuild hacked appliances

December 18, 2025

LATEST NEWS

Xbox Developer Direct returns January 22 with Fable and Forza Horizon 6

Dell debuts disaggregated infrastructure for modern data centers

TikTok scores partnership with FIFA for World Cup highlights

YouTube now lets you hide Shorts in search results

Google transforms Gmail with AI Inbox and natural language search

Disney+ to launch TikTok-style short-form video feed in the US

Dataconomy

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.

  • About
  • Imprint
  • Contact
  • Legal & Privacy

Follow Us

  • News
    • Artificial Intelligence
    • Cybersecurity
    • DeFi & Blockchain
    • Finance
    • Gaming
    • Startups
    • Tech
  • Industry
  • Research
  • Resources
    • Articles
    • Guides
    • Case Studies
    • Whitepapers
  • AI tools
  • Newsletter
  • + More
    • Glossary
    • Conversations
    • Events
    • About
      • About
      • Contact
      • Imprint
      • Legal & Privacy
      • Partner With Us
No Result
View All Result
Subscribe

This website uses cookies. By continuing to use this website you are giving consent to cookies being used. Visit our Privacy Policy.