RSS News Feed

AI Is Exposing a Security Gap Companies Aren’t Staffed for: Researcher


Companies may have cybersecurity teams in place, but many still aren’t prepared for how AI systems actually fail, says an AI security researcher.

Sander Schulhoff, who wrote one of the earliest prompt engineering guides and focuses on AI system vulnerabilities, said on an episode of “Lenny’s Podcast” published Sunday that many organizations lack the talent needed to understand and fix AI security risks.

Traditional cybersecurity teams are trained to patch bugs and address known vulnerabilities, but AI doesn’t behave that way.

“You can patch a bug, but you can’t patch a brain,” Schulhoff said, describing what he sees as a mismatch between how security teams think and how large language models fail.

“There’s this disconnect about how AI works compared to classical cybersecurity,” he added.

That gap shows up in real-world deployments. Cybersecurity professionals may review an AI system for technical flaws without asking: “What if someone tricks the AI into doing something it shouldn’t?” said Schulhoff, who runs a prompt engineering platform and an AI red-teaming hackathon.

Unlike traditional software, AI systems can be manipulated through language and indirect instructions, he added.

Schulhoff said people with experience in both AI security and cybersecurity would know what to do if an AI model is tricked into generating malicious code. For example, they would run the code in a container and ensure the AI’s output doesn’t affect the rest of the system.

The intersection of AI security and traditional cybersecurity is where “the security jobs of the future are,” he added.

The rise of AI security startups

Schulhoff also said that many AI security startups are pitching guardrails that don’t offer real protection. Because AI systems can be manipulated in countless ways, claims that these tools can “catch everything” are misleading.

“That’s a complete lie,” he said, adding that there would be a market correction in which “the revenue just completely dries up for these guardrails and automated red-teaming companies.”

AI security startups have been riding the wave of investor interest. Big Tech and venture capital firms have poured money into the space as companies rush to secure AI systems.

In March, Google bought cybersecurity startup Wiz for $32 billion, a deal aimed at strengthening its cloud security business.

Google CEO Sundar Pichai said AI was introducing “new risks” at a time when multi-cloud and hybrid setups are becoming more common.

“Against this backdrop, organizations are looking for cybersecurity solutions that improve cloud security and span multiple clouds,” he added.

Business Insider reported last year that growing security concerns around AI models have helped fuel a wave of startups pitching tools to monitor, test, and secure AI systems.





Source link