What do CISOs really think about AI?

Insights from security leaders on the potential of AI

Lacework Editorial | October 6, 2023 | 6 min read

Artificial intelligence (AI) has become a buzzword we can’t escape, especially in the world of cybersecurity. You know when you’ve watched three seasons of a show on Netflix, and you tell yourself “Okay, one more episode and I’m done,” but then you’re left with a cliffhanger? That’s how we feel about AI. Every day, there’s a new AI-powered app, a fresh debate about its risks and rewards, or a new feature in ChatGPT, and we can’t help but be curious. AI isn’t going away any time soon, and it sounds like cybersecurity leaders think we should embrace it. But how? Let’s turn to the security professionals who are balancing its benefits and risks every day.

The two faces of AI

On the Code to Cloud podcast, Lacework Field CISOs Tim Chase and Andy Schneider talk to global security experts, and nearly all of those conversations have touched on AI’s influence in cybersecurity. From automating routine tasks to identifying threats, the power of AI is clear, but its risks certainly aren’t being overlooked by security leaders.

Take Bill Dougherty, CISO at Omada Health, for instance. His stance on generative AI, a subset of AI that autonomously creates content, is one that many security leaders share: “It excites me because it has tons of potential for being kind of a force multiplier. But it scares me because it has tons of risk,” he said. 

 

It excites me because it has tons of potential for being kind of a force multiplier. But it scares me because it has tons of risk.

Bill Dougherty, CISO, Omada Health
 

As Bill has experimented extensively with different AI technologies, he’s noticed a pattern in generative AI tools like ChatGPT: they sometimes confidently assert incorrect information (and sound pretty convincing, too). In sensitive sectors like healthcare, the repercussions of those inaccuracies can be severe.

But these risks aren’t scaring businesses away. According to a Forbes survey, 97% of business owners believe ChatGPT will help their business. 

So if we can’t avoid using these tools, we’ll need to learn how to use them effectively. We also need to be prepared to discern the genuine AI advancements from the hype. Sebastien Jeanquier, Chief Security Officer at fintech startup Upvest, said, “It’ll have a lot of great applicable uses, but you need to really be on the ball to some degree from a technical perspective, but also from the conceptual perspective. How do these new technologies work — how much of it is snake oil and how much of it is meaningful?” 

 

It'll have a lot of great applicable uses, but you need to really be on the ball to some degree from a technical perspective, but also from the conceptual perspective.

Sebastien Jeanquier, CSO, Upvest
 

But in cybersecurity, separating real progress from hype is easier said than done. One signal is the pace of change: if something claims to revolutionize everything overnight, it’s probably too good to be true. Authentic innovations are typically grounded in addressing real-world challenges. Reliability is another tell: an AI tool that doesn’t work consistently and accurately is likely not as groundbreaking as it seems. Sustained investment matters, too, because companies keep spending where they see real results. And the best AI tools work alongside people, not instead of them.

Understanding the threats: AI’s darker shades

One of the most significant concerns with AI is its ability to generate highly convincing fakes — be it deepfakes that replicate real-life personas, AI-driven phishing campaigns that can adapt to user behavior, or voice replication that can deceive even the most vigilant. These aren’t all new threats, but AI escalates their potency. AI’s iterative learning process makes it progressively better at creating more convincing fakes, a challenge in the fight against misinformation.

As these AI models learn and adapt from their mistakes, they become even more potent. “In this case, it’s actually making the AI stronger because the more we’re able to detect what’s fake, then the AI learns and the AI will generate better fakes,” Greg Crowley, CISO at eSentire, said. 

 

In this case, it's actually making the AI stronger because the more we’re able to detect what's fake, then the AI learns and the AI will generate better fakes.

Greg Crowley, CISO, eSentire
 

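To make that feedback loop concrete, here is a deliberately toy Python sketch (not any real model or vendor pipeline) in which a fake “generator” and a detector co-evolve: each round the detector tightens its threshold based on the fakes it caught, and the generator imitates the fakes that slipped through, so the fakes drift steadily closer to looking real.

```python
# Toy illustration of the dynamic Greg describes: better detection teaches
# the generator to produce better fakes. Purely illustrative; real generative
# models and deepfake detectors are far more complex.

import random

random.seed(7)

class FakeGenerator:
    def __init__(self):
        self.mean = 1.0                      # fakes start out easy to spot

    def sample(self, n=200):
        return [random.gauss(self.mean, 0.1) for _ in range(n)]

    def learn_from(self, evading_fakes):
        # Drift toward whatever got past the detector
        if evading_fakes:
            self.mean = sum(evading_fakes) / len(evading_fakes)

class Detector:
    def __init__(self):
        self.threshold = 0.9                 # flag anything above this as fake

    def flags(self, score):
        return score > self.threshold

    def retrain(self, caught_fakes):
        # Tighten the threshold toward the fakes it successfully caught
        if caught_fakes:
            self.threshold = min(caught_fakes) - 0.05

gen, det = FakeGenerator(), Detector()
for round_no in range(1, 6):
    fakes = gen.sample()
    caught = [f for f in fakes if det.flags(f)]
    evaded = [f for f in fakes if not det.flags(f)]
    print(f"round {round_no}: threshold={det.threshold:.2f}, "
          f"fake mean={gen.mean:.2f}, caught {len(caught):3d}, evaded {len(evaded):3d}")
    det.retrain(caught)                      # better detection...
    gen.learn_from(evaded)                   # ...teaches better fakes
```

Run it and the printed “fake mean” drops round after round: as the detector improves, the fakes look more and more like the real thing (scores near 0), which is exactly the arms race Greg is pointing at.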
There are also concerns about the freshness of the data generative AI uses. Older models might pull data from outdated sources, making them less effective. Because AI is a new capability, “We have to have some protections to make sure that we don’t go in with a blind eye and create more risk as a result,” Billy Spears, CISO at Teradata, said.

Even with these concerns in mind, CISOs generally agree that it’s no longer a question of if, but when and how organizations should integrate AI into their cybersecurity strategies. Leaders who delay adoption or let their AI models go stale risk leaving their systems exposed to increasingly intelligent threats.

Why the benefits outweigh the risks

When addressing security issues, AI is particularly helpful in handling data, especially when human resources are spread thin. One of the biggest challenges security teams face is managing large datasets. AI can analyze vast amounts of data to find correlations and promptly surface anomalous results or sudden increases in risk. “This is the future for us. You don’t have unlimited humans to [address] the problem. You’re going to need to use technology or augment that technology to solve that need,” Billy said.

 

This is the future for us. You don't have unlimited humans to [address] the problem. You're going to need to use technology or augment that technology to solve that need.

Billy Spears, CISO, Teradata
 

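As a simplified illustration of the kind of large-scale anomaly hunting Billy describes, the sketch below uses scikit-learn’s IsolationForest, a common off-the-shelf unsupervised detector, on a small synthetic set of login events. The features, numbers, and thresholds here are hypothetical, not Lacework’s or Teradata’s actual approach.

```python
# Minimal sketch: flagging anomalous login events with an off-the-shelf model.
# Features and data are made up for illustration; real pipelines would use
# far richer telemetry (cloud audit logs, process events, network flows, ...).

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" logins: business hours, modest data transfer, few failures
normal = np.column_stack([
    rng.normal(13, 2, 5000),      # hour of day
    rng.normal(50, 15, 5000),     # MB transferred
    rng.poisson(0.2, 5000),       # failed attempts before success
])

# A handful of suspicious events: 3 a.m. logins, large transfers, brute-force-y
suspicious = np.array([
    [3.0, 900.0, 8],
    [2.5, 1200.0, 12],
    [4.0, 700.0, 6],
])

events = np.vstack([normal, suspicious])

# Unsupervised anomaly detector; contamination is the expected fraction of
# outliers (a tuning knob, not ground truth).
model = IsolationForest(contamination=0.001, random_state=0)
labels = model.fit_predict(events)          # -1 = anomaly, 1 = normal

anomalies = events[labels == -1]
print(f"Flagged {len(anomalies)} of {len(events)} events for analyst review")
```

In practice, most of the work is less the model than the plumbing: collecting, normalizing, and enriching the telemetry so that the handful of flagged events is actually worth an analyst’s time.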
AI can also help address the talent gap in cybersecurity. With so many roles going unfilled, AI can streamline routine tasks and shoulder some of the data analysis.

One particularly groundbreaking opportunity associated with AI lies in identity verification and the potential of a passwordless future, though it remains an elusive goal. “I am a big fan of identity and getting rid of the password. But I don’t know that anyone’s ever done it well,” Wes Mullins, Chief Technology Officer at Deepwatch, said.

 

I am a big fan of identity and getting rid of the password. But I don't know that anyone's ever done it well.

Wes Mullins, CTO, Deepwatch
 

Current passwordless solutions are often merely iterations of traditional methods: they still generate and store passwords, just hidden from the end user. But AI might pave the way for true innovation. “That is a space that I still think has allowed opportunity to improve, where it doesn’t require you completely revamping your entire infrastructure,” Wes said.
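To make the contrast concrete: a genuinely passwordless scheme (passkeys and FIDO2 work roughly this way) stores no shared secret at all; the server keeps only a public key and verifies a signed challenge. The sketch below is a bare-bones, hypothetical version of that handshake using the Python cryptography library, not a production protocol, which would also need origin binding, attestation, and replay protection.

```python
# Bare-bones challenge-response sketch: the server stores only a public key,
# so there is no password (or password-equivalent secret) to steal or leak.
# Hypothetical illustration only; real deployments use WebAuthn/FIDO2.

import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: the device keeps the private key, the server keeps the public key
device_key = Ed25519PrivateKey.generate()
server_registered_pubkey = device_key.public_key()

# Login: the server issues a fresh random challenge
challenge = os.urandom(32)

# The device proves possession of the private key by signing the challenge
signature = device_key.sign(challenge)

# The server verifies against the stored public key; no secret was transmitted
try:
    server_registered_pubkey.verify(signature, challenge)
    print("Login accepted: device proved key possession")
except InvalidSignature:
    print("Login rejected")
```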

What’s next? 

In the coming years, we can anticipate AI and ML offering more practical solutions, shifting from marketing gimmicks to genuine utility. While AI presents both advantages and challenges, its judicious use can redefine the future of cybersecurity. AI’s power can be used not only to enhance threats but also to develop strong countermeasures.

Want to hear more insights from CISOs and other cybersecurity experts? Tune in to the Code to Cloud podcast every other week.

 
