Cybercriminals want you to use generative AI

Lacework Editorial | February 27, 2024 | 4 min read

This blog features some content from our new eBook 3 benefits and a caution when using generative AI (GenAI) for cybersecurity. Download the eBook for more discussion on the impacts of generative AI within the cybersecurity industry and why practitioners should proceed with caution.

The use of generative AI is changing how we operate — from classrooms to courtrooms, offices to homes, and everything in between.

For cybersecurity specifically, the promise of generative AI can’t be overstated. Faster investigations. Data queries in natural language. Tailored remediation guidance. Very little heavy lifting. It’s huge. Especially for a talent-short industry. And especially for an industry where every iota of cloud data matters when resolving risks or investigating threats.

However, as with any new technology, generative AI isn’t without its faults. In the mad rush to apply this useful technology, many individuals — including security practitioners themselves — have cast privacy aside. What’s in the fine print? Is my data protected? Is it safe to use this application?

Well, maybe. Let’s look at the pros and cons.

Generative AI: Your new cybersecurity coworker?

Artificial intelligence (AI) is an umbrella term for computer systems and software designed to perform tasks that would otherwise require human intelligence. “Generative AI” specifically refers to AI models that generate new, original content by learning from existing data. These models use complex algorithms to identify patterns and features in the data, enabling them to produce similar but unique outputs.

Generative AI can enhance cybersecurity in a number of ways. According to Forrester, generative AI will augment security teams through: 

  • Content creation (i.e., generating text, code, etc.)
  • Behavior prediction (i.e., detecting patterns in data and suggesting proactive remediation)
  • Knowledge articulation (i.e., communicating complex data-driven concepts in human language)

While there is no replacement for human intelligence, the opportunity to streamline efforts could be a game changer for strained security staff. According to The Life and Times of Cybersecurity Professionals, 39% of security professionals say that the skills shortage is making it hard to achieve their full potential. With generative AI, teams will be able to prioritize, assess, and review inputs and outputs with accuracy and speed. It can even be used to help onboard novice security team members, getting them up and running quickly.

It seems like today’s generative AI is here to help you enjoy your job more, not take your job away. Well, at least not for another few decades.

Go slow to go fast

Anyone who has tinkered with generative AI technology knows that the hype is real. The potential for this technology to upend multiple industries, including cybersecurity, is very high. 

However, the promises of generative AI make it tempting to “Pass Go” and collect your winnings, without paying attention to the fine print. Data security isn’t a game that should be subject to a dice roll. 

While generative AI can absolutely help automate tasks, it must still be managed carefully to prevent sensitive data from being compromised. Sending sensitive data to a third party, such as a public large language model (LLM), may help the provider retrain its model. However, it may also expose your private data to all of that model’s users.
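One practical way to manage that risk is to scrub obviously sensitive tokens from a prompt before it ever leaves your environment. Below is a minimal sketch of that idea; the regex patterns are illustrative assumptions only, and a real deployment would need a far more complete set (PII, customer identifiers, internal hostnames, and so on):

```python
import re

# Illustrative patterns only — not a complete or production-grade list.
REDACTION_PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),        # AWS access key IDs
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[REDACTED_IP]"),  # IPv4 addresses
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),   # email addresses
]

def redact(prompt: str) -> str:
    """Replace known-sensitive tokens in a prompt before sending it to a public LLM."""
    for pattern, replacement in REDACTION_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

# Example: a raw prompt containing an internal IP and a credential.
raw = "Summarize failed logins from 10.0.4.17 using key AKIAABCDEFGHIJKLMNOP"
safe = redact(raw)
```

Pattern-based redaction like this is a first line of defense, not a guarantee — it only catches what you thought to look for, which is exactly why the fine print on data handling still matters.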

So, before having a public generative AI tool whip up a report on the state of your IT security controls or working generative AI into your standard processes, you should first consider these questions:

  • How and where is my data going to be stored?
  • Does the use of generative AI comply with relevant laws and regulations dealing with data protection?
  • Do I legally own the content produced by generative AI?
  • Does this generative AI tool have a history of introducing code vulnerabilities?
  • Do I have a clear audit trail of engagements with the generative AI tool?
  • Do we have the proper team in place to successfully and safely use generative AI in our security program moving forward?
  • In any given situation, is the reward of operational efficiency worth the potential risk of data breach?

This isn’t a comprehensive list. There are other questions to consider. But here’s the bottom line: security teams should put the excitement aside and proceed with caution around public generative AI tools, as they would with any other technology. It would be wise to go slow first so they can go fast later.
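Take the audit-trail question above as one concrete example: every engagement with a generative AI tool can be recorded centrally so there is a clear record of who asked what, and when. A minimal sketch, assuming a hypothetical `query_llm` client standing in for your actual LLM integration:

```python
from datetime import datetime, timezone

# In practice this would be an append-only store, not an in-memory list.
AUDIT_LOG = []

def audited_query(user: str, prompt: str,
                  query_llm=lambda p: "(model response)") -> str:
    """Call the LLM and record the full exchange for later review.

    `query_llm` is a hypothetical stand-in; swap in your real client.
    """
    response = query_llm(prompt)
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "response": response,
    })
    return response

answer = audited_query("analyst@example.com", "Summarize open critical alerts")
```

Wrapping every call this way makes the audit-trail question answerable by design rather than by reconstruction after an incident.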

Please use generative AI responsibly

With generative AI, organizations can more easily gain deeper insights into cloud account and workload data. Analysts and experts will be able to investigate and respond to alerts confidently, saving time and increasing operational efficiency.

Use generative AI. But use it responsibly.

Download our new eBook for more insights into the benefits and cautions of generative AI for cybersecurity.

