As artificial intelligence (AI) continues to advance, it is important to understand the privacy and security risks associated with these data-driven technologies. A few weeks ago, I joined the “ADCG on Privacy & Cybersecurity” podcast with host Jody Westby to discuss the basics of generative AI and its potential future impacts.

We wanted to educate listeners about what AI technologies can do, and about the risks and benefits of using them - especially as these tools capture everyone’s interest and attention. While chatbots and generative AI are not traditional cybersecurity policy fodder, it’s clear that both have implications for security and that their uses are developing at a breakneck pace.

For me, the public use of “generative AI” is one of the most interesting developments in the technology space. The press has focused a lot of attention on chatbots such as “ChatGPT,” but ChatGPT is just one of many AI applications that produce “new” outputs by drawing on large amounts of training data. Instead of pattern matching and producing stock sentences, chatbots are now creating entirely new sentences and recontextualizing existing ideas into new formats. As these systems progress, the security implications will only grow.

Generative AI is now capable of creating new, high-quality, human-like outputs from models that have been trained on vast quantities of data. These advancements in machine learning have improved both the accuracy and the capabilities of the bots. And those capabilities are vast, from answering simple questions to creating fanciful verse - my favorite is a lullaby I asked ChatGPT to write for my dog.

But there are risks: chatbots generate new content in creative ways, but at their core they are only recombining old ideas and cannot discern what is true. Chatbots rely on their training data to determine the answers they provide, which makes that data extremely important.

If a chatbot has been built on a biased set of training data, it is likely the answers it provides will also be biased - like any AI system. And an AI system will do what it has been trained to do, whether that’s to connect you with resources or rope you into a long, misleading conversation. It’s also worth noting that chatbots have been known to fabricate content outright, producing citations for academic sources that don’t exist but certainly sound credible - because they’ve been trained to do exactly that: sound credible.

And generative AI chatbots can produce responses that increasingly sound human-like, which has the potential to change the way cyber threats are developed and executed. These models can now be used to automate the creation of phishing emails, social engineering attacks, and other types of malicious content.

Potentially worse, ChatGPT has also been used to write malicious exploit code. The code it creates today is not particularly sophisticated, but generative models will continue to evolve and may eventually produce effective exploits that evade security tools - and not every chatbot will have the careful guardrails that OpenAI built into ChatGPT. Paired with existing malware, these models let attackers churn out near-endless code variations to stay one step ahead of malware detection engines.

Phishing and business email compromise attacks attempt to get a victim to disclose sensitive financial or personal information, and they depend on personalized messages to succeed. Now that ChatGPT can write convincing, personalized emails, attackers can generate them at scale with endless variations. That jump in the speed and volume of attacks is likely to push success rates higher than we have ever seen, and much of the legacy security technology in place is not equipped to identify and protect against these advancements.

As these technologies make it harder to sort malicious content from legitimate information, the security risks they pose may rise drastically, and the security industry will need to respond. While hackers and others will use these tools to attack, the cybersecurity community must also learn to use them to its advantage and combat the threats posed by advancing artificial intelligence. If you want to learn more about artificial intelligence in general, we encourage you to listen to the entirety of the podcast here.

Heather West
