As a cybersecurity leader, you might be concerned about the impact of AI on the threat landscape as well as on your team’s role in protecting your organization from cyber threats.
There are many opinions and concerns that AI poses a number of threats, not just in the field of cybersecurity, but to humanity, so it’s important to ensure it continues to serve humans and not the other way around.
The Problem
Chatbots and other AI-based tools can produce inaccurate or fabricated “hallucinations” because their output is only as good as the data input, and that ingestion process is often tied to the internet. As a result, we could easily get disinformation, “malinformation” and misinformation.
Sometimes, even vendors “don’t understand everything about how they [AI models]work internally”. And, because there’s no verifiable data governance or protection assurances, generative AI can steal content at will and reproduce it, breaking many laws and regulations.
Government Moves
G7 on Artificial Intelligence
Recently, G7 members met to discuss amongst others – the topics of dangers of generative AI. They called for action to rein in the fast-evolving technology. They called for the creation of technical standards to keep AI in check. On the meeting in Hiroshima, the leaders said that nations must come together on a common vision and goal of trustworthy AI, even while those solutions may vary. They stressed that efforts to create trustworthy AI need to include “governance, safeguard of intellectual property rights including copyrights, promotion of transparency, [and] response to foreign information manipulation, including disinformation.
EU on Artificial Intelligence
Previously, EU countries agreed on the creation of the AI Act, which would rein in generative tools such as ChatGPT, DALL-E, and Midjourney in terms of design and deployment. This is to align with EU law and fundamental rights, including the need for AI makers to disclose any copyrighted material used to develop their systems.
Solutions on the Horizon
On the solutions side of the equation we can also see some movements.
Google SAIF
Google has announced the launch of the Secure AI Framework (SAIF), a conceptual framework for securing AI systems. Google, owner of the Bard, said a framework across the public and private sectors is essential for making sure that responsible actors safeguard the technology that supports AI advancements so that when AI models are implemented, they’re secure-by-default.
The SAIF is designed to help mitigate risks specific to AI systems like model theft, poisoning of training data, malicious inputs through prompt injection, and the extraction of confidential information in training data. As AI capabilities become increasingly integrated into products across the world, adhering to a bold and responsible framework will be even more critical.
OWASP 10
The Open Worldwide Application Security Project (OWASP) recently published the top 10 most critical vulnerabilities, and also provided some preventive measures.
These vulnerabilities have been detected in large language model (LLM) applications that many generative AI chat interfaces are based upon. Examples of vulnerabilities include prompt injections, data leakage, inadequate sandboxing, and unauthorized code execution.
Here, we will take a glimpse at a few.
1. Prompt injections
This involves bypassing filters or manipulating the LLM using carefully crafted prompts that make the model ignore previous instructions or perform unintended actions. “These vulnerabilities can lead to data leakage, unauthorized access, or other breaches”.
Preventative measures for this vulnerability include:
Implement strict input validation and sanitization for user-provided prompts.
Use context-aware filtering and output encoding to prevent prompt manipulation.
Regularly update and fine-tune the LLM to improve its understanding of malicious inputs and edge cases (OWASP lists 10 most critical large language model vulnerabilities)
2. Data leakage
Data leakage occurs when an LLM accidentally reveals sensitive information, proprietary algorithms, or other confidential details through its responses. “This can result in unauthorized access to sensitive data or intellectual property, privacy violations, and other security breaches,” said OWASP.
An attacker could deliberately probe the LLM with carefully crafted prompts, attempting to extract sensitive information that the LLM has memorized from its training data.
Some preventative measures are:
- Implement strict output filtering and context-aware mechanisms to prevent the LLM from revealing sensitive information.
- Use differential privacy techniques or other data anonymization methods during the LLM’s training process to reduce the risk of overfitting or memorization.
- Regularly audit and review the LLM’s responses to ensure that sensitive information is not being disclosed inadvertently.
3. Overreliance on LLM-generated content
Overreliance on LLM-generated content can lead to the propagation of misleading or incorrect information, decreased human input in decision-making, and reduced critical thinking, according to OSAWP. “Organizations and users may trust LLM-generated content without verification, leading to errors, miscommunications, or unintended consequences.”
For example, if a company relies on an LLM to generate reports it may happen the LLM would generate a report containing incorrect data. If the company uses that incorrect data to make critical decisions, there could be significant consequences because of reliance on inaccurate content.
4. Training data poisoning
This is when an attacker manipulates the training data or fine-tuning procedures of an LLM to introduce vulnerabilities, backdoors, or biases that could compromise the model’s security, effectiveness, or ethical behaviour.
These actions can help prevent this risk:
Ensure the integrity of the training data by obtaining it from trusted sources and validating its quality.
Implement robust data sanitization and preprocessing techniques to remove potential vulnerabilities or biases from the training data.
Use monitoring and alerting mechanisms to detect unusual behaviour or performance issues in the LLM, potentially indicating training data poisoning.
Benefits
Above we have explored the threats posed by AI specifically used in LLM, as well as some responses to them. Here, we will elaborate some advantages for cybersecurity professionals.
With the rise of cyberthreats, there is a growing need for advanced security measures to protect against these attacks.
LLM chatbots are designed to mimic human-like conversations, using natural language processing, machine learning algorithms, and other AI technologies to interact with humans. However, these same technologies can also be leveraged to improve cybersecurity measures.
Here are some examples on how ChatGPT and other LLMs can advance cybersecurity.
1. Enhanced Detection Capabilities
ChatGPT and other LLMs can be trained to detect and respond to new and emerging threats in real-time. By leveraging their natural language processing capabilities, these Chatbots can help identify patterns and anomalies that may go unnoticed by traditional cybersecurity measures.
2. Streamlined Incident Response
With their ability to quickly identify and respond to potential threats, LLMs can help streamline incident response processes. This saves time and resources, allowing security teams to focus on more critical tasks.
3. Automated Threat Hunting
LLMs can be used to automate the process of hunting down potential threats. By analyzing vast amounts of data and identifying patterns, Chatbots can narrow down potential attack vectors and provide actionable insights.
4. Improved User Training
LLMs can be used as a tool to improve user training and awareness. By interacting with employees and providing them with targeted training modules, Chatbots can help build a culture of cybersecurity awareness within an organization.
5. Mutating Malware Detection
One of the most significant advantages of ChatGPT and LLMs in cybersecurity is their ability to detect mutating malware. Mutating malware is designed to evade detection by traditional security measures, but Chatbots can analyze its patterns and behaviours and identify it before it causes harm.
With all these benefits, it’s no wonder why many CISOs are turning to AI for advanced cybersecurity solutions. By combining AI technologies with human expertise, organizations can better protect themselves from cyberthreats and stay one step ahead of both AI and human attackers.
Conclusion
When it comes to cybersecurity, AI can be a lifesaver. With the cyberattack landscape becoming more complex and sophisticated every day, the industry will become more dependent on AI to help protect businesses from cyberthreats.
Using AI to Support Not Replace Overworked Cybersecurity Professionals
The rise of AI has sparked concerns on whether or not humans will be replaced in the cybersecurity field. Recently there has been some debate about whether advanced generative AI could replace human workers entirely or if they should be used just for support purposes. We share the opinion that human oversight remains essential in the fight against cybercrime. Yes, machines can detect patterns that humans may miss, but only people can provide context and determine when such patterns are indicative of an actual threat. Organizations need both people and machines working together when it comes to security. The right balance between them depends greatly on individual circumstances and risk levels within each organization.
And do not forget, by joining us in Stockholm, you will gain more valuable insights into the impact of AI on cybersecurity and learn how to leverage human expertise for optimal results. Start preparing for an AI-driven future now.