AI / Machine Learning
January 3, 2024

The security issues surrounding LLMs like GPT, LLaMA, and Bard.

Artificial Intelligence (AI) and Machine Learning (ML) are transforming the way businesses operate across a multitude of sectors, particularly how they communicate and interact. From personalizing customer experiences to analyzing large volumes of data, AI tools like OpenAI's GPT - an AI language model capable of generating human-like text based on the input provided - have proven to be game-changers.

ChatGPT leverages the GPT (Generative Pre-trained Transformer) family of language models to produce text-based responses that resemble human conversation. Put simply, it relies on training over a vast dataset of text and on its language-processing capabilities to understand the context of user inputs.

The range of things we can do with this technology is wide open, and the applications for businesses are diverse, including document summarization, speech recognition, text and code generation and translation, strategy formulation, content creation for various platforms, and customer support, among others.

Despite the remarkable gains in speed, efficiency, and productivity that such AI-powered tools offer, they are not without security concerns. For enterprises, understanding and mitigating the inherent risks that come with integrating AI into their systems is vital.

This article focuses primarily on ChatGPT and its associated security risks, given that it serves as a representative example. As such, it may not encompass the entire spectrum of LLMs and their specific risks.

Data privacy, breaches, and unauthorized access

Data privacy is one of the primary risks associated with using AI and machine learning software, given that AI models are trained on huge datasets that might contain sensitive information. While OpenAI states that GPT does not store personal inputs, the misuse of AI could still lead to privacy breaches, either through inadvertent exposure of sensitive data or malicious exploitation by bad actors.

A recent study found that 15% of employees regularly paste company data into ChatGPT. The report, which analyzed the behavior of over 10,000 employees and how they use generative AI apps at work, also reveals that nearly 25% of those visits include a data paste.

For example, chatbots can record users' notes on any topic and then summarize that information or search for more details. But if those notes include sensitive data - an organization's intellectual property or sensitive customer information, for instance - it enters the provider's systems and the user no longer has control over the information.

Companies can protect data privacy by using data anonymization and masking, which add an extra layer of protection before information ever reaches a model. Training AI chatbots with synthetic or fake data can further reduce the risk of privacy breaches.
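As a minimal sketch of the idea, the snippet below redacts likely PII from a prompt before it leaves the company's perimeter. The regex patterns and placeholder tokens are illustrative assumptions; a real deployment would rely on a dedicated PII-detection library tuned to the organization's data.

```python
import re

# Hypothetical redaction patterns; real systems need broader, tested coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_sensitive(text: str) -> str:
    """Replace likely PII with placeholder tokens before sending text to an external LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize this: contact john.doe@acme.com or +1 555 123 4567."
print(mask_sensitive(prompt))
# Summarize this: contact [EMAIL] or [PHONE].
```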

Other foundational security risks are the potential for unauthorized access and data leakage. AI tools like ChatGPT are often cloud-based and require an internet connection to work; many also provide APIs for custom integration. The data shared with these tools is typically processed and stored in the cloud, which can expose sensitive business information to cyber-attacks if not adequately protected.

For example, chat logs containing sensitive company information could be accessed, manipulated, or deleted during a cyber attack, which could lead to major business disruptions and violate data protection laws.

Whenever you have a popular app or technology, it’s only a matter of time until threat actors target it. In the case of ChatGPT, the exploit came via a vulnerability in the Redis open-source library. This allowed users to see the chat history and payment information of other active users.

To mitigate these risks, companies should employ stringent security measures like end-to-end encryption, which ensures that data cannot be deciphered in transit. Implementing strong user authentication and authorization mechanisms also helps to keep unauthorized users out. Particularly in the context of GDPR, organizations might want to deploy AI systems that do not require the processing of personal data whenever possible. Where that is unavoidable, privacy-preserving technologies like differential privacy, which allow AI tools to learn from patterns in the data without exposing individual data points, can help.
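To make the differential privacy idea concrete, here is a minimal sketch of the classic Laplace mechanism applied to an aggregate query. The query, epsilon value, and data are hypothetical; production systems should use a vetted differential privacy library rather than hand-rolled noise.

```python
import numpy as np

def private_count(values, threshold, epsilon=1.0, sensitivity=1.0):
    """Release a differentially private count of values above a threshold.

    Laplace noise with scale sensitivity/epsilon limits how much the
    released figure can reveal about any single individual's record.
    """
    true_count = sum(v > threshold for v in values)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: customers with purchases above $1,000
purchases = [250, 1200, 980, 4300, 150, 2200]
print(private_count(purchases, threshold=1000, epsilon=0.5))
```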

Finally, there are plugins, especially third-party ones, that may not always follow good security practices, potentially leading to sensitive data being transmitted to or stored on insecure platforms. Issues like insufficient encryption, non-compliance with data protection regulations like GDPR or CCPA, and vulnerabilities due to inadequate updating and maintenance can further exacerbate these risks.

To safeguard against these threats, users need to examine plugins thoroughly before usage, ensuring they adhere to strict data privacy standards and are regularly maintained. Ensuring that plugins have robust access controls, clear data retention policies, and are compliant with legal requirements is critical for maintaining data security and preventing unauthorized access to sensitive information.

Model Manipulation

Another risk factor is model manipulation or adversarial attacks, which involve a threat actor feeding manipulated data to the AI system to alter its decision-making process. This could be done by skewing the input data or tampering with the underlying algorithms to make the model generate inappropriate, offensive, or harmful outputs. For instance, this could occur in a financial institution, where altering a model’s output might lead to incorrect credit scoring and improper loan issuance.

While OpenAI has implemented several safeguards to reduce the chance of misuse and mitigate these risks through extensive testing and updates to the models and systems, it acknowledges that there is an ongoing challenge to ensure that its AI technologies are not misused.

This security risk can be mitigated by introducing redundancies in decision-making, such as requiring human approval for significant decisions. Moreover, employing anomaly detection systems can help identify and reject unusual patterns that might indicate an adversarial attack.
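As a rough illustration of what such an anomaly detector could look like, the sketch below flags unusual requests based on a few made-up numeric features (prompt length, special-character ratio, request rate). The features, data, and thresholds are assumptions for demonstration only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-request features: [prompt length, special-char ratio, requests/min]
normal_traffic = np.array([
    [120, 0.02, 3],
    [340, 0.01, 5],
    [90,  0.03, 2],
    [410, 0.02, 4],
])

# Train the detector on traffic considered normal
detector = IsolationForest(contamination=0.05, random_state=0).fit(normal_traffic)

incoming = np.array([
    [150, 0.02, 4],      # resembles normal usage
    [5000, 0.45, 120],   # unusually long, symbol-heavy, high-rate input
])

# predict() returns 1 for inliers and -1 for requests that should be
# flagged or refused before they reach the model
print(detector.predict(incoming))
```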

Biased AI behavior and misinformation

AI models learn from the data they are trained on, so if the training data is biased, the AI's output will reflect that bias. Such biased behavior can lead to wrong decisions, harm an organization's reputation and customer relationships, and potentially lead to legal complications.

At the same time, we are in the era of real-time news and social media dominance, and distinguishing authentic news from fake news can be genuinely difficult. There is a real concern that bad actors could exploit ChatGPT to spread untruthful information: they might use this conversational AI to rapidly craft false news stories and even mimic famous personalities' speech patterns. We have already seen AI used to craft a story in the style of Barack Obama, and more amusing cases like the AI-generated image of the pope in a white puffer jacket.

Enterprises can effectively mitigate the risk of biased AI behavior and misinformation from chatbot models by investing in robust controls. It is crucial to supervise AI responses with human reviewers, creating a comprehensive feedback loop for continuous improvement of the system. Engaging users and experts in assessing the system's behavior can help establish suitable use cases and system boundaries. Lastly, providing AI education and being upfront with users about how chatbots work, including their limitations, can minimize the propagation of misinformation.

Impersonation and phishing emails

Scammers use various tactics to impersonate ChatGPT and trick users into revealing sensitive information. One common tactic is to create fake ChatGPT accounts or chatbots on online platforms such as social media sites or messaging apps; the scammers then reach out to users, offering them the services of ChatGPT.

Cybercriminals also pitch ChatGPT as something that can help improve business operations, provide financial advice, or even offer a loan. Once the scammer has gained the user's trust, they ask the user to provide personal or business account information, such as login credentials or bank account details.

We're not going to dwell on ChatGPT's coding abilities, but even limiting the discussion to its ability to generate text, the possibilities it offers threat actors are quite impressive and likely to improve quickly.

Today, ChatGPT is already able to write emails indistinguishable from those written by humans, in any writing style or language. It can generate text for social media posts, YouTube video scripts, website content, press releases, reviews—anything and everything an attacker needs to create a fake web presence, a fake persona.

When it comes to phishing, attackers can start by using ChatGPT and similar platforms to generate individual realistic-sounding emails. With open-source versions of the technology also rapidly becoming available, those with more advanced skills and access to compromised email accounts will be able to train their AIs on a company’s stolen communications. With scripting and automation, they can create an infinite number of mass-produced customized communications using AIs that can learn in real time what works and what doesn’t.

Lack of Explainability

Another big risk involved with AI tools is the lack of explainability and accountability. AI-based decisions can sometimes be opaque, making it challenging to understand how a particular outcome was reached. This lack of explainability can be a major issue, particularly during a security breach, as tracking the breach back to its source might be difficult.

AI-based models sometimes produce outcomes that are difficult to interpret or justify. For instance, if ChatGPT weights its language generation heavily toward a specific input, it can be hard to trace why it is producing those results. This risk can be mitigated by adopting Explainable AI (XAI) approaches and techniques that unravel the decision-making process of AI models. These include Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), which can give understandable accounts of how a decision was made.
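As a hedged illustration, the sketch below applies SHAP to a conventional tabular classifier standing in for an AI-driven decision system; it assumes the shap and scikit-learn packages are installed. Explaining a large generative model like ChatGPT is considerably harder and remains an active research area.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Stand-in for an AI-driven decision system (e.g., a scoring model)
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to individual input features,
# giving an auditable account of why the model decided what it did
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# Per-feature contributions for each of the five predictions; depending on
# the shap version this is a list of arrays (one per class) or a 3D array.
print(shap_values)
```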

Then there are AI hallucinations. While GPT-4 is indeed more accurate than its predecessor, it is not 100% free of falsehoods: it can still "hallucinate" (i.e., make up "facts") and put out flawed logic, though it does so less frequently than GPT-3.5. And since GPT-4 still doesn't cite its sources, users must verify its output, no matter how truthful that output seems.

Mitigating Security Risks

As enterprises integrate sophisticated AI tools into their operations, the security risks must be addressed with diligence and foresight. To safeguard against these vulnerabilities, enterprises must establish robust security practices. These could include access controls and encryption for data security, regular monitoring and audits to spot irregular patterns, adhering to privacy regulations by anonymizing data, and fairness analysis to detect and rectify bias. A robust understanding of AI is crucial to ensure its effective and secure implementation - enterprises should invest in awareness and training initiatives, and develop a strong AI governance framework.

Ultimately, implementing AI tools like ChatGPT in enterprises is a delicate balance between leveraging their benefits and ensuring security. It requires a deep understanding of the system, its potential vulnerabilities, and proactive measures to mitigate the associated risks. AI is undeniably transforming the enterprise landscape, but it must be embraced and managed cautiously and conscientiously.

The future of AI chatbots and cybersecurity

The horizon for AI chatbots, including current ones and emerging competitors, offers excitement mixed with significant challenges. As funding for artificial intelligence increases, a transformation is unfolding: AI chatbots are becoming not just faster but also more personalized, precise, and intuitive than ever before.

AI chatbots are on the way to becoming omnipresent across our digital ecosystem: we will find them integrated into mobile applications, voice assistants, search engines, social media, and websites. They are set to revolutionize a variety of sectors, including entertainment, healthcare, education, finance, and software development, and this integration is expected to streamline operations and enhance productivity.

However, this bright future is not without its shadows. Advanced chatbots like ChatGPT could inadvertently become powerful tools for bad actors, and there is a real risk that these sophisticated programs could be used to rapidly craft more malware and social engineering attacks than we have ever seen.

But it is not all bad news for cybersecurity: AI chatbots also harbor the potential to become cyber guardians. By leveraging their ability to detect anomalous patterns across documents, emails, applications, and network traffic, chatbots could serve as early warning systems against cyber threats. Furthermore, a nuanced AI tool like ChatGPT can be deployed to train employees in cybersecurity practices, significantly reducing their susceptibility to phishing and other cyber-attacks.

The rapid development of generative AI technologies marks the beginning of a new period filled with opportunities and risks. As these innovations permeate our lives, it is imperative for developers and regulatory bodies to navigate the complexities of privacy (such as GDPR compliance), intellectual property rights, and the battle against misinformation.

In conclusion, while the future of AI chatbots promises to elevate our interaction with technology, they must be used with an awareness of the potential risks. By embracing AI's dual role as both a potential vector for threats and a defense mechanism against them, companies can harness its power while ensuring they stay protected.

References