Thu, Mar 23, 2023

Emerging Chatbot Security Concerns


In late 2022, artificial intelligence (AI) chat and conversational bots garnered large followings and user bases. AI chatbots, including OpenAI's ChatGPT, Meta's BlenderBot 3 and Sparrow from Google's DeepMind, have numerous benefits and potential uses, including possibly replacing current search engines, but there are notable drawbacks.

User experimentation with AI chatbots has surfaced unreliable and inaccurate answers, creating the potential for the spread of misinformation and deception, and security concerns persist. Many of these conversational bots are still in testing and should not be trusted with searches and tasks that require accuracy and credibility.

Cybercriminals with minimal sophistication have used chatbots for nefarious purposes such as developing malicious code, crafting phishing emails, running scam giveaways and building fake landing pages for phishing websites and adversary-in-the-middle (AitM) attacks.

In an attempt to prevent malicious use of the software, OpenAI has blocked access to ChatGPT in countries such as Russia, China and Iran. Russian threat actors have discussed workarounds to this restriction, as access to the software can streamline routine cybercriminal activities.

Cybersecurity Concerns within AI Chatbots

With the technological developments surrounding AI, cyber threat actors were quick to attempt to use this software for malicious purposes. While the developers of ChatGPT and competitors such as Sparrow have implemented safeguards and continue to address risks associated with the software, the potential for use in cyberattacks remains.

Chatbots Used in Low-Sophistication Attacks

Threat actors are currently experimenting with using chatbots for malicious purposes. There are indications that chatbots could increase the frequency and sophistication of attacks, for example by lowering the barrier to writing code and crafting phishing emails.

Notably, ChatGPT has been writing, completing or aiding in the development of computer code, and threat actors have found ways to have the chatbot assist in writing malware. Code that threat actors have coaxed out of ChatGPT ranges from information stealers to decryptors and encryptors built on popular encryption ciphers, as the benign sketch below illustrates. In other examples, threat actors have begun experimenting with ChatGPT to create dark web marketplaces.
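To show how low the technical bar is, the following is a minimal, benign sketch of the sort of file "encryptor" a chatbot will produce on request, written here with the Python cryptography library's Fernet cipher. The library choice, function names and file paths are illustrative assumptions, not code taken from the incidents described in this report.

# Benign illustration: symmetric file encryption/decryption with Fernet.
# Equivalent boilerplate is trivial for a chatbot to generate on request.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # symmetric key; whoever holds it can decrypt
cipher = Fernet(key)

def encrypt_file(path: str) -> None:
    # Read the plaintext and write an encrypted copy alongside it.
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".enc", "wb") as f:
        f.write(cipher.encrypt(data))

def decrypt_file(path: str) -> None:
    # Reverse the operation, restoring the original file contents.
    with open(path, "rb") as f:
        token = f.read()
    with open(path.removesuffix(".enc"), "wb") as f:
        f.write(cipher.decrypt(token))

encrypt_file("report.txt")      # produces report.txt.enc
decrypt_file("report.txt.enc")  # restores report.txt

The point is brevity: a task that once required some programming knowledge now takes a single prompt.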

Security researchers are currently testing ChatGPT to probe its limits, and the results include potentially undetectable malware capable of code injection and even self-mutation. Malware that mutates its own code to evade detection, known as "polymorphic malware," is considerably more advanced than the code ChatGPT initially produces. With chatbots widely accessible to cyber threat actors of all sophistication levels, both the frequency with which the application is weaponized in attacks and the damage those attacks cause may increase.

Existing methods that low-sophistication cybercriminals use to obtain tooling and distribute malware, such as purchasing malware builders, phishing-as-a-service (PhaaS) kits or access from initial access brokers (IABs), may decline in use due to the accessibility chatbots offer.


Figure 1: Example of ChatGPT Writing Potentially Malicious Code (Source: MUO)

Chatbots Used for Phishing

For many threat actors, phishing emails remain one of the most popular methods of gaining initial access and harvesting credentials. One traditional way to spot a phishing email is through incorrect spelling and punctuation, but chatbots have been used to create elaborate, more believable and more human-like phishing emails to which threat actors can attach malware. The emails can be tailored to specific companies or organizations, making credential harvesting attempts more convincing. Beyond emails, AI chatbots can generate scam messages that promote false giveaways.

These chatbot phishing emails can also link to a fake landing page of the kind commonly used in phishing and AitM attacks. While chatbots do have guardrails and can block certain requests outright, with the right wording threat actors are able to use them to craft socially engineered emails more believable than many threat actors could write on their own.
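Because chatbot-written lures remove misspellings as a tell, defenders can lean on structural signals instead. The sketch below, a hypothetical illustration rather than a production rule set or a Kroll tool, flags two such signals in an HTML email body: visible link text that points to a different domain than its actual target, and clusters of urgency language. The keyword list and sample message are assumptions for demonstration.

import re
from html.parser import HTMLParser

# Illustrative keyword list; a real filter would be far broader.
URGENCY_TERMS = {"urgent", "verify", "suspended", "immediately"}

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs so mismatches can be flagged."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._in_anchor = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.append([dict(attrs).get("href", ""), ""])
            self._in_anchor = True

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_anchor = False

    def handle_data(self, data):
        if self._in_anchor and self.links:
            self.links[-1][1] += data

def suspicious_signals(html_body: str) -> list[str]:
    signals = []
    auditor = LinkAuditor()
    auditor.feed(html_body)
    for href, text in auditor.links:
        # Visible text that looks like a URL but resolves elsewhere is a
        # classic credential-harvesting pattern.
        shown = re.search(r"https?://([\w.-]+)", text)
        actual = re.search(r"https?://([\w.-]+)", href)
        if shown and actual and shown.group(1) != actual.group(1):
            signals.append(f"link text {shown.group(1)} != target {actual.group(1)}")
    hits = URGENCY_TERMS & set(re.findall(r"[a-z]+", html_body.lower()))
    if len(hits) >= 2:
        signals.append(f"urgency language: {sorted(hits)}")
    return signals

body = ('<p>Your account is suspended. Verify immediately: '
        '<a href="http://evil.example">https://bank.example/login</a></p>')
print(suspicious_signals(body))

Checks like these complement, rather than replace, mail-gateway filtering and user awareness training.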


Figure 2: Example of ChatGPT Creating Phishing Emails (Source: TechRepublic)


Figure 3: Example of a Scam Giveaway Message into Which a Download Link Could Be Inserted (Source: MUO)


Figure 4: ChatGPT Writing Code for a Landing Page (Source: MUO)


Figure 5: Dark Web Post Explaining Process to Use ChatGPT in Russia (Source: Flashpoint Intelligence Platform)

Ethical Concerns with Chatbots

Every publicly available chatbot so far has either been manipulated into outputting false information or produced results that appeared correct but were factually wrong. ChatGPT, for example, currently has no verification process to determine whether the results it outputs are accurate.

Chatbots, therefore, have the potential to give nation-state threat actors, radical groups or "trolls" the capability to generate large volumes of misinformation that can then be spread via bot accounts on social media to build support for their point of view. BlenderBot 3, for example, was widely reported to produce results that were racist, antisemitic or otherwise offensive, and that included misinformation.

Recommendations for Remaining Secure

Chatbots are still new software whose potential has yet to be fully realized. Recommendations for remaining secure therefore remain grounded in established cybersecurity practices, as chatbots can currently play only a supplementary role in attacks: they lack higher-level situational awareness, such as the ability to detect phishing and scams or to verify information found on the web or on social media.

Most prospective solutions for improving the security and reliability of chatbots are currently based on software that attempts to detect whether a piece of writing was produced by a chatbot. These tools will take time to reach a high level of accuracy, but once they do they will be valuable for verifying sources and detecting plagiarism.
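As one illustration of how such detection tools commonly work, the sketch below scores a passage by its perplexity under a small public language model, assuming the Hugging Face transformers library and the GPT-2 model; unusually low perplexity (highly predictable text) is one weak signal of machine generation. This is a heuristic for illustration only, not a reliable detector.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Small public model used purely as a scorer.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return how 'predictable' the text is to the model (lower = more)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return float(torch.exp(out.loss))  # exp of mean cross-entropy loss

sample = "The quarterly report indicates sustained growth across all regions."
print(f"Perplexity: {perplexity(sample):.1f}")  # lower -> more model-like

Real products combine many such signals, and even then both false positives and false negatives remain common.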

To read more on modern cybersecurity practices that will keep your organization secure, take a look at Kroll’s 10 Essential Cybersecurity Controls for Increased Resilience.


