The Effect of ChatGPT on Cyber Security

15 March, 2023
Maarten de Bakker
Marketing Manager

Cybersecurity is an essential aspect of our modern world, and as technology advances, so do the threats we face. One of the biggest threats is that of cyber attacks, which can lead to data breaches, financial losses and reputational damage.

To deal with these threats, companies and governments are using various tools and technologies to improve their security. One of the recent developments that can help improve cybersecurity is the much-discussed ChatGPT. In this article, we discuss, among other things:

  1. Improved cybersecurity with ChatGPT
  2. The dangers of ChatGPT
  3. Whether or not to deploy ChatGPT

ChatGPT makes powerful AI available to everyone. What does this mean for cybersecurity? Because ChatGPT is trained using deep learning models, it can understand human language and communicate as if it were human. This technology can be used in several ways to improve cybersecurity, but in the wrong hands it also poses a serious threat.

Enhanced cybersecurity

First, ChatGPT can be used to improve email security. Email is one of the most common vectors for cyber attacks such as phishing and malware. ChatGPT can analyze the content of emails and assess whether they are suspicious. It can also automatically respond to suspicious emails or provide security advice to employees.
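As a rough illustration of what such email screening could look like, the sketch below packages an email into a chat-style prompt and parses the model's verdict. The helper names, the system prompt, and the SUSPICIOUS/SAFE labels are our own assumptions for this example; the article does not describe a concrete implementation, and in practice the messages would be sent to a chat completion API.

```python
# Sketch: asking a ChatGPT-style model to flag suspicious emails.
# Helper names and prompt wording are illustrative assumptions.

SYSTEM_PROMPT = (
    "You are an email security assistant. Classify the email as "
    "SUSPICIOUS or SAFE and give a one-line reason."
)

def build_messages(subject: str, body: str) -> list[dict]:
    """Package an email into a chat-style prompt for the model."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
    ]

def parse_verdict(model_reply: str) -> bool:
    """Return True if the model's reply flags the email as suspicious."""
    return "SUSPICIOUS" in model_reply.upper()
```

In a real deployment, `build_messages(...)` would be passed to the model API and `parse_verdict` applied to its reply, with a human analyst reviewing anything flagged.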

In addition, ChatGPT may be used to improve data security. Deployed as a virtual security officer, it can help companies identify suspicious activity and take steps to prevent data breaches. It can also be used to train employees in secure behavior and make them aware of the risks of cyber attacks.

Another potential use of ChatGPT is to improve the security of Internet of Things (IoT) devices. IoT devices are vulnerable to cyberattacks because they often lack security features and can be easily manipulated. ChatGPT can be used to monitor IoT devices and identify when they are being tampered with. It can also be used to automatically respond to suspicious activity and improve the security of IoT devices.
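One way such IoT monitoring might be wired up is to pre-filter device events with simple rules and only send the unusual ones to the model for review. The field names, thresholds, and prompt below are illustrative assumptions, not a pipeline described in the article.

```python
# Sketch: pre-filtering IoT device events before asking an LLM to review them.
# Field names and thresholds are illustrative assumptions.

BASELINE_PORTS = {80, 443, 8883}  # ports this device normally uses (assumed baseline)

def is_anomalous(event: dict) -> bool:
    """Flag events that deviate from the device's assumed normal behavior."""
    return (
        event.get("dest_port") not in BASELINE_PORTS   # unexpected destination
        or event.get("bytes_out", 0) > 1_000_000       # unusually large upload
        or event.get("firmware_changed", False)        # firmware was modified
    )

def to_review_prompt(event: dict) -> str:
    """Turn a flagged event into a question for the model."""
    return (
        "An IoT device produced this event. Is it likely tampering? "
        f"Answer TAMPERING or BENIGN.\nEvent: {event}"
    )
```

Keeping a cheap rule-based filter in front of the model limits both API costs and the amount of device telemetry shared with an external service.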


Dangers of ChatGPT

While ChatGPT can potentially help improve cybersecurity, its use also comes with risks and challenges. If ChatGPT is improperly configured, it may assess suspicious activity incorrectly, and data breaches may still occur.

The first risk is human error. However intelligent, ChatGPT depends on proper configuration to correctly assess suspicious activity and take appropriate action to prevent data breaches. Mistakes here can lead to misjudgments that weaken rather than strengthen an organization’s security. A related risk is that hackers attack the technology itself: if ChatGPT is compromised, it could misjudge suspicious activity or even open systems to attackers.

A further danger is that using ChatGPT can lead to a false sense of security. If organizations become too dependent on technology like ChatGPT, they may overlook the human factor: employees may be less alert to suspicious activity, and the organization may invest less in employee training and awareness.

But the real elephant in the room, of course, is the high risk of abuse of ChatGPT. The very lifelike way ChatGPT can communicate makes it ideal for cybercriminals. For example, it can be used to greatly enhance phishing emails to steal trade secrets or collect personal information from employees or customers. It also helps criminals create deepfakes and thereby impersonate someone else.


Image created by DALL-E, an AI tool developed by OpenAI that can generate images based on textual descriptions.


It is important to note that this is not a future scenario; it is already happening. In one case, a cybercriminal used deepfake software to mimic the voice of the CEO of a German company and convinced another executive to rush a $243,000 wire transfer in support of a fabricated corporate emergency. In another attack, a criminal used deepfake audio and forged emails to convince an employee of a company in the United Arab Emirates to transfer $35 million to an account in support of a supposed corporate takeover.

AI makes it easier for cybercriminals to carry out such attacks. Identity security and cyber awareness among employees are therefore becoming increasingly important.


Good or bad?

In short, ChatGPT can be a valuable addition to the security tools and technologies currently available to help businesses and governments protect their data and systems from cyber attacks. But there are also significant risks and challenges associated with its use. It is essential that organizations think carefully about how they deploy ChatGPT and what measures they take to ensure that it is used appropriately and does not create new security risks.

Want to know more? Contact Cyber Security specialist Mikel Warbie.