The rapid advancement of generative AI technologies has brought about unprecedented opportunities for creative expression and innovation. However, this progress also introduces significant risks, particularly in cybersecurity, personal scams, and other forms of misuse. AI is a tool that amplifies human capability, whether it is applied to productive or antagonistic ends.
Generative AI can be exploited to create sophisticated attacks, such as deepfakes, phishing emails, and malware, which can deceive even the most vigilant individuals and systems. These threats have the potential to undermine trust in digital communications and compromise sensitive information. The International AI Safety Report,[i] published in January 2025, built a shared understanding among 96 contributors worldwide of the real risks created by gaps in the regulation and governance of AI. Unfortunately, at the AI Action Summit in Paris in February 2025, the participants failed to reach consensus on the Declaration for Inclusive and Sustainable AI, with the US and the UK declining to sign on the argument that over-regulation of technology stifles innovation.[ii]
For those of us who have been working towards a worldwide agreement that could lead to a global AI institution (analogous to what the International Atomic Energy Agency (IAEA) is to nuclear energy) to guide and better regulate the use and development of AI, the rejection from the UK and the US is a setback. It is also a realisation that we are living through dangerous times: the geopolitical environment has become less collaborative, worldwide agreements on key topics like AI safety will require more effort and patience, and the delay will unfortunately have consequences for all of us globally. In the meantime there is a pressing need for innovative solutions that can effectively counter these emerging threats and help safeguard individuals from the malicious use of generative AI. So, not wasting a good crisis, innovators can now step up where regulation is lagging or failing altogether.
Innovation in AI security will be crucial to stay ahead of these evolving threats. Traditional security measures often rely on static rules and signatures, which can be easily bypassed by AI-generated attacks. To combat this, one option for researchers and developers is to turn to discriminative AI models, which can learn to distinguish between legitimate and malicious content.
These models hold promise for enhancing intrusion detection systems, improving phishing detection, and identifying malware more effectively. Moreover, combining generative and discriminative AI can simulate realistic threat scenarios, allowing for proactive defense strategies while building reactive capabilities for immediate attack response. It is important to acknowledge that the days of “impenetrable walls” are long gone. Given current attacker capabilities (and worse still once capabilities like quantum computing can break asymmetric encryption in minutes), our prevailing paradigm for protecting systems, data, and people is frankly outdated and obsolete. It is no longer a question of whether you will be breached or attacked, but when and how.
The competence of a cybersecurity and AI protection capability will be measured by its ability to respond to and isolate a threat, rather than by the zero-penetration standard of the past.
So, to create more robust and adaptive systems that protect organizations and people from the increasingly sophisticated attacks enabled by generative AI, we will likely have to go back to the origins of generative AI, the Generative Adversarial Network (GAN),[iii] and use its discriminator network as design inspiration for AI protective systems: AI that defends us against AI.
Introduction to Generative Adversarial Networks (GANs)
Generative Adversarial Networks (GANs) are a neural-network training framework designed to solve the generative modeling problem. They consist of two main components: a generator and a discriminator. The generator learns to produce new data samples that resemble existing data, while the discriminator evaluates these samples to determine whether they are real or generated. Through a competitive process, the generator improves its ability to create realistic data, and the discriminator becomes better at distinguishing real from fake data. This adversarial process allows GANs to generate high-quality, realistic images and other data types without explicitly modeling the underlying probability distribution. GANs have been successfully applied to various tasks, including image generation, data augmentation, and style transfer, making them a powerful tool in machine learning and AI applications.
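To make the adversarial loop concrete, here is a deliberately tiny sketch (illustrative only, not taken from any cited work): the "data" is a one-dimensional Gaussian, the discriminator is a logistic regression, and the generator is a two-parameter affine map of noise. The distributions, learning rate, and step count are all assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Real data the generator must learn to imitate: samples from N(4, 0.5).
REAL_MEAN, REAL_STD = 4.0, 0.5

# Discriminator D(x) = sigmoid(w*x + b); generator G(z) = m + s*z, z ~ N(0, 1).
w, b = 0.0, 0.0
m, s = 0.0, 1.0

lr, batch = 0.05, 64
for _ in range(3000):
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    fake = m + s * rng.normal(0.0, 1.0, batch)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= lr * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    b -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator update (non-saturating loss): push D(fake) toward 1.
    z = rng.normal(0.0, 1.0, batch)
    fake = m + s * z
    d_fake = sigmoid(w * fake + b)
    m -= lr * np.mean((d_fake - 1.0) * w)
    s -= lr * np.mean((d_fake - 1.0) * w * z)

print(f"learned generator mean ~ {m:.2f} (target {REAL_MEAN})")
```

After training, the generator's mean parameter drifts toward the real data's mean, purely because the discriminator's feedback tells it where "real" lives.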
[Figure: GAN architecture showing the generator and discriminator networks. Source: Solulab, “Generative Adversarial Networks,” available at: https://www.solulab.com/generative-adversarial-network/]
The discriminator network acts as a classifier that determines whether a sample is real or fake. Its input can come either from the training dataset or from the generator, and its role is to classify each sample accordingly. The hypothesis here is that GAN discriminators can serve as architectural blueprints for detection networks that identify generative patterns in the signals we receive (e.g., text, sound, images) and alert us to the probability that generative AI was used in their creation. GAN discriminators are trained to distinguish between real and fake data, an ability that can be adapted for cybersecurity tasks like anomaly detection and intrusion detection. By learning to identify patterns that differentiate legitimate from malicious activity, these models can improve the accuracy of security systems.
Generative Adversarial Networks (GANs) have evolved significantly since their introduction, leading to various architectures tailored for specific tasks.
CycleGAN is used for transforming images between different styles, such as converting summer images to winter ones or transforming horse images into zebra images, making it useful for applications like FaceApp.

StyleGAN excels at generating high-resolution images, such as realistic human faces, and is showcased on platforms like “This Person Does Not Exist.”

PixelRNN (strictly an auto-regressive model rather than an adversarial one) predicts pixel values sequentially and is useful for modeling complex distributions such as natural images. Text-to-Image GANs generate images based on textual descriptions, allowing for the creation of images that match specific descriptions.

DiscoGAN and CycleGAN both learn cross-domain relations but differ in their loss functions, with DiscoGAN using two reconstruction losses.

Least Squares GAN (lsGAN) improves upon traditional GANs by using least squares loss instead of cross-entropy, enhancing stability and image quality. These diverse architectures demonstrate the versatility of GANs in various applications across computer vision and beyond.[iv]
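The stability argument for the least-squares loss can be shown numerically. The sketch below (an illustration, not code from the cited survey) compares generator gradients for fake samples the discriminator already rejects confidently: the minimax cross-entropy objective saturates, while the least-squares objective keeps producing a useful learning signal.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Raw discriminator outputs (logits) for fake samples the current
# discriminator rejects with varying confidence.
logits = np.array([-8.0, -4.0, -1.0])

# Minimax generator objective log(1 - D(G(z))): its gradient w.r.t. the
# logit is -sigmoid(logit), which vanishes for confident rejections.
bce_grad = -sigmoid(logits)

# Least-squares generator objective (logit - 1)^2: its gradient
# 2*(logit - 1) stays large, so badly fooled samples keep driving learning.
ls_grad = 2.0 * (logits - 1.0)

print("saturating cross-entropy gradients:", np.round(bce_grad, 4))
print("least-squares gradients          :", np.round(ls_grad, 4))
```

The most confidently rejected sample (logit -8) gets an almost-zero cross-entropy gradient but a large least-squares gradient, which is exactly the stability benefit described above.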
As GAN technology continues to evolve, the potential for discriminators to transform cybersecurity is vast. They can be used to create more robust defenses by simulating various network traffic patterns, allowing organizations to proactively strengthen their security postures. Furthermore, discriminators can aid in developing anti-phishing systems by mimicking sophisticated phishing tactics, thereby enhancing resilience against these threats.
In the future, we can expect to see discriminators play a pivotal role in creating adaptive security systems that can anticipate and respond to emerging threats more effectively. By harnessing the power of discriminative AI, we can build more secure digital environments capable of withstanding the increasingly sophisticated attacks enabled by AI itself. This not only underscores the importance of discriminators in GANs but also highlights their potential to redefine the landscape of cybersecurity and beyond.
Use Cases for the GAN Discriminator in Cybersecurity
Anomaly Detection: Anomaly detection using Generative Adversarial Networks (GANs) involves training a generator to learn the distribution of normal data, while a discriminator evaluates whether new data points are likely to be part of this distribution. By leveraging the discriminator’s ability to distinguish between real and fake data, GAN-based anomaly detection methods can identify data points that deviate significantly from the learned normal distribution, effectively flagging them as anomalies without requiring labeled anomalous data. A user can additionally annotate unique markers of authenticity to further improve the accuracy of the anomaly detection capability.
Application: Train a discriminator to recognize normal network traffic patterns. When it encounters unusual patterns, it flags them as potential threats.
Example: A company uses a GAN discriminator to monitor its network for unusual login attempts, helping to detect and prevent unauthorized access.
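The login-monitoring example above might be sketched as follows. This is a hypothetical, heavily simplified illustration: the two "traffic features", the radial feature map, and the use of broad background noise as the "fake" class are all assumptions standing in for a learned GAN discriminator.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Hypothetical standardised features per connection, e.g. [packet rate,
# failed-login ratio]; normal traffic clusters around the origin.
normal = rng.normal(0.0, 1.0, size=(500, 2))
# Broad background samples play the generator's "fake" role, teaching the
# discriminator where normal traffic does NOT live.
background = rng.uniform(-6.0, 6.0, size=(500, 2))

def feat(X):
    # Radial feature (scaled squared distance from the normal cluster's
    # centre); a deep discriminator would learn such representations itself.
    return (X ** 2).sum(axis=1) / 10.0

r = np.concatenate([feat(normal), feat(background)])
y = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = normal, 0 = background

# One-dimensional logistic-regression discriminator, trained by gradient descent.
w, b = 0.0, 0.0
for _ in range(3000):
    p = sigmoid(w * r + b)
    w -= 0.05 * np.mean((p - y) * r)
    b -= 0.05 * np.mean(p - y)

def anomaly_score(x):
    """1 - D(x): high when the sample looks unlike normal traffic."""
    return 1.0 - sigmoid(w * feat(x[None, :])[0] + b)

usual = np.array([0.2, -0.3])   # inside the normal cluster
burst = np.array([5.5, 5.0])    # e.g. a sudden storm of failed logins
print("usual:", round(anomaly_score(usual), 3), "burst:", round(anomaly_score(burst), 3))
```

A connection resembling normal traffic scores low, while an outlying burst scores close to 1 and would be flagged for investigation, without any labeled attack data.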
Phishing Detection: Phishing detection using Generative Adversarial Networks (GANs) involves training a discriminator to classify emails as legitimate or phishing attempts based on content and sender characteristics. By generating realistic phishing scenarios, GANs can enhance the training of detection systems, allowing them to better identify and block sophisticated phishing emails that might evade traditional security measures.
Application: Use discriminators to classify emails as legitimate or phishing attempts based on content and sender characteristics.
Example: An email service provider employs a GAN discriminator to identify phishing emails by analyzing sender behavior and email content, reducing false positives.
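A toy version of the email-classification idea can be sketched with hand-crafted features and a logistic discriminator. Everything here is hypothetical: the urgency word list, the three features, and the four-email training set are invented for illustration; a production system would learn features from large corpora (and, per the GAN framing, from generated phishing variants).

```python
import math

URGENT = {"urgent", "immediately", "verify", "suspended", "password"}

def features(email):
    """Toy features: urgency-word count, plain-HTTP link flag, domain mismatch."""
    text = email["body"].lower()
    f1 = sum(text.count(word) for word in URGENT)
    f2 = 1.0 if "http://" in text else 0.0
    f3 = 1.0 if email["from_domain"] != email["claimed_domain"] else 0.0
    return [f1, f2, f3]

# Tiny hand-labelled training set (1 = phishing, 0 = legitimate).
train = [
    ({"body": "URGENT: verify your password immediately http://198.51.100.7",
      "from_domain": "mail.example.net", "claimed_domain": "bank.example.com"}, 1),
    ({"body": "Your account is suspended, verify now http://login.example.org",
      "from_domain": "example.org", "claimed_domain": "bank.example.com"}, 1),
    ({"body": "Agenda for Tuesday's meeting attached.",
      "from_domain": "bank.example.com", "claimed_domain": "bank.example.com"}, 0),
    ({"body": "Quarterly report draft, comments welcome.",
      "from_domain": "bank.example.com", "claimed_domain": "bank.example.com"}, 0),
]

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# Train a logistic-regression discriminator on the toy set.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(500):
    for email, label in train:
        x = features(email)
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - label
        w = [wi - 0.5 * err * xi for wi, xi in zip(w, x)]
        b -= 0.5 * err

def phishing_probability(email):
    x = features(email)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

Unseen emails that combine urgency language, plain-HTTP links, and a sender/claimed-domain mismatch score high, while routine internal mail scores low.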
Malware Detection: Malware detection using Generative Adversarial Networks (GANs) involves training a discriminator to classify files as benign or malicious based on their code patterns and behavioral characteristics. By generating synthetic malware samples, GANs can enhance the training of detection systems, allowing them to recognize and block new, unseen malware variants more effectively, thereby improving cybersecurity defenses against evolving threats.
Application: Train discriminators to distinguish between benign and malicious files based on code patterns and behavior.
Example: An antivirus software company uses a GAN discriminator to identify new malware variants by analyzing file signatures and behavioral patterns.
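The synthetic-sample angle above can be sketched as follows. This is a hypothetical illustration: the three static file features are invented, and the "generator" is replaced by simple random jitter of known malicious samples, standing in for a trained GAN that would learn realistic variant perturbations.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Hypothetical static features per file:
# [byte entropy (0-1), import count / 100, packed-section flag].
benign = rng.normal([0.45, 0.30, 0.0], 0.05, size=(200, 3))
known_malware = rng.normal([0.85, 0.10, 1.0], 0.05, size=(40, 3))

# Stand-in for a trained generator: jitter known samples to mimic unseen
# variants, enlarging the malicious side of the training set.
idx = rng.integers(0, len(known_malware), 200)
synthetic = known_malware[idx] + rng.normal(0.0, 0.1, size=(200, 3))

X = np.vstack([benign, known_malware, synthetic])
y = np.concatenate([np.zeros(len(benign)),
                    np.ones(len(known_malware) + len(synthetic))])

# Logistic-regression discriminator: 1 = malicious, 0 = benign.
w = np.zeros(3)
b = 0.0
for _ in range(3000):
    p = sigmoid(X @ w + b)
    w -= 0.3 * X.T @ (p - y) / len(y)
    b -= 0.3 * np.mean(p - y)

def malice_probability(x):
    return sigmoid(x @ w + b)

variant = np.array([0.90, 0.05, 0.80])  # unseen but variant-like file
clean = np.array([0.44, 0.32, 0.00])
print("variant:", round(malice_probability(variant), 3),
      "clean:", round(malice_probability(clean), 3))
```

Because the discriminator trained on the augmented set has seen many jittered neighbours of known malware, it still flags a variant whose exact feature vector never appeared in training.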
Network Intrusion Detection Systems (NIDS): Network intrusion detection systems (NIDS) can leverage Generative Adversarial Networks (GANs) to enhance threat detection by training a discriminator to differentiate between legitimate and malicious network traffic patterns. By generating synthetic traffic data that mimics real-world attacks, GANs can improve the training of NIDS, allowing them to better identify and block sophisticated intrusions that might evade traditional security measures.
Application: Implement discriminators in NIDS to identify and classify network traffic as legitimate or malicious.
Example: A cybersecurity firm integrates a GAN discriminator into its NIDS to detect advanced persistent threats (APTs) by recognizing unusual network activity patterns.
Examples of Advanced Cybersecurity Applications Using GANs
1. Google Anti-Scam: Google has introduced new AI-powered scam detection features for Android devices to combat increasingly sophisticated scams delivered through phone calls and text messages. These features leverage advanced AI models to detect suspicious patterns in real time, providing users with warnings during conversations that may be scams. For instance, Scam Detection in Google Messages uses on-device AI to identify and alert users about potential scams in SMS, MMS, and RCS messages, ensuring privacy by processing all data locally. This proactive approach helps protect users from conversational scams that often start innocently but escalate into harmful situations.
The scam detection system also extends to phone calls, where AI models analyze conversations in real time to identify potential scams. For example, if a caller attempts to deceive the user into providing payment via gift cards, the system alerts the user through audio and haptic notifications. This feature is particularly effective against spoofing techniques used by scammers to hide their real numbers and impersonate trusted companies. By integrating these AI-powered features, Google aims to enhance Android’s security capabilities, providing users with robust tools to stay ahead of evolving threats and maintain control over their financial information and data.[v]


2. Swedbank Fraud Detection. Swedbank has implemented a cutting-edge approach to financial fraud detection by leveraging Generative Adversarial Networks (GANs). This technology is used to model lawful financial transactions and identify anomalies that may indicate fraudulent activities. By training GANs on large datasets, Swedbank can simulate realistic transaction patterns, allowing the system to learn and recognize suspicious behavior more effectively. This approach enhances traditional rule-based systems by adapting to new fraud schemes as they emerge, providing a proactive defense against financial crimes.
The use of GANs in fraud detection at Swedbank involves a semi-supervised anomaly detection strategy. The model learns from historical transaction data to establish a baseline of normal behavior, and then flags transactions that deviate significantly from this baseline as potential fraud. This method is particularly effective in identifying complex fraud patterns that might evade traditional detection systems. By integrating GANs with Hopsworks and NVIDIA GPUs, Swedbank achieves efficient distributed training and processing of large datasets, ensuring that the system can handle the scale and complexity of financial transactions while maintaining high accuracy in fraud detection.[vi]
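Swedbank's actual pipeline (GANs on Hopsworks with NVIDIA GPUs) is not public in code form, so the sketch below replaces the learned model with a simple per-customer statistical baseline, purely to illustrate the flag-on-deviation logic described above. The customer histories, amounts, and threshold are invented.

```python
import statistics

# Hypothetical historical card transactions per customer (amounts in EUR).
history = {
    "cust-1": [23.5, 41.0, 18.9, 35.2, 27.8, 44.1, 30.3, 25.6],
    "cust-2": [410.0, 395.5, 450.2, 388.0, 432.9, 401.1, 420.7, 415.3],
}

def baseline(amounts):
    """Normal-behaviour baseline learned from history: (mean, stdev)."""
    return statistics.mean(amounts), statistics.stdev(amounts)

def flag(customer, amount, threshold=3.0):
    """Flag a transaction more than `threshold` stdevs from the baseline."""
    mu, sigma = baseline(history[customer])
    return abs(amount - mu) / sigma > threshold

print("cust-1 / 950.00:", flag("cust-1", 950.0))   # huge for this customer
print("cust-2 / 430.00:", flag("cust-2", 430.0))   # routine for this customer
```

The key property carries over to the GAN setting: "normal" is defined per entity from historical behaviour, so the same amount can be routine for one customer and a red flag for another; a learned discriminator simply replaces the z-score with a far richer model of normality.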
Conclusion
The reality is that the evolution of generative AI, especially as guided agents (agentic AI) become smarter at task selection and execution, means that the capability for more sophisticated attacks will increase and the threat to companies, institutions, governments, and individuals will become more imminent. The International AI Safety Report 2025[vii] underscores that AI capabilities continue to outpace regulatory frameworks, creating an urgent need for innovative solutions to mitigate these risks. Generative Adversarial Networks (GANs), with their discriminative components, offer a promising avenue for enhancing security systems. By leveraging discriminators to simulate realistic threat scenarios, cybersecurity teams can proactively strengthen their defenses against evolving threats. This proactive approach is crucial in an era where traditional security measures are often bypassed by sophisticated AI-generated attacks.
Discriminators in GANs can play a pivotal role in this effort by enhancing anomaly detection, phishing detection, and malware identification. By generating synthetic data that mimics real-world threats, these models can improve the training of detection systems, allowing them to recognize and respond to new attacks more effectively; and by leveraging the discriminator itself, cybersecurity systems can better distinguish between legitimate and malicious activities, improving their effectiveness against evolving threats.
As we move forward, the integration of GAN discriminators into cybersecurity frameworks will be pivotal in creating more robust and adaptive security systems. These systems can simulate realistic threat scenarios, allowing for enhanced training of detection models and more effective response strategies. The future of AI security will likely involve a significant shift towards leveraging discriminative AI for proactive defense, enabling organizations to stay ahead of emerging threats.
However, let us not forget that the world will still need a global AI governance framework to which all developers and designers of learning algorithms subscribe, akin to the International Atomic Energy Agency for nuclear energy; such a framework is essential for ensuring that AI technologies are developed and used responsibly. By embracing innovation in AI security and fostering international cooperation, we can build a safer digital environment capable of withstanding the increasingly sophisticated attacks enabled by AI itself.
[i] International AI Safety Report 2025, chaired by Prof. Yoshua Bengio, Université de Montréal / Mila – Quebec AI Institute, published January 2025.
[ii] “US and UK decline to sign AI declaration at Paris summit,” The Guardian, February 11, 2025. Available at: https://www.theguardian.com/technology/2025/feb/11/us-uk-paris-ai-summit-artificial-intelligence-declaration
[iii] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, “Generative Adversarial Networks,” Advances in Neural Information Processing Systems 27 (NIPS 2014)
[iv] Neptune.ai, “6 GAN Architectures You Really Should Know,” available at: https://neptune.ai/blog/6-gan-architectures
[v] Google Security Blog, “New AI-Powered Scam Detection Features to Help Protect You on Android,” March 2025. Available at: https://security.googleblog.com/2025/03/new-ai-powered-scam-detection-features.html
[vi] NVIDIA Developer Blog, “Detecting Financial Fraud Using GANs at Swedbank with Hopsworks and GPUs,” available at: https://developer.nvidia.com/blog/detecting-financial-fraud-using-gans-at-swedbank-with-hopsworks-and-gpus/
[vii] International AI Safety Report 2025, chaired by Prof. Yoshua Bengio, Université de Montréal / Mila – Quebec AI Institute, published January 2025.