Proposed IDS flow diagram.
- Download the DNP3 Intrusion Detection Dataset from Zenodo.
- Unzip it and copy all the CSV files related to CICFlowMeter into a single folder. These files will be the main data files.
- Read all the CSV files and combine them into a single CSV file. Relevant script: data_preparation/combine_csv.py.
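The combine step can be sketched roughly as below. This is an illustrative sketch, not the actual interface of data_preparation/combine_csv.py; the function name and paths are assumptions. It assumes all CICFlowMeter exports share the same header row, writes that header once, and appends the data rows from every file.

```python
import csv
import glob
import os


def combine_csv(input_dir: str, output_path: str) -> int:
    """Concatenate all CSV files in input_dir into one CSV file.

    Assumes every input file shares the same header row (as
    CICFlowMeter exports do); the header is written once.
    Returns the number of data rows written.
    """
    rows_written = 0
    header = None
    with open(output_path, "w", newline="") as out:
        writer = csv.writer(out)
        for path in sorted(glob.glob(os.path.join(input_dir, "*.csv"))):
            # Skip the output file itself if it lives in input_dir.
            if os.path.abspath(path) == os.path.abspath(output_path):
                continue
            with open(path, newline="") as f:
                reader = csv.reader(f)
                file_header = next(reader)
                if header is None:
                    header = file_header
                    writer.writerow(header)
                for row in reader:
                    writer.writerow(row)
                    rows_written += 1
    return rows_written
```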
- Install this project and all its requirements with `pip install -e .`.
Steps for creating session arrays (images) from packets:
- Read the CSV file for 120s Timeout and the corresponding PCAP files.
- For each CSV:
- Read each row.
- Find the matching packets in the PCAP file.
- Call the matched packets a "session" and assign the row's label to it.
- Convert session to image.
- Relevant script: data_preparation/dnp3_pcap_to_img.py. It needs mapping files between the CSV and PCAP files, which are inside assets.
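A common way to turn a session's packets into a fixed-size grayscale image is to concatenate the raw packet bytes, pad or truncate to a fixed length, and reshape into a square array. The sketch below illustrates that idea; the actual layout and image size used by dnp3_pcap_to_img.py may differ.

```python
import numpy as np


def session_to_image(packets: list, side: int = 32) -> np.ndarray:
    """Convert a list of raw packet byte strings into a (side, side)
    uint8 grayscale image: concatenate, truncate to side*side bytes,
    and zero-pad the remainder."""
    data = b"".join(packets)[: side * side]
    buf = np.frombuffer(data, dtype=np.uint8)
    img = np.zeros(side * side, dtype=np.uint8)
    img[: buf.size] = buf
    return img.reshape(side, side)
```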
- Requires PyTorch 2.5.0 with GPU support.
- Since MLflow is used for logging the parameters, `mlflow server` should be run before training a model. On the HPC, this is disabled.
- Dataset for session image data: advisg/data/session_image_dataset.py.
- Trainer: advisg/models/trainer.py. A single trainer trains all models; it is used by the modules in /trainers/.
- trainers/session_image_trainer_backbone.py trains the MobileNet- or ResNet-based attack classifiers on session images. Slurm file: jobs/mobilenet_trainer.slurm.
- trainers/session_adv_trainer.py trains the MobileNet or ResNet-based Adversarially Trained Classifier (ATC) based on session images.
- Slurm files: jobs/adversarial_training.slurm and jobs/adversarial_training_normalized.slurm.
- Requires the adversarial data to be generated first.
- trainers/session_ae_trainer.py trains the adversarial blocking models: U-Net and RDU-Net.
- Slurm files: jobs/unet_trainer.slurm and jobs/unet_trainer_normalized.slurm (U-Net); jobs/rdunet_trainer.slurm and jobs/rdunet_trainer_normalized.slurm (RDU-Net).
- Requires the adversarial data to be generated first.
- adversarial/generate_adversarial_image.py generates the adversarial data using the session images and trained models.
- Arguments can be passed. Slurm file: jobs/adversarial_generator_mobilenet.slurm.
All related files are inside adversarial.
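For reference, the core of one of the evaluated attacks (FGSM) is a single signed-gradient perturbation of the input. Below is a minimal NumPy sketch on a toy logistic classifier; the actual script attacks the trained PyTorch models, so the model, gradient computation, and value range here are illustrative assumptions.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def fgsm(x, y, w, b, eps):
    """One FGSM step against a logistic model p = sigmoid(w.x + b).

    For binary cross-entropy loss, the gradient with respect to the
    input x is (p - y) * w; FGSM perturbs x in the direction of the
    gradient's sign, scaled by eps, then clips back to [0, 1].
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
```

BIM and MIM extend this same step: BIM iterates it with a small step size, and MIM additionally accumulates a momentum term over the gradients.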
- The notebook notebooks/image_feature_importance.ipynb generates saliency maps.
- adversarial/evaluate_from_generated_mobnet.py evaluates the adversarial image samples generated in the previous step.
- For benchmarking the image-based IDS models, use adversarial/benchmark.py.
These evaluation files create result CSV files (and sample images).
- Use notebooks/report_generation_mobnetonly.ipynb to generate reports for the image-based IDS.
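The adversarial detection reported in the paper uses MAE thresholding on autoencoder reconstructions: an input whose reconstruction error exceeds a threshold calibrated on clean data is flagged as adversarial. A hedged NumPy sketch of that decision rule (the threshold value and function name are illustrative, not taken from the repository):

```python
import numpy as np


def is_adversarial(image: np.ndarray, reconstruction: np.ndarray,
                   threshold: float = 0.05) -> bool:
    """Flag an input as adversarial when the mean absolute error
    between the image and its autoencoder (U-Net/RDU-Net style)
    reconstruction exceeds a threshold calibrated on clean data."""
    mae = np.mean(np.abs(image.astype(np.float32) -
                         reconstruction.astype(np.float32)))
    return bool(mae > threshold)
```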
The authors gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU). The hardware is funded by the German Research Foundation (DFG).
@Article{Acharya2026,
author={Acharya, Ramkrishna
and Al Sardy, Loui
and Muhammad, Mamdouh
and German, Reinhard},
title={ADVIS-G: An Adversarially Defended Intrusion Detection System for Smart Grids Using Deep Learning},
journal={KI - K{\"u}nstliche Intelligenz},
year={2026},
month={Apr},
day={10},
abstract={Smart Grids (SG) enhance efficiency and centralised control by enabling networked device communication, but these capabilities expose them to cyberattacks. Machine Learning (ML) and Deep Learning (DL) based Intrusion Detection Systems (IDS) have been employed to detect these threats. Yet, their adoption introduces new adversarial risks: specifically, attacks designed to fool IDS into misclassifying malicious activity as benign. In this study, we propose ADVIS-G, a novel, adversarially defended IDS framework for smart grids utilising deep learning. Our approach begins by training a high-accuracy (macro F1 96+{\%}) classifier on session images from a DNP3-related dataset. We then assess vulnerability to adversarial examples generated using Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), and Momentum Iterative Method (MIM) under varying perturbation rates. To counter such attacks, we introduce an adversarial blocking model based on autoencoder architectures that reconstruct input images, effectively removing adversarial perturbations. Experimental evaluation shows that under MIM, while the baseline model's macro F1 drops to $\sim$0.5 (at $\epsilon$=0.1), adversarial training improves robustness to 0.7. Our proposed autoencoder-based blocking further increases the F1-score to $\sim$0.92 with RDU-Net, and $\sim$0.9 with U-Net. But the U-Net performed comparatively better under heavier attacks and normal images. Moreover, combining adversarial training with autoencoder defence achieves the highest resilience under stronger attacks. Additionally, MAE thresholding on reconstructions enables adversarial detection with an Area Under Curve (AUC) of 0.914 using RDU-Net and of 0.865 using U-Net.
These results suggest that ADVIS-G significantly enhances IDS robustness against adversarial attacks, offering a promising direction for future smart grid security research.},
issn={1610-1987},
doi={10.1007/s13218-026-00905-3},
url={https://doi.org/10.1007/s13218-026-00905-3}
}

