Journal Article Details
International Journal on Informatics Visualization: JOIV
Impact of Data Balancing and Feature Selection on Machine Learning-based Network Intrusion Detection
article
Azhari Shouni Barkah1  Siti Rahayu Selamat2  Zaheera Zainal Abidin2  Rizki Wahyudi1 
[1] Universitas Amikom Purwokerto; [2] Universiti Teknikal Malaysia Melaka
Keywords: Intrusion Detection; Feature Selection; Imbalance; SMOTE; ADASYN
DOI: 10.30630/joiv.7.1.1041
Source: Politeknik Negeri Padang
【 Abstract 】

Unbalanced datasets are a common problem in supervised machine learning. Training on such data biases the model toward the majority classes, so the resulting model recognizes majority classes far more reliably than minority classes. Imbalanced data arises naturally in real life, for example in disease records and network traffic: DDoS attacks occur far more often than R2L attacks. Public Intrusion Detection System (IDS) datasets such as NSL-KDD and UNSW-NB15 show the same imbalance in their attack composition. Researchers have proposed many techniques to rebalance such data by duplicating the minority class or generating synthetic samples. The Synthetic Minority Oversampling Technique (SMOTE) and Adaptive Synthetic (ADASYN) algorithms construct synthetic samples for the minority classes. Meanwhile, machine learning algorithms capture the patterns of labeled data from the input features, but not all features contribute equally to the output (the predicted class or value); some are correlated or misleading. Therefore, the important features should be selected to produce a good model. In this research, we apply the recursive feature elimination (RFE) technique to select important features from the available dataset. According to the experiments, SMOTE provides a better synthetic dataset than ADASYN on the highly imbalanced UNSW-NB15 dataset. RFE feature selection slightly reduces the model's accuracy but improves the training speed, and the Decision Tree classifier consistently achieves a better recognition rate than Random Forest and KNN.
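
The workflow summarized above (oversample with SMOTE or ADASYN, select features with RFE, then compare Decision Tree, Random Forest, and KNN) can be sketched in Python with scikit-learn and imbalanced-learn. This is a minimal illustration only: the synthetic data from make_classification stands in for NSL-KDD / UNSW-NB15, and the number of retained features and all hyperparameters are assumptions, not values reported in the paper.

# Minimal sketch of the balance -> feature-selection -> classification pipeline.
# Assumption: synthetic imbalanced data replaces the real IDS datasets.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE, ADASYN

# Imbalanced stand-in data: ~95% "normal" traffic vs ~5% "attack" records.
X, y = make_classification(n_samples=5000, n_features=40, n_informative=10,
                           weights=[0.95, 0.05], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Oversample only the training split so the test set keeps its natural skew.
for name, sampler in [("SMOTE", SMOTE(random_state=42)),
                      ("ADASYN", ADASYN(random_state=42))]:
    X_bal, y_bal = sampler.fit_resample(X_train, y_train)

    # Recursive feature elimination keeps the most informative features
    # (20 here is an illustrative choice, not the paper's setting).
    rfe = RFE(estimator=DecisionTreeClassifier(random_state=42),
              n_features_to_select=20)
    X_bal_sel = rfe.fit_transform(X_bal, y_bal)
    X_test_sel = rfe.transform(X_test)

    # Compare the three classifiers evaluated in the study.
    for clf in (DecisionTreeClassifier(random_state=42),
                RandomForestClassifier(random_state=42),
                KNeighborsClassifier()):
        clf.fit(X_bal_sel, y_bal)
        y_pred = clf.predict(X_test_sel)
        print(name, type(clf).__name__)
        print(classification_report(y_test, y_pred, digits=3))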

【 License 】

Unknown   

【 Preview 】
Attachment list
File                       Size     Format
RO202307110004906ZK.pdf    3591 KB  PDF