
Interpretability of deep learning-based malicious code detection


【Author】 Chen Jun; Zhan Dazhi; Xia Shiming; Jiang Kaolin; Pan Zhisong; Guo Shize (Command and Control Engineering College, Army Engineering University of PLA)

【Corresponding Author】 Pan Zhisong

【Affiliation】 Command and Control Engineering College, Army Engineering University of PLA

【Abstract】 To address the problem that deep learning models cannot explain whether they have extracted key features, this paper investigates the interpretability of deep learning-based malicious code detection at two levels: before and after modeling. Before modeling, visual analysis of binary files shows that the image-modal features of malicious code exhibit clear intra-class similarity and inter-class difference, which verifies the feasibility of using deep learning models for malicious code detection on image-modal features. After modeling, a visual interpretability method for malicious code detection based on gradient-weighted class activation mapping (Grad-CAM) is proposed, which demonstrates, both through visual intuition and through statistical analysis, that deep learning-based malicious code detection is interpretable.
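The "visual analysis of binary files" before modeling is typically done by reading each byte of the executable as one 8-bit grayscale pixel and reshaping the byte stream into a fixed-width image. The paper does not give implementation details, so the sketch below is an assumption of the common approach; the row width of 256 and the function name `binary_to_grayscale` are illustrative choices, not the authors' code.

```python
import numpy as np

def binary_to_grayscale(data: bytes, width: int = 256) -> np.ndarray:
    """Interpret raw bytes as an 8-bit grayscale image, one byte per pixel.

    Trailing bytes that do not fill a complete row are dropped so the
    result is a rectangular (rows, width) uint8 array.
    """
    buf = np.frombuffer(data, dtype=np.uint8)
    rows = len(buf) // width
    return buf[: rows * width].reshape(rows, width)

# Toy example: 1024 pseudo-random bytes become a 4 x 256 grayscale image.
img = binary_to_grayscale(np.random.default_rng(0).bytes(1024))
```

Images produced this way can be fed to an ordinary image-classification CNN; the intra-class similarity the abstract mentions appears as visually similar byte-texture patterns among samples of the same malware family.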
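The Grad-CAM method the abstract builds on weights each channel of a convolutional feature map by the average gradient of the target class score with respect to that channel, then keeps the positive part of the weighted sum as a heat map. A minimal NumPy sketch of that weighting step follows; it assumes the activations and gradients have already been extracted from the detection CNN (e.g. via framework hooks), which the paper does not detail.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heat map for one sample.

    activations: conv-layer feature maps A, shape (K, H, W)
    gradients:   d(class score)/dA, same shape (K, H, W)
    returns:     heat map in [0, 1], shape (H, W)
    """
    alpha = gradients.mean(axis=(1, 2))             # (K,) channel importance weights
    cam = np.tensordot(alpha, activations, axes=1)  # (H, W) weighted channel sum
    cam = np.maximum(cam, 0)                        # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam /= cam.max()                            # normalize for visualization
    return cam

# Toy check: one channel carries all the positive evidence.
acts = np.zeros((2, 4, 4)); acts[0, 1, 1] = 5.0
grads = np.zeros((2, 4, 4)); grads[0] = 1.0
heat = grad_cam(acts, grads)
```

Upsampled to the input resolution and overlaid on the malware image, the heat map highlights which byte regions drove the classification, which is the basis for the visual and statistical interpretability analysis described in the abstract.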

【Fund】 National Key R&D Program of China (2017YFB0802800)
  • 【Source】 Journal of Nanjing University of Science and Technology, No. 03, 2023
  • 【CLC Number】 TP309; TP311.5; TP18
  • 【Downloads】 291