Abstract
Adversarial attacks can expose system vulnerabilities and thereby improve system robustness. However, an adversarial attack usually requires the parameter information of the target system, which limits the conditions under which an attack can be mounted. To address this, a black-box targeted adversarial attack method is proposed that incorporates a new quantum particle swarm optimization algorithm. The method adds small noise to the original example to construct a differentiated particle swarm as the initial adversarial example population; a neighborhood redistribution strategy based on memory search yields the global optimal particle of the current population, from which the initial adversarial example is generated; dimension expansion and adaptive-weight position updating are incorporated to move the population closer to the target; and, according to the edit distance between the adversarial example and the target phrase, the initial adversarial example is further optimized to produce the final adversarial example. To verify the attack effect, experiments are conducted on the speech recognition model DeepSpeech with the Google Speech, LibriSpeech and Common Voice datasets, with the target phrases set to common voice commands in various scenarios. Experimental results show that the success rate of the proposed method exceeds that of the compared methods on all three datasets, and on the Common Voice dataset it is 10 percentage points higher. In addition, volunteers were recruited to subjectively evaluate the noise intensity of the generated adversarial examples; 82.4% of the examples were judged to contain no noise or only slight noise.
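The abstract gives only an outline, not implementation details. As a rough illustration of the loop it describes, a minimal quantum-behaved PSO sketch with an edit-distance fitness might look as follows; the `transcribe` callback, population size, perturbation bound `eps`, and the linearly decreasing contraction-expansion schedule are all assumptions rather than the paper's actual settings, and the memory-search redistribution and dimension-expansion steps are omitted:

```python
import math
import random

def edit_distance(a, b):
    """Levenshtein distance between two strings (dynamic programming)."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        prev = cur
    return prev[n]

def init_population(waveform, pop_size=8, eps=0.005):
    """Differentiated swarm: copies of the original plus small uniform noise."""
    return [[x + random.uniform(-eps, eps) for x in waveform]
            for _ in range(pop_size)]

def qpso_attack(waveform, target, transcribe, pop_size=8, iters=50, eps=0.005):
    """Quantum-behaved PSO minimizing the edit distance of the
    transcription to the target phrase. `transcribe` maps a sample
    to its recognized text (e.g. a DeepSpeech wrapper)."""
    swarm = init_population(waveform, pop_size, eps)
    pbest = [p[:] for p in swarm]
    pfit = [edit_distance(transcribe(p), target) for p in swarm]
    g = min(range(pop_size), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    dim = len(waveform)
    for t in range(iters):
        if gfit == 0:               # transcription already matches the target
            break
        # contraction-expansion coefficient, linearly decreasing (assumed schedule)
        alpha = 1.0 - 0.5 * t / iters
        # mean-best position over all personal bests
        mbest = [sum(p[d] for p in pbest) / pop_size for d in range(dim)]
        for i, x in enumerate(swarm):
            for d in range(dim):
                phi = random.random()
                u = 1.0 - random.random()   # in (0, 1], keeps log finite
                attractor = phi * pbest[i][d] + (1 - phi) * gbest[d]
                step = alpha * abs(mbest[d] - x[d]) * math.log(1.0 / u)
                x[d] = attractor + step if random.random() < 0.5 else attractor - step
            fit = edit_distance(transcribe(x), target)
            if fit < pfit[i]:               # update personal best
                pbest[i], pfit[i] = x[:], fit
                if fit < gfit:              # update global best
                    gbest, gfit = x[:], fit
    return gbest, gfit
```

The global best fitness is monotone non-increasing, so the returned sample is never worse than the best member of the initial noisy population; a real attack would additionally bound the perturbation to keep the noise imperceptible.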
Authors
YU Zhenhua (于振华); SU Yufan (苏玉璠); YE Ou (叶鸥); CONG Xuya (丛旭亚)
College of Computer Science & Technology, Xi'an University of Science & Technology, Xi'an 710054, China
Source
Journal of Frontiers of Computer Science and Technology (《计算机科学与探索》), a Peking University core journal
2025, No. 1, pp. 253-263 (11 pages)
Funding
National Natural Science Foundation of China (62273272, 62303375).
Keywords
adversarial attack
speech recognition
black-box attack
example generation
quantum particle swarm optimization algorithm
gradient evaluation method
About the Authors
YU Zhenhua (1977-), male, born in Rushan, Shandong; Ph.D., professor, doctoral supervisor; main research interest: security of artificial intelligence systems. SU Yufan (2000-), female, born in Xi'an, Shaanxi; M.S. candidate; main research interest: adversarial attacks on speech. Corresponding author: YE Ou (1984-), male, born in Xi'an, Shaanxi; Ph.D., lecturer; main research interests: data cleaning, video retrieval, and image processing. E-mail: oye0928@xust.edu.cn. CONG Xuya (1992-), male, born in Xi'an, Shaanxi; Ph.D., lecturer; main research interests: safety analysis and control of discrete event systems, and Petri net theory and applications.