Abstract
Based on the previous quantum probability neural network (QPrNN), an improved quantum-implementable neural network, namely the Quantum Parallel Neural Network (QPNN) model, is proposed in this paper. QPNN is a quantum feed-forward neural network composed of a new type of quantum neurons, or qurons, and their connections. If the input quron state x' satisfies x'·ω' > 0, then the output quron is activated with a probability larger than 0.5, and rests otherwise; in this sense, qurons are similar to classical neurons with a sigmoid activation function. Taking advantage of quantum parallelism, QPNN can trace all possible network states to obtain the final output. Moreover, one of the most interesting features of QPNN is that several basic networks with different parameters, or even different structures, can be combined at the same time to improve the result: only n qubits are needed to perform quantum multiplexer gates that create 2^n separable networks. Therefore, QPNN has unique advantages over classical feed-forward neural networks. Compared with the previous QPrNN, direct links between each layer and the input layer are added to enhance the nonlinearity of QPNN, so the structure can be extended to deep networks. Due to its unique quantum nature, the model is robust to several kinds of quantum noise under certain conditions, such as the phase-flip and bit-flip channels, which can be efficiently implemented on universal quantum computers. Another advantage is that QPNN can be used as a memory: it can retrieve the most relevant stored data and, acting as a generative model, even produce new data. During the learning phase of QPNN, the most expensive part is the summation over all possible states of the hidden-layer qurons. To focus on the states with relatively large probabilities, classical sampling methods are used to sample the layer. In the experiments, this strategy trades off learning speed against accuracy: for a hidden layer with m qurons, only 2^(m-3) sampled states need to be evaluated. Alternatively, on a real quantum computer (supposing one exists), this can be done by measuring the hidden layer repeatedly to obtain a set of the most likely layer states. Note that the classical methods can sample the layer efficiently only for the second layer, because its states are tensor-product states; for deeper network structures this strategy does not work well, and only quantum computers perform efficiently. To verify the performance of QPNN, we apply it to two real-life classification tasks, i.e., MNIST handwritten digit recognition and Cifar-10 classification. In both experiments, Matlab simulation results show that QPNN needs only about 3% of the neuron resources to outperform the corresponding fully connected feed-forward neural network. Compared with the previous QPrNN, the test accuracies on MNIST and Cifar-10 are improved by 0.2% and 3%, respectively. In addition to the resource savings, QPNN can also serve as a memory to retrieve the most relevant data, where the successful retrieval probability on MNIST is improved by 2% on average over QPrNN.
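To make two of these mechanisms concrete, the following short Python sketch classically mimics the sigmoid-like quron activation, whose probability exceeds 0.5 exactly when x'·ω' > 0, and the truncation of a hidden layer with m qurons to its 2^(m-3) most probable basis states. It is a minimal illustration under our own assumptions, not the authors' Matlab implementation; all function names and toy data are hypothetical.

import numpy as np

def quron_activation_prob(x, w):
    # Activation probability of the output quron: exceeds 0.5
    # exactly when the inner product x . w is positive, mirroring
    # the sigmoid-like behaviour described in the abstract.
    return 1.0 / (1.0 + np.exp(-np.dot(x, w)))

def top_hidden_states(probs, n_keep):
    # For m independent hidden qurons, the 2^m basis-state
    # probabilities are products of the per-quron activation
    # probabilities (tensor-product states). Enumerate them
    # (feasible for small m) and keep the n_keep most likely.
    m = len(probs)
    states = []
    for i in range(2 ** m):
        bits = [(i >> k) & 1 for k in range(m)]
        p = 1.0
        for b, q in zip(bits, probs):
            p *= q if b else 1.0 - q
        states.append((tuple(bits), p))
    states.sort(key=lambda sp: -sp[1])
    return states[:n_keep]

rng = np.random.default_rng(0)
x = np.array([1.0, -0.5, 0.25])   # toy input quron values
W = rng.normal(size=(6, 3))       # toy weights for m = 6 hidden qurons

probs = np.array([quron_activation_prob(x, w) for w in W])
kept = top_hidden_states(probs, n_keep=2 ** (6 - 3))  # 2^(m-3) states
print(kept[0])                    # most probable hidden-layer state

On a real quantum device, as the abstract notes, the same truncation would instead be obtained by measuring the hidden layer repeatedly and keeping the most frequently observed states.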
Authors
CHEN Jia-Lin; WANG Ling-Li (School of Microelectronics, Fudan University, Shanghai 200433)
Source
Chinese Journal of Computers (《计算机学报》)
Indexed in EI, CSCD, Peking University Core (北大核心)
2019, No. 6, pp. 1205-1217 (13 pages)
Keywords
Quantum neuron (Quron)
Quantum Parallel Neural Network (QPNN)
quantum-implementable
fault tolerance
quantum memory
About the Authors
CHEN Jia-Lin, male, born in 1983, Ph.D., assistant research fellow. His main research interests include quantum computing, quantum physical architecture, and machine learning. E-mail: jl_chen@fudan.edu.cn. Corresponding author: WANG Ling-Li, male, born in 1971, Ph.D., professor. His main research interests include logic synthesis, reconfigurable computing, quantum computing, and machine learning. E-mail: llwang@fudan.edu.cn.