A Review of Quantum Machine Learning Algorithms Based on Variational Quantum Circuit
-
摘要:
随着数据规模的增加,机器学习的重要性与影响力随之增大. 借助量子力学的原理能够实现量子计算,结合量子计算和机器学习形成的量子机器学习算法理论上能够相对经典机器学习算法产生指数级的加速优势. 部分经典算法的量子版本已经被提出,有望解决使用经典计算机难以解决的问题. 当前受量子计算硬件所限,可操控的量子比特数目和噪声等因素制约着量子计算机的发展. 短期内量子计算硬件难以达到通用量子计算机需要的程度,当前研究重点是获得能够在中等规模含噪声量子(noisy intermediate-scale quantum,NISQ)计算设备上运行的算法. 变分量子算法是一种混合量子-经典算法,适合应用于当前量子计算设备,是量子机器学习领域的研究热点之一. 变分量子电路是一种参数化量子电路,变分量子算法利用其完成量子机器学习任务. 变分量子电路也被称为拟设或量子神经网络. 变分量子算法框架主要由5个步骤组成:1)根据任务设计损失函数和量子电路结构;2)将经典数据预处理后编码到量子态上,量子数据可以省略编码;3)计算损失函数;4)测量和后处理;5)优化器优化参数. 在此背景下,本文综述了量子计算基础理论与变分量子算法的基础框架,详细介绍了变分量子算法在量子机器学习领域的应用及进展,分别对量子有监督学习、量子无监督学习、量子半监督学习、量子强化学习以及量子电路结构搜索相关模型进行了介绍与对比,对相关数据集及相关模拟平台进行了简要介绍和汇总,最后提出了基于变分量子电路的量子机器学习算法所面临的挑战及今后的研究趋势.
Abstract: As the scale of available data increases, the importance and impact of machine learning grow. Quantum computing can be realized with the help of the principles of quantum mechanics, and quantum machine learning algorithms, formed by combining quantum computing and machine learning, can theoretically achieve exponential speedups over classical machine learning algorithms. Quantum versions of many classical algorithms have been proposed, and they may solve problems that are difficult for classical computers. At present, the development of quantum computers is restricted by hardware factors such as the number of controllable qubits and noise. Quantum computing hardware is unlikely to reach the level required for universal quantum computers in the short term, so current research focuses on algorithms that can run on noisy intermediate-scale quantum (NISQ) devices. Variational quantum algorithms (VQAs) are hybrid quantum-classical algorithms suitable for current quantum computing devices, and related research is one of the hotspots in the field of quantum machine learning. Variational quantum circuits (VQCs) are parameterized quantum circuits (PQCs) used in variational quantum algorithms to solve quantum machine learning tasks; they are also called ansatzes or quantum neural networks (QNNs). The framework of a variational quantum algorithm mainly consists of five steps: 1) Designing the loss function according to the task, designing a parameterized quantum circuit as the model, and initializing its parameters. 2) Embedding the data: classical data is pre-processed and encoded into a quantum state, whereas quantum data only needs pre-processing, without encoding. 3) Calculating the loss function through the parameterized quantum circuit; this step is where the quantum advantage comes in. 4) Measuring and post-processing: through the quantum measurement operation, the quantum superposition state collapses into a classical state, and classical data can be obtained after post-processing. 5) Optimizing the parameters: the parameters are updated with classical optimization algorithms, returning to step 3 until the loss function converges after several iterations, which yields a set of optimal parameters; the final result is the output of the optimal model. This paper reviews the basic theory of quantum computing and the basic framework of variational quantum algorithms, and further introduces the applications and progress of variational quantum algorithms in the field of quantum machine learning. It reviews in detail supervised quantum machine learning including quantum classifiers; unsupervised quantum machine learning including the quantum circuit Born machine, the variational quantum Boltzmann machine, and the quantum autoencoder; semi-supervised quantum learning including the quantum generative adversarial network; quantum reinforcement learning; and quantum circuit architecture search. Next, this paper compares the models, analyzes their advantages and disadvantages, and briefly summarizes the related datasets and simulation platforms that can reproduce the introduced models. Finally, it puts forward the challenges and future research trends of quantum machine learning algorithms based on variational quantum circuits.
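As a sketch of the five-step loop above, the following minimal NumPy example (our own illustration under simplifying assumptions, not taken from any specific library) trains a single-qubit "circuit" RY(θ)|0⟩: the loss is the observable expectation ⟨Z⟩ = cos θ, the gradient is obtained with the parameter-shift rule, and a plain gradient-descent optimizer iterates steps 3–5 until convergence.

```python
import numpy as np

# Steps 1-2: a one-parameter "circuit" RY(theta)|0> = [cos(theta/2), sin(theta/2)]^T,
# with the loss chosen as the expectation value <Z> = cos(theta).
def expval_z(theta):
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    z = np.diag([1.0, -1.0])
    return state @ z @ state           # steps 3-4: evaluate/"measure" the observable

def parameter_shift_grad(theta, s=np.pi / 2):
    # Parameter-shift rule: d<Z>/dtheta = (<Z>(theta+s) - <Z>(theta-s)) / 2
    return (expval_z(theta + s) - expval_z(theta - s)) / 2

theta, lr = 0.1, 0.4
for _ in range(100):                   # step 5: classical optimizer updates parameters
    theta -= lr * parameter_shift_grad(theta)

print(round(expval_z(theta), 4))       # converges to the minimum of <Z>, i.e. -1.0
```

Here the classical simulator stands in for quantum hardware; on a real NISQ device, `expval_z` would be estimated from repeated measurements.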
-
随着人类日益增长的能源需求和不可再生资源的枯竭,核聚变能源由于其清洁性和安全性作为解决长期能源需求的解决方案,越来越受到人类社会的关注,目前正在建设中的国际热核实验反应堆(international thermonuclear experimental reactor,ITER)是实现核聚变能和平应用的重要里程碑. 磁约束核聚变是产生热核聚变能的最重要方法之一[1-2]. 在反应堆中实现和维持等离子体聚变过程具有巨大的科学和技术挑战,其中针对等离子体稳定性的研究有助于理解、预测、控制和减轻等离子体破坏的威胁,是优化燃烧等离子体运行模式,改善等离子体约束和输运的重要保障,是设计和制造先进的核聚变装置的重要依据.
数值模拟是等离子体稳定性研究中的关键方法之一,相比理论研究,它能够分析复杂的物理过程,而相比实验研究,它更加经济和灵活. 在等离子体物理数值模拟研究中,回旋动理学理论经常被用来研究在拉莫尔半径空间尺度下的动理学不稳定性和湍流传输[3-5]. 在回旋动理学理论中,通过回旋平均方法将描述分布函数的方程维度从6维降低到5维,使得其特别适用于研究更长时间尺度下的等离子体不稳定性和湍流传输物理过程.
粒子网格法(particle in cell,PIC)由于其良好的可扩展性、物理守恒性、波粒相互作用描述准确性等优势,在众多回旋动理学模拟算法中具有广泛适用度和应用前景[6-8]. 基于PIC算法的突出特点,科研学者在解决特定时空尺度物理问题的同时,逐步向多时空尺度耦合的非线性复杂物理模拟演进. 其对磁约束核聚变高性能数值模拟中涉及的程序架构、计算性能、算法优化、并行效率都提出了前所未有的挑战. 许多科研学者尝试借助异构平台的计算性能满足回旋动理学PIC代码日益增长的算力需求,在移植优化和数值算法上作出了诸多努力.
GTC代码是早期受益于异构并行计算的代码之一:基于CUDA在天河一号上展示了2~3倍的加速[9];基于OpenACC在Titan上展示了2~3倍的加速,在Summit上展示了3~4倍的加速[10];基于Intel Xeon Phi加速器在天河二号上展示了2~5倍的加速[11]. ORB5代码基于OpenACC,在配备Tesla P100 GPU和Tesla V100 GPU的Summit上分别获得了4倍和5倍的加速[12].
在上述研究中,通常着重考虑了等离子体中电子对模型的贡献,针对电子的模拟,凭借规则访存等优势可以获得较高的计算性能加速. 而聚变产物Alpha粒子与动理学离子类似,回旋半径较大,必须在回旋运动轨迹上进行回旋平均,从而带来大量非规则的网格数据访存,对访存性能提出了很高的要求. 文献[13]显示,在只有动理学离子和绝热电子的情况下,异构移植反而给整体性能带来了负面影响. 考虑到聚变产物Alpha粒子的约束和输运是磁约束聚变能否成功的关键,本文重点聚焦于以Alpha粒子为代表的回旋动理学代码的异构移植和性能优化.
1. 实验平台:天河新一代超算系统
本文的移植优化及分析测试在天河新一代超级计算机上进行. 天河新一代超级计算机使用异构处理器MT-3000[14],它包含16个CPU、4个加速集群(簇)、96个控制核心和1 536个加速核心,理论计算密度高达145 FLOPB. 每个加速核心以超长指令字(very long instruction word,VLIW)方式工作,每16个加速核心和1个控制核心被组织成1个加速阵列,以SIMD指令控制. MT-3000具有混合的存储器层次结构,包括每个集群的GSM(6MB),HBSM(48MB),DDR(32GB)存储器,每个加速阵列的AM(768KB)和SM(64KB)片上存储器为加速核供给数据. 其架构如图1所示.
在异构处理器MT-3000上移植程序时有2个挑战:一方面,如何高效使用复杂的内存结构将数据传递到加速阵列;另一方面,如何充分发挥高计算密度特性. 这2方面的挑战需要在程序移植优化时打破传统基于CPU的程序设计结构,更多地强调计算性能的作用,从而实现整体性能的提高.
2. VirtEx代码热点分析及异构开发
VirtEx是基于PIC算法开发的回旋动理学模拟代码,已成功用于分析线性电阻撕裂不稳定性[15]. 代码按照PIC方法,将带电粒子以拉格朗日法描述,对应在连续相空间的分布函数采样点;而场信息以欧拉法描述,采用结构化网格描述平衡场,采用非结构化网格描述扰动场[16]. VirtEx代码的并行化策略是通过在环形方向上将模拟区域划分为不同的子域实现空间并行化,每个子域由1组进程管理. 该组中的每个进程拥有子区域内的场信息副本,并在该子域内将粒子按照进程编号进行并行划分.
VirtEx代码的主要结构如图2所示,其主循环使用2阶龙格-库塔算法. 在每个循环中,通过函数Push更新粒子在相空间的位置,其可以更细致地分为粒子对场信息的回旋平均函数PG(push gather)和粒子位置更新函数PI(push interpolation);通过函数Locate计算粒子位置和扰动场网格之间插值的权重系数;通过函数Charge计算在非结构化扰动网格上的分布函数矩. 其他热点部分主要是对非结构化网格上的扰动场更新和粒子MPI通信等操作. 其中Push,Locate,Charge这3个函数为代码的热点,共占主循环时间的85%以上.
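下面给出一个极简的Python示意(并非VirtEx源码):用一维简谐运动 dz/dt = v, dv/dt = −z 代替真实的Push/Locate/Charge物理,仅演示主循环中2阶龙格-库塔"保存历史坐标zpart0、先推半步、再以半步导数推整步"的结构,其中derivative等名称均为示意性假设.

```python
import numpy as np

# 主循环结构示意: 2阶龙格-库塔(中点法)推进
def derivative(state):
    z, v = state
    return np.array([v, -z])   # 简谐运动: dz/dt = v, dv/dt = -z

state = np.array([1.0, 0.0])   # zpart: 相空间坐标
dt = 0.01
for _ in range(int(2 * np.pi / dt)):              # 推进约一个周期
    state0 = state.copy()                         # zpart0: 历史相空间坐标
    half = state0 + 0.5 * dt * derivative(state0) # 第1步: 推进到半步
    state = state0 + dt * derivative(half)        # 第2步: 以半步导数推进整步

print(np.round(state, 2))      # 约一个周期后回到初始相空间位置附近
```

真实代码中每个半步/整步内部还会依次调用Locate(插值系数)、PG(回旋平均)、PI(推动粒子)与Charge(电荷规约).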
3个热点函数中涉及的算法如下所示:
算法1. 函数PushGather回旋平均算法.
输入:环向格点权重wzpart, 径向格点权重wppart, 极向格点权重wtpart, 格点编号jtpart, 扰动电场gradphi;
输出:回旋平均扰动场wpgc.
for (mp=0; mp<mpmax; mp++)/*粒子循环*/
for(igyro=0; igyro<ngyro; igyro++) /*回旋平均循环*/
读取粒子所在的格点权重及索引;
以索引读取gradphi;
计算临时变量e;
end for
累加计算wpgc,供函数PI使用;
end for
算法2. 函数PushInterpolation粒子位置更新算法.
输入:相空间坐标zpart, 历史相空间坐标zpart0,回旋平均扰动场wpgc;
输出:相空间坐标zpart.
for (mp=0; mp<mpmax; mp++)/*粒子循环*/
读取粒子信息 zpart ,wpgc;
插值获取网格信息、电场、磁场等;
计算场对粒子的作用;
推动粒子更新速度位置信息;
end for
算法3. 函数Locate粒子到场的插值权重系数算法.
输入:相空间坐标zpart;
输出:环向格点权重wzpart, 径向格点权重wppart, 极向格点权重wtpart, 格点编号jtpart.
for (mp=0; mp<mpmax; mp++)/*粒子循环*/
for(igyro=0; igyro<ngyro; igyro++) /*回旋平均循环*/
读取粒子信息zpart;
读取网格信息;
计算粒子插值权重;
end for
end for
算法4. 函数Charge非结构化扰动网格上的分布函数矩算法.
输入:环向格点权重wzpart, 径向格点权重wppart, 极向格点权重wtpart, 格点编号jtpart;
输出:电流密度density.
for (mp=0; mp<mpmax; mp++)/*粒子循环*/
插值获取网格信息、电场、磁场;
for(igyro=0; igyro<ngyro; igyro++) /*回旋平均循环*/
读取粒子插值权重;
计算粒子对于周围格点的扰动量;
粒子信息向网格上规约到density;
end for
end for
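上述算法中非规则访存的两种典型模式——回旋平均的按索引读(gather,算法1)与分布函数矩的按索引规约写(scatter,算法4)——可用如下NumPy示意代码说明(数组规模与取值均为示意性假设,并非VirtEx的实际实现):

```python
import numpy as np

rng = np.random.default_rng(0)
mpmax, ngyro, ngrid = 10_000, 4, 64      # 粒子数、回旋平均点数、格点数(示意取值)

jtpart = rng.integers(0, ngrid, (mpmax, ngyro))  # 每个回旋点所在格点编号
wtpart = rng.random((mpmax, ngyro))              # 对应的插值权重
gradphi = rng.random(ngrid)                      # 扰动电场(网格量)

# 算法1 PushGather: 按索引读取网格场并做回旋平均(非规则读, gather)
wpgc = (wtpart * gradphi[jtpart]).sum(axis=1) / ngyro

# 算法4 Charge: 粒子信息按索引规约回网格(非规则写, 并行实现时需原子操作)
density = np.zeros(ngrid)
np.add.at(density, jtpart.ravel(), wtpart.ravel())

print(wpgc.shape, round(float(density.sum()), 3))
```

粒子循环天然可并行,但`gradphi[jtpart]`与`np.add.at`对应的随机读/原子写正是后文性能优化的焦点.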
上述3个热点函数中4个算法的外层循环体均围绕粒子展开,且粒子间具有良好的独立性,因此面向异构处理器MT-3000的移植工作主要围绕粒子循环的向量指令集改写展开. 同时,为了更好地适配向量指令集的访存特性,对数据结构进行了改写:粒子数据采用SOA(struct of array)数据结构,网格数据采用AOS(array of struct)数据结构. 粒子数据数量多、独立性好,配合SOA数据结构更能发挥向量指令运算的优势;而网格数据数量远小于粒子数、访存量巨大,AOS数据结构能够充分发挥内存局部性. 针对数据结构的改写为后续程序的性能优化提供了重要保障.
3. 面向高计算密度异构设备的性能优化策略
基于上述对于程序热点函数的分析,回旋动理学PIC数值模拟算法涉及粒子与网格数据间的大量访存,尤其在面向扰动场网格数据的访存操作中存在非规则访问和原子写操作,二者对于访存性能提出了艰难的挑战,几个热点函数的访存与计算量统计如表1所示.
表 1 VirtEx热点函数的初始计算密度统计
Table 1. Initial Computational Density Statistics of VirtEx Hot Spot Functions

函数 | 浮点计算量/FLO | 访存量/B | 计算密度/FLOPB
PG | 269mp | 232mp | 1.15
PI | 462mp | 224mp | 1.98
Locate | 238mp | 200mp | 1.17
Charge | 158mp | 200mp | 0.75
注:变量mp表示粒子数量,变量前系数为热点函数中每个粒子计算访存量的统计值.

因此,如何通过性能优化策略,使计算密度仅为1~2 FLOPB的访存密集型模块发挥出高计算密度型异构设备的计算性能,是关键性的研究内容,也是本文的研究重点. 本章将从中间变量的即时计算、基于SM片上存储的软件缓存设计、热点函数合并3种优化方法展开介绍.
3.1 中间变量的即时计算
在传统基于CPU的程序设计中,开发者更倾向于主动寻找公用数据预先计算并暂存于内存中,利用多级高速缓存,通过索引获取数据,通过增加访存量换取计算量的减少. 然而,这种优化方法并不适合于基于宽向量计算的高计算密度型异构设备,大量引入访存会限制计算能力的发挥,同时使用索引的非规则访存模式也不适用于向量计算. 因此,考虑到新架构的特点,本文采用了与传统方法截然相反的优化方法来提高计算性能.
在VirtEx中,磁场、温度、密度、安全因子等中间变量可以将预计算转换为即时计算,引入热点函数中,按照每个粒子对中间变量的需求完成计算. 该操作可以有效减少热点函数中的规则访存和非规则访存,降低流水线中断次数,避免由于按索引访问所带来的向量重组操作.
通过热点函数分析,可以进行优化的中间变量主要分为2类. 一类以每个径向网格上的极向网格点数mtheta为例,该变量可以在热点函数中完成即时计算:
mtheta_i = 2Floor(πr_i/Δl + 0.5). (1)

另一类中间变量则难以直接解析化表达,例如粒子在非结构化扰动场网格中的位置索引信息igrid,其形式为

igrid_i = 1 + Σ_{j=0}^{i−1} mtheta_j, (2)
mtheta_i = 2πr_i/Δl + δ_i = ai + b + δ_i. (3)

如式(2)所示,变量igrid的计算基于变量mtheta的累加,而由于函数Floor引入的不连续性,变量igrid的数学表达式不能通过简单的变换和积分得出.

由于极向格点数远大于1,且径向格点在r坐标描述下是均匀的,当残差δ_i ≪ 1时,igrid同样可以表示为

igrid_i = ai² + bi + c + r_i, (4)

其中残差r_i远小于二次函数部分. 为了构建igrid的解析表达式,采用多项式拟合二次函数部分,而残差可以通过周期函数f降低到0.5以下,如图3所示. 从而igrid的解析表达式可以表示为如下形式:

igrid_i = Round[ai² + bi + c + f(i)]. (5)

得益于对平衡剖面信息的解析化表达和即时计算,函数PushInterpolation和函数Locate中的随机内存访问得到减少. 只有热点函数PushGather中存在针对扰动场回旋平均的随机内存访问,相应的优化方法将在下节论述.
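式(1)(2)所述"预计算查表"与"即时计算"两种方式的等价性,可用如下Python示意代码验证(其中Δl与径向坐标r的取值均为示意性假设):

```python
import numpy as np

dl = 0.05
r = np.linspace(0.1, 1.0, 32)    # 径向格点坐标(示意取值)

# 传统方式: 预计算查表
mtheta_table = (2 * np.floor(np.pi * r / dl + 0.5)).astype(int)

# 式(1)的即时计算方式: 在粒子循环内按需直接求值, 以计算换访存
def mtheta_on_the_fly(i):
    return int(2 * np.floor(np.pi * r[i] / dl + 0.5))

# 式(2): igrid为mtheta的前缀和
igrid = 1 + np.concatenate(([0], np.cumsum(mtheta_table)[:-1]))

print(all(mtheta_on_the_fly(i) == mtheta_table[i] for i in range(32)), igrid[:4])
```

两者结果完全一致,但即时计算省去了按索引查表带来的非规则访存与向量重组.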
3.2 基于SM片上存储的软件缓存设计
在基于CPU的通用架构中,内置的缓存机制允许开发者在编程时无需关注高速缓存,更多地将其视为自动化的访存系统. 而在MT-3000处理器中,出于性能考虑,内存和SM/AM之间以及SM/AM和向量寄存器之间的数据交换需要由程序员手动控制. 在处理内存的随机访问时,DMA接口操作需要同时传输索引和数据,造成了内存带宽的浪费. 为了解决这个问题,本文针对加速阵列内部片上存储SM设计了软件缓存机制,充分发挥内存结构和内存局部性的优势.
VirtEx热点函数中有2处非规则访问:一是函数Push中对扰动场网格数据的非规则读取,二是函数Charge中对扰动场网格数据更新的原子写操作.
函数Charge通过累加操作(+=)将粒子信息规约到网格上,由于粒子分散在子域内的多个进程,且网格数远小于粒子数,这将涉及原子操作. 读/写锁是MT-3000处理器中解决数据竞争的重要方法,因此基于读/写锁设计了一种多级同步的软件缓存机制:首先在SM中进行细粒度(如单字)更新,不涉及任何同步操作;其次,使用读/写锁保证缓存块在被换出时不会受到数据竞争,同时完成缓存块从SM到主存储器的累加操作.
函数PushGather主要通过4点回旋平均算法获取粒子在回旋运动轨迹上的扰动场信息. 由于片上缓存空间有限,回旋平均算法的随机访问特性会给主存带来巨大的访存开销. 因此基于片上SM存储设计了一种软件缓存机制:通过粒子索引将网格数据按缓存块读入,若向量宽度内所有粒子的索引均在缓存块内命中,则组装网格数据向量传到向量寄存器完成向量计算;若索引未在缓存块命中,则按所需索引完成缓存块数据的更新. 同时,考虑到性能和局部性的平衡,设计了64个缓存块,并使用哈希作为缓存块的标识.
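下面用一个简化的Python模型示意该软件缓存的命中率行为(64个缓存块、按块号哈希到槽位;块大小、索引分布等均为示意性假设,并非MT-3000上的实际实现),可以看出索引有序时命中率显著高于随机访问:

```python
import numpy as np

class SoftwareCache:
    """64缓存块、以哈希(取模)标识槽位的软件缓存示意, 仅统计命中率."""
    def __init__(self, n_blocks=64, block_size=128):
        self.block_size = block_size
        self.n_blocks = n_blocks
        self.tags = [-1] * n_blocks       # 每个槽位当前驻留的缓存块号
        self.hits = 0
        self.misses = 0

    def access(self, index):
        block_id = index // self.block_size
        slot = block_id % self.n_blocks   # 哈希作为缓存块标识
        if self.tags[slot] == block_id:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[slot] = block_id    # 未命中: 换入所需缓存块

rng = np.random.default_rng(1)
sorted_idx = np.sort(rng.integers(0, 1 << 16, 20000))  # 排序后的格点索引
random_idx = rng.permutation(sorted_idx)               # 同一批索引, 乱序访问

rates = {}
for name, idx in (("sorted", sorted_idx), ("random", random_idx)):
    cache = SoftwareCache()
    for i in idx:
        cache.access(int(i))
    rates[name] = cache.hits / len(idx)
    print(name, round(rates[name], 3))
```

这也正是下文引入按格点排序以提高空间局部性的动机.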
在软件缓存机制实施后,非规则访存被有效转化为缓存命中问题,访存带宽的压力得到缓解. 进一步地,考虑到回旋平均算法需获取轨迹上每一点的扰动场信息,由于粒子在速度空间分布的随机性,在更新粒子位置后,极向方向的粒子分布会被分散,从而扰乱粒子在非结构化扰动场网格上的分布. 程序原有的基于粒子所在径向网格点的排序算法空间局部性不足,而加速阵列中的片上存储空间有限,不足以支撑高计算密度的异构设备,导致缓存命中率降低.
图4显示了排序算法优化前后粒子序号与相应的非结构化网格序号之间的关系,其中psi排序是原有的径向排序算法,igrid排序是改进后的排序算法,按照粒子所在的网格点排序,增强了空间局部性. 优化后的排序采用桶排序算法,每个桶对应粒子所属的网格点. 由于粒子运动的对称性,每个桶的容量总是与每个网格的粒子数同阶,因此该算法的复杂度与原有的psi排序同为O(N).
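按格点的桶排序可用如下Python代码示意(粒子数、格点数为示意性假设):桶即粒子所属格点,先统计每桶粒子数,再按前缀和写入,整体复杂度为O(N).

```python
import numpy as np

rng = np.random.default_rng(2)
ngrid, nparticle = 256, 100_000
igrid_of_particle = rng.integers(0, ngrid, nparticle)   # 每个粒子所属格点

# 桶排序: 统计每桶粒子数 -> 前缀和得到各桶起始偏移 -> 按偏移写入
counts = np.bincount(igrid_of_particle, minlength=ngrid)
offsets = np.concatenate(([0], np.cumsum(counts)[:-1]))
order = np.empty(nparticle, dtype=np.int64)
cursor = offsets.copy()
for p, g in enumerate(igrid_of_particle):
    order[cursor[g]] = p
    cursor[g] += 1

print(bool(np.all(np.diff(igrid_of_particle[order]) >= 0)))  # True: 已按格点有序
```

排序后同一格点的粒子在内存中连续,回旋平均时对扰动场网格的访问自然集中于少数缓存块.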
不同排序算法下针对扰动场变量gradphi的缓存命中率如表2所示. 在64个缓存块、1 024 B缓存块大小的情况下,扰动场变量gradphi在不排序时命中率为77.99%,接近psi排序下的84.47%;而采用igrid排序可以获得99.15%的缓存命中率. 得益于超高的缓存命中率,针对变量gradphi的非规则访问可以近似视作规则访问.
表 2 不同排序算法下针对扰动场变量gradphi的缓存命中率
Table 2. Cache Hit Rate for Disturbance Field Variable gradphi Under Different Sorting Algorithms

排序算法 | 缓存命中率/%
不排序 | 77.99
psi排序 | 84.47
igrid排序 | 99.15

3.3 热点函数合并
通过热点函数面向异构加速器MT-3000的移植以及上述几种优化方式的应用,非规则访存操作已被近似消除,减轻了访存带宽的压力. 优化后,热点函数PG,PI,Locate的浮点计算量、访存量以及计算密度的统计数据如表3所示,其中mp表示粒子数量,考虑到每个粒子的操作相同,其在统计中作为系数表示. 从数据上可以看出:函数PG中的回旋平均操作主要涉及内存访问,其计算密度仅为1.39 FLOPB;时间占比最高的函数PI受基于粒子的计算特点限制,计算密度仅为12.4 FLOPB;而函数Locate在经过中间变量即时计算优化后,计算密度达到56.3 FLOPB. 综上所述,时间占比高达40%的函数Push仍需进一步提高计算访存比.

表 3 热点函数合并优化后计算密度统计
Table 3. Computational Density Statistics After Hotspot Function Fusion and Optimization

函数 | 浮点计算量/FLO | 访存量/B | 计算密度/FLOPB
PG | 277mp | 198.64mp | 1.39
PI | 1 888mp | 152mp | 12.4
Locate | 12 161mp | 216mp | 56.3
PushOpt | 14 326mp | 134.64mp | 106.4
注:变量mp表示粒子数量,变量前系数为热点函数中每个粒子计算访存量的统计值.

函数PG,PI,Locate在PIC算法中是计算粒子运动的3个相关函数:函数Locate负责计算插值系数,函数PG负责获取网格数据,函数PI负责推动粒子,三者在算法上具备可合并性. 将函数Locate引入函数Push中,并将函数PG和PI合并,合并后输入仅为粒子信息和网格信息,输出为粒子信息,减少了对大量中间变量的读写. 优化后函数PushOpt的计算密度达到106.4 FLOPB,进一步缩小了与理论值的差距.
4. 优化性能测试及分析
4.1 中等规模基准算例性能测试
在这个基准算例测试中,我们用1个MPI进程控制1个MT-3000加速集群(簇),在天河新一代超算系统的120个节点上使用480个MPI进程和480个簇. 该基准测试使用了1.23×10⁶个网格,模拟了2.5×10⁹个粒子.
表4显示了CPU版本和优化版本在主循环和热点函数上的性能对比,CPU版本的3个主要热点函数的时间占比达到86.06%. 结果显示,基于MT-3000处理器的应用加速效果良好,总体速度提高了4.2倍,其中函数Push和函数Locate分别实现了10.9倍和13.3倍的加速,含有原子操作的函数Charge实现了16.2倍的性能提升.

表 4 基准算例的性能表现
Table 4. The Performance of Benchmark Examples

热点函数 | CPU版本计算时间/s | CPU版本占比/% | 优化后计算时间/s | 优化后占比/% | 加速比
主循环 | 845.63 | 100 | 201.46 | 100 | 4.2
Push | 323.86 | 38.30 | 29.64 | 14.71 | 10.9
Locate | 128.69 | 15.22 | 9.67 | 4.80 | 13.3
Charge | 275.19 | 32.54 | 16.98 | 8.43 | 16.2

4.2 扩展性测试
本节展示了优化后的VirtEx程序的弱扩展性测试结果. 在弱扩展性测试中,基准测试为120个节点,使用了3.86×10⁵个网格,模拟了3.7×10⁹个粒子;随着节点数增加至3 840个,模拟的粒子数也相应增加到1.18×10¹¹. 经过多轮测试取平均后的并行效率如图5所示:在天河新一代超算系统3 840个节点的5 898 240个加速器核心上,并行效率为88.4%,展示了良好的弱扩展性.
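弱扩展性测试要求每节点粒子负载近似不变,可由本节数据简单核算(仅为算术复核):

```python
# 弱扩展性: 基准规模与最大规模下的每节点粒子负载
base_nodes, base_particles = 120, 3.7e9
peak_nodes, peak_particles = 3840, 1.18e11

per_node_base = base_particles / base_nodes
per_node_peak = peak_particles / peak_nodes
print(round(per_node_base / 1e7, 2), round(per_node_peak / 1e7, 2))  # 3.08 3.07
```

两种规模下每节点均约为3.1×10⁷个粒子,负载基本一致,故88.4%的并行效率反映的是通信与同步开销随规模的增长.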
5. 结 论
基于天河新一代超算系统的异构加速器MT-3000,本文对大规模并行磁约束聚变回旋动理学模拟代码VirtEx进行了代码移植和性能优化,围绕高计算密度型系统与访存密集型应用之间的矛盾,采用中间变量的即时计算、定制化的软件缓存设计、空间局部性优化、热点函数合并等优化策略,并通过数据分析验证了优化的合理性. 在基准测试中,VirtEx的优化显示了良好的加速效果:函数Push提速10.9倍,函数Locate提速13.3倍,函数Charge提速16.2倍,整个程序提速4.2倍;并在3 840个节点的5 898 240个加速器核心上展示了良好的可扩展性,并行效率为88.4%.
作者贡献声明:李青峰负责程序设计、移植、测试,并撰写论文;李跃岩负责设计并实现优化算法;栾钟治负责程序瓶颈分析和解决方案提供;张文禄提供了针对程序原理和算法方面的指导;龚春叶提供了针对异构加速设备的优化指导;郑刚提供了系统测试环境及保障工作;康波提供了共性技术的指导;孟祥飞负责设计研究方案并把控研究进度.
-
表 1 用于量子机器学习模型的设备
Table 1 Device Used for Quantum Machine Learning Model
模型 | 门电路 | 量子退火机 | 脉冲电路
分类器 | √ | × | √
量子卷积神经网络 | √ | × | ×
影子电路 | √ | × | ×
量子玻恩机 | √ | × | ×
量子玻尔兹曼机 | √ | √ | ×
量子自编码器 | √ | √ | ×
量子生成对抗网络 | √ | × | ×
量子强化学习 | √ | × | ×
量子电路结构搜索 | √ | × | √
注:"√"表示可以实现的设备,"×"表示不能实现的设备.

表 2 常用基本量子逻辑门
Table 2 Frequently Used Basic Quantum Gates
量子逻辑门 | 符号表示 | 酉矩阵表示(按行列出)
单位门 | I | [[1,0],[0,1]]
Pauli-X门 | X | [[0,1],[1,0]]
Pauli-Y门 | Y | [[0,−i],[i,0]]
Pauli-Z门 | Z | [[1,0],[0,−1]]
Hadamard门 | H | (√2/2)[[1,1],[1,−1]]
Phase门 | S | [[1,0],[0,i]]
交换门 | SWAP | [[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]]
受控非门 | CNOT,CX | [[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]]
受控Y门 | CY | [[1,0,0,0],[0,1,0,0],[0,0,0,−i],[0,0,i,0]]
受控Z门 | CZ | [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,−1]]
Toffoli门 | Toffoli,CCNOT | 8×8矩阵:前6个计算基态上为单位阵,交换|110⟩与|111⟩
Fredkin门 | Fredkin,CSWAP | 8×8矩阵:交换|101⟩与|110⟩,其余计算基态上为单位阵

表 3 经典优化算法及模型使用偏好
Table 3 Classical Optimization Algorithms and Usage Preference of Models
经典优化器 | 基于梯度 | 有监督模型 | 无监督模型 | 半监督模型 | 强化学习模型 | 电路结构搜索
Adam[37] | 是 | TTNs[10,32-33], QCCNN[38-39], VSQL[34] | QCBM[31,40-42], QBM[14], QAE[15,43] | QGAN[19,44-50] | QDQN[16,51], QActor-critic[52], QMARL[53-54] | QuantumNAS[12], QAS[55], MQNE[56]
(mini-batch)SGD[57] | 是 | HNN[58] | — | QGAN[59] | QDDQN[60] | —
AMSGRAD[61] | 是 | — | — | QGAN[50,62] | — | —
BFGS/L-BFGS-B[63-64] | 是 | — | QCBM[40], QAE[65-66] | — | — | —
Nesterov moment[67] | 是 | QCNN[39] | — | — | — | —
RMSProp[68] | 是 | MPS-VQC[30] | — | — | VQ-DQN[69] | —
基于梯度优化的其他模型 | 是 | QCNN[11,70-71] | QCBM[41], QBM[72] | QGAN[73-74] | — | —
PSO[75] | 否 | — | QCBM[13,76] | — | — | —
SPSA[77] | 否 | — | — | QGAN[62] | — | CRLQAS[78]
CMA-ES[79] | 否 | — | QCBM[40,80] | — | — | —
GA[81] | 否 | — | QCBM[82], QAE[83] | — | — | —

表 4 量子机器学习任务常用数据集
Table 4 Frequently Used Datasets of Quantum Machine Learning Tasks
任务 | 数据集/交互环境
有监督学习 | MNIST[10,30,32-33,38,70-71,98], Iris[10,32,58], BAS[58], 量子数据[32]
无监督学习 | MNIST[103], BAS[13,15,31,40,42,72,76], 金融数据[41,80], 药物数据QM9[43], 生成概率分布[15,40,82,101], 量子数据[101]
半监督学习 | MNIST[19,49-59], BAS[19,44-45], QM9[47,104], 生成概率分布数据[46,62], 量子数据[73]
电路结构搜索 | MNIST[12,55-56], 量子数据[56]
强化学习 | frozen-lake[69,105], cart pole[16,51-52,105-106]

表 5 常用模拟平台及模型
Table 5 Frequently Used Simulation Platforms and Models
模拟平台 | 机构 | 模型 | 语言
Qiskit | IBM | HNN[58], QGAN[50], QBM[72] | Python
TFQ | Google | TTNs[32], MERAs[32], VQTN[10], QCNN[70], QGAN[107], QRL[51] | Python
Pennylane | Xanadu | QTN[33], QCNN[39], QCBM[31], QGAN[47], QVAE[43], VQ-DQN[69], QRL[52], QAS[108] | Python
Torchquantum | MIT | QuantumNAS[12], QAE[15], QMARL[53] | Python
Yao | QuantumBFS | MQNE[56], QCBM[40], QGAN[44] | Julia
Paddle Quantum | 百度 | QCL[84], VSQL[34], QAE[65], QGAN[74] | Python
VQNet | 本源量子 | VQM[109], QCNN[11,110], VSQL[34], QAE[65], QGAN[50], VQ-DQN[69] | C++

表 6 分类任务上基于变分量子电路的机器学习算法
Table 6 Machine Learning Algorithms Based on Variational Quantum Circuits for Classification Tasks
模型 | 数据集 | 任务 | 环境 | 量子位 | 参数量 | 准确率(训练集/测试集)/%
VQM[98] | MNIST | 二分类 | 模拟 | 17 | 136 | 90
TTN[32] | Iris | 二分类 | 模拟 | 4 | 7 | 98.92
TTN[32] | MNIST | 二分类 | 模拟 | 8 | 7 | 97.63
MERA[32] | MNIST | 二分类 | 模拟 | 8 | 11 | 98.86
Hybrid[32](TTN预训练过的MERA) | MNIST | 二分类 | 模拟 | 8 | 11 | 98.46
TTN[32] | 合成量子数据集 | 二分类 | 模拟 | 8 | 7 | 60.45
PCA-VQC[30] | MNIST | 二分类 | 模拟 | 4 | 12 | 87.29/87.34
MPS-VQC[30] | MNIST | 二分类 | 模拟 | 4 | 12 | 99.91/99.44
QTN-VQC[33] | MNIST | 二分类 | 模拟 | 8 | 328 | 91.43
QTN-VQC[33] | MNIST | 二分类 | 模拟 | 12 | 4 464 | 92.36
QTN-VQC[33] | MNIST | 二分类 | 模拟 | 16 | 600 | 92.28
VQTN[10] | Iris | 三分类 | 模拟 | 2 | 3 | 100
VQTN(TTN)[10] | MNIST | 二分类 | 模拟 | 8 | 12 | 97.80
VQTN(TTN)[10] | MNIST | 二分类 | 模拟 | 16 | 28 | 97.45
VQTN(MERA)[10] | MNIST | 二分类 | 模拟 | 8 | 18 | 97.92
VQTN[10] | MNIST-4 | 四分类 | 模拟 | — | — | 82.19
QCNN[70] | MNIST | 十分类 | 模拟 | 4 | 6 | 95
Noisy QCNN[71] | MNIST | 二分类 | 模拟 | 14 | 46 | 94.8/96.0
Noisy QCNN[71] | MNIST | 十分类 | 模拟 | 14 | 379 | 74.2/74.0
Noisy-free QCNN[71] | MNIST | 二分类 | 模拟 | 14 | 46 | 95.4/96.3
Noisy-free QCNN[71] | MNIST | 十分类 | 模拟 | 14 | 379 | 75.6/74.3
QCCNN[38] | Tetri | 二分类 | 模拟 | 4 | 16 | ≈100
QCCNN[38] | Tetri | 四分类 | 模拟 | 4 | 16 | ≈100
QMLP[111] | MNIST | 十分类 | 模拟 | 16 | 128 | 75
QMLP[111](比特翻转) | MNIST | 十分类 | 模拟 | 16 | 128 | 63
QMLP[111](相位翻转) | MNIST | 十分类 | 模拟 | 16 | 128 | 67
VSQL[34] | MNIST | 二分类 | 模拟 | 2 | 35 | 99.52
VSQL[34] | MNIST(1 000个样本) | 十分类 | 模拟 | 9 | 928 | 87.39
VSQL[34] | 含噪量子态 | 二分类 | 模拟 | 2 | — | 100
VSQL[34] | 不含噪量子态 | 三分类 | 模拟 | 2 | — | 100
HNN[58] | BAS | 二分类 | 模拟 | 10 | 20 | 100
HNN[58] | BAS | 二分类 | 量子 | 10 | 20 | 33.33
HNN[58] | Iris | 三分类 | 模拟 | 10 | 20 | 89.88/91.5
HNN[58] | Iris | 三分类 | 量子 | 10 | 20 | 28.12/37.5
注:数据取相应论文给出的最优模型数据,使用相同数据集的相同任务之间仍存在差异,例如MNIST数据集二分类任务可以为2个数字的分类、是否为偶数的分类、是否大于4的分类等,并非完全一致. 模拟环境是指使用经典计算机模拟的环境,量子环境是指在量子计算机上运行相应算法.

表 7 QGAN分类及相关研究
Table 7 Classification of QGANs and Related Researches
任务 | 生成器 | 判别器 | 名称 | 相关研究
经典 | 经典 | 经典 | CT-CGCD | 文献[126]
经典 | 经典 | 量子 | CT-CGQD | 文献[46,104,107]
经典 | 量子 | 经典 | CT-QGCD | 文献[19,44-49,59]
经典 | 量子 | 量子 | CT-QGQD | 文献[46,59]
量子 | 经典 | 经典 | QT-CGCD | 文献[127]
量子 | 经典 | 量子 | QT-CGQD | —
量子 | 量子 | 经典 | QT-QGCD | 文献[50,62]
量子 | 量子 | 量子 | QT-QGQD | 文献[73-74,124,128-129]
注:采用文献[125]给出的命名方式,名称中的字母T,G,D分别表示任务、生成器、判别器,C,Q分别表示通过经典还是量子方法完成. 经典生成器与量子判别器构成的QGAN对于量子数据无法收敛到纳什均衡,无法完成量子任务.

表 8 量子强化学习算法
Table 8 Quantum Reinforcement Learning Algorithms
模型 | 测试环境 | 环境 | 量子位 | 参数量 | 回合数 | 回报
VQ-DQN[69] | frozen-lake | 模拟 | 4 | 28 | 198 | 0.9
VQ-DQN(pretrained)[69] | frozen-lake | 量子 | 4 | 28 | 1 | 0.95
VQ-DQN[69] | cognitive-radio | 模拟 | 4 | 28 | 10* | 100
VQ-DQN(pretrained)[69] | cognitive-radio | 量子 | 4 | 28 | 1 | 100
Quantum-DQN[105] | frozen-lake v0 | 模拟 | 4 | 5层 | 3100 | 1.0
Quantum-DQN[105] | frozen-lake v0 | 模拟 | 4 | 10层 | 2200 | 1.0
Quantum-DQN[105] | frozen-lake v0 | 模拟 | 4 | 15层 | 1700 | 1.0
Quantum-DQN[105] | Cart Pole v0(optimal) | 模拟 | 4 | 62 | 186 | 195
Quantum-DQN[105] | Cart Pole v0(sub-optimal) | 模拟 | 4 | 62 | 3000 | 176
Quantum Actor-critic[52] | Cart Pole | 模拟 | 4 | 36 | 6000 | 105
QLSTM-DRQN-1[16] | Cart Pole(Full Observable) | 模拟 | 8 | 150 | 350* | 100*
QLSTM-DRQN-1[16] | Cart Pole(Partially Observable) | 模拟 | 8 | 146 | 675* | 150*
QLSTM-DRQN-2[16] | Cart Pole(Full Observable) | 模拟 | 8 | 270 | 420* | 125*
QLSTM-DRQN-2[16] | Cart Pole(Partially Observable) | 模拟 | 8 | 266 | 750* | 100*
QMARL[53] | Single-Hop Offloading | 模拟 | 4 | 50 | 500 | −3.0
改进CTDE QMARL[54] | Smart Factory | 模拟 | 16 | 54 | 980 | −37.0
注:带*的数值表示原论文中未给出精确数值,为本文估算后得到的数值. 结果为各论文中给出的最优参数的模型. Quantum-DQN模型未给出具体参数量,层数与参数量正相关.

表 9 量子架构搜索算法
Table 9 Quantum Architecture Searching Algorithms
模型 | 数据集 | 任务 | 环境 | 量子位 | 最优结构参数量 | 准确率/%
QuantumNAS[12] | MNIST | 二分类 | 量子 | 5 | 22 | 95
QuantumNAS[12] | MNIST | 四分类 | 量子 | 5 | 22 | 75
QuantumNAS[12] | MNIST | 十分类 | 量子 | 15 | — | 32.5
QuantumNAS[12] | Fashion-2 | 二分类 | 量子 | 5 | 22 | 92
QuantumNAS[12] | Fashion-4 | 四分类 | 量子 | 5 | 36 | 85
MQNE[56] | MNIST | 二分类 | 模拟 | 9 | 106 | 97
MQNE[56] | Cancer | 二分类 | 模拟 | 7 | 68 | 94.6
MQNE[56] | SPT | 量子态分类 | 模拟 | 8 | 46 | 100
QAS[55] | Fashion-MNIST | 二分类 | 模拟 | 10 | — | 92.4
QAS[108] | 合成数据集(无噪声)[144] | 二分类 | 模拟 | 3 | — | >90
QAS[108] | 合成数据集(有噪声)[144] | 二分类 | 模拟 | 3 | — | 100
CRLQAS[78] | — | VQE | 模拟 | — | — | —
NAPA[142] | — | VQE最大割 | 量子 | — | — | —
[1] Hilbert M, López P. The world’s technological capacity to store, communicate, and compute information[J]. Science, 2011, 332(6025): 60−65 doi: 10.1126/science.1200970
[2] Arute F, Arya K, Babbush R, et al. Quantum supremacy using a programmable superconducting processor[J]. Nature, 2019, 574(7779): 505−510 doi: 10.1038/s41586-019-1666-5
[3] Huang Cupjin, Zhang Fang, Newman M, et al. Classical simulation of quantum supremacy circuits[J]. arXiv preprint, arXiv: 2005.06787, 2020
[4] Pan Feng, Zhang Pan. Simulating the Sycamore quantum supremacy circuits[J]. arXiv preprint, arXiv: 2103.03074, 2021
[5] Zhu Qingling, Cao Sirui, Chen Fusheng, et al. Quantum computational advantage via 60-qubit 24-cycle random circuit sampling[J]. Science Bulletin, 2022, 67(3): 240−245 doi: 10.1016/j.scib.2021.10.017
[6] Feng Congcong, Zhao Bo, Zhou Xin, et al. An enhanced quantum k-nearest neighbor classification algorithm based on polar distance[J]. Entropy, 2023, 25(1): 127 doi: 10.3390/e25010127
[7] Li Jing, Gao Fei, Lin Song, et al. Quantum k-fold cross-validation for nearest neighbor classification algorithm[J]. Physica A: Statistical Mechanics and Its Applications, 2023, 611: 128435 doi: 10.1016/j.physa.2022.128435
[8] Cerezo M, Sharma K, Arrasmith A, et al. Variational quantum state eigensolver[J]. NPJ Quantum Information, 2022, 8(1): 113 doi: 10.1038/s41534-022-00611-6
[9] Zhou Zeqiao, Du Yuxuan, Tian Xinmei, et al. Qaoa-in-qaoa: Solving large-scale maxcut problems on small quantum machines[J]. Physical Review Applied, 2023, 19(2): 024027 doi: 10.1103/PhysRevApplied.19.024027
[10] Huang Rui, Tan Xiaoqing, Xu Qingshan. Variational quantum tensor networks classifiers[J]. Neurocomputing, 2021, 452: 89−98 doi: 10.1016/j.neucom.2021.04.074
[11] Cong I, Choi S, Lukin M D. Quantum convolutional neural networks[J]. Nature Physics, 2019, 15(12): 1273−1278 doi: 10.1038/s41567-019-0648-8
[12] Wang Hanrui, Ding Yongshan, Gu Jiaqi, et al. QuantumNAS: Noise-adaptive search for robust quantum circuits[C]//Proc of the 28th Annual Int Symp on High-Performance Computer Architecture. Piscataway, NJ: IEEE, 2022: 692−708
[13] Benedetti M, Garcia-Pintos D, Perdomo O, et al. A generative modeling approach for benchmarking and training shallow quantum circuits[J]. NPJ Quantum Information, 2019, 5(1): 45 doi: 10.1038/s41534-019-0157-8
[14] Zoufal C, Lucchi A, Woerner S. Variational quantum Boltzmann machines[J]. Quantum Machine Intelligence, 2021, 3(1): 7 doi: 10.1007/s42484-020-00033-7
[15] Wu S R, Li C T, Cheng H C. Efficient data loading with quantum autoencoder[C/OL]//Proc of the 48th Int Conf on Acoustics, Speech and Signal Processing. Piscataway, NJ: IEEE, 2023[2023-09-14]. https://ieeexplore.ieee.org/abstract/document/10096496
[16] Chen Guoming, Chen Qiang, Long Shun, et al. Quantum convolutional neural network for image classification[J]. Pattern Analysis and Applications, 2023, 26(2): 655−667 doi: 10.1007/s10044-022-01113-z
[17] Akshay V, Philathong H, Morales M E, et al. Reachability deficits in quantum approximate optimization[J]. Physical Review Letters, 2020, 124(9): 090504 doi: 10.1103/PhysRevLett.124.090504
[18] Anand A, Alperin-Lea S, Choquette A, et al. Exploring the role of parameters in variational quantum algorithms[J]. arXiv preprint, arXiv: 2209.14405, 2022
[19] Zhou Nanrun, Zhang Tianfeng, Xie Xinwen, et al. Hybrid quantum classical generative adversarial networks for image generation via learning discrete distribution[J]. Signal Processing: Image Communication, 2023, 110: 116891 doi: 10.1016/j.image.2022.116891
[20] Romero J, Babbush R, Mcclean J R, et al. Strategies for quantum computing molecular energies using the unitary coupled cluster ansatz[J]. Quantum Science and Technology, 2018, 4(1): 014008 doi: 10.1088/2058-9565/aad3e4
[21] Bang J, Lim J, Kim M S, et al. Quantum learning machine[J]. arXiv preprint, arXiv: 0803.2976, 2008
[22] Cerezo M, Arrasmith A, Babbush R, et al. Variational quantum algorithms[J]. Nature Reviews Physics, 2021, 3(9): 625−644 doi: 10.1038/s42254-021-00348-9
[23] Schuld M, Killoran N. Quantum machine learning in feature Hilbert spaces[J]. Physical Review Letters, 2019, 122(4): 040504 doi: 10.1103/PhysRevLett.122.040504
[24] Lloyd S, Schuld M, Ijaz A, et al. Quantum embeddings for machine learning[J]. arXiv preprint, arXiv: 2001.03622, 2020
[25] Schuld M. Supervised quantum machine learning models are kernel methods[J]. arXiv preprint, arXiv: 2101.11020, 2021
[26] Grover L, Rudolph T. Creating superpositions that correspond to efficiently integrable probability distributions[J]. arXiv preprint, quant-ph/0208112, 2002
[27] Kitaev A, Webb W A. Wavefunction preparation and resampling using a quantum computer[J]. arXiv preprint, arXiv: 0801.0342, 2008
[28] Lloyd S. Universal quantum simulators[J]. Science, 1996, 273(5278): 1073−1078 doi: 10.1126/science.273.5278.1073
[29] Kandala A, Mezzacapo A, Temme K, et al. Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets[J]. Nature, 2017, 549(7671): 242−246 doi: 10.1038/nature23879
[30] Chen S Y C, Huang C M, Hsing C W, et al. Hybrid quantum-classical classifier based on tensor network and variational quantum circuit[J]. arXiv preprint, arXiv: 2011.14651, 2020
[31] Gong Lihua, Xing Lingzhi, Liu Sihang, et al. Born machine model based on matrix product state quantum circuit[J]. Physica A: Statistical Mechanics and its Applications, 2022, 593: 126907 doi: 10.1016/j.physa.2022.126907
[32] Grant E, Benedetti M, Cao Shuxiang, et al. Hierarchical quantum classifiers[J]. NPJ Quantum Information, 2018, 4(1): 65 doi: 10.1038/s41534-018-0116-9
[33] Qi Jun, Yang Chaohan, Chen Pinyu. QTN-VQC: An end-to-end learning framework for quantum neural networks[J]. Physica Scripta, 2023, 99(1): 015111
[34] Li Guangxi, Song Zhixin, Wang Xin. VSQL: Variational shadow quantum learning for classification[C]//Proc of the 35th AAAI Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2021: 8357−8365
[35] Grimsley H R, Economou S E, Barnes E, et al. An adaptive variational algorithm for exact molecular simulations on a quantum computer[J]. Nature Communications, 2019, 10(1): 3007 doi: 10.1038/s41467-019-10988-2
[36] Bittel L, Kliesch M. Training variational quantum algorithms is NP-hard[J]. Physical Review Letters, 2021, 127(12): 120502 doi: 10.1103/PhysRevLett.127.120502
[37] Kingma D P, Ba J. Adam: A method for stochastic optimization[J]. arXiv preprint, arXiv: 1412.6980, 2014
[38] Liu Junhua, Lim K H, Wood K L, et al. Hybrid quantum-classical convolutional neural networks[J]. Science China Physics, Mechanics & Astronomy, 2021, 64(9): 290311
[39] Hur T, Kim L, Park D K. Quantum convolutional neural network for classical data classification[J]. Quantum Machine Intelligence, 2022, 4(1): 3 doi: 10.1007/s42484-021-00061-x
[40] Liu Jinguo, Wang Lei. Differentiable learning of quantum circuit born machines[J]. Physical Review A, 2018, 98(6): 062324 doi: 10.1103/PhysRevA.98.062324
[41] Coyle B, Henderson M, Le J C J, et al. Quantum versus classical generative modelling in finance[J]. Quantum Science and Technology, 2021, 6(2): 024013 doi: 10.1088/2058-9565/abd3db
[42] Leyton-Ortega V, Perdomo-Ortiz A, Perdomo O. Robust implementation of generative modeling with parametrized quantum circuits[J]. Quantum Machine Intelligence, 2021, 3(1): 17 doi: 10.1007/s42484-021-00040-2
[43] Li Junde, Ghosh S. Scalable variational quantum circuits for autoencoder-based drug discovery[C]//Proc of the 25th Design, Automation and Test in Europe Conf and Exhibition. Piscataway, NJ: IEEE, 2022: 340−345
[44] Zeng Jinfeng, Wu Yufeng, Liu Jinguo, et al. Learning and inference on generative adversarial quantum circuits[J]. Physical Review A, 2019, 99(5): 052306 doi: 10.1103/PhysRevA.99.052306
[45] Situ Haozhen, He Zhimin, Wang Yuyi et al. Quantum generative adversarial network for generating discrete distribution[J]. Information Sciences, 2020, 538: 193−208 doi: 10.1016/j.ins.2020.05.127
[46] Romero J, Aspuru-Guzik A. Variational quantum generators: Generative adversarial quantum machine learning for continuous distributions[J]. Advanced Quantum Technologies, 2021, 4(1): 2000003 doi: 10.1002/qute.202000003
[47] Li Junde, Topaloglu R O, Ghosh S. Quantum generative models for small molecule drug discovery[J]. IEEE Transactions on Quantum Engineering, 2021, 2: 3103308
[48] Herr D, Obert B, Rosenkranz M. Anomaly detection with variational quantum generative adversarial networks[J]. Quantum Science and Technology, 2021, 6(4): 045004 doi: 10.1088/2058-9565/ac0d4d
[49] Tsang S L, West M T, Erfani S M, et al. Hybrid quantum-classical generative adversarial network for high resolution image generation[J]. IEEE Transactions on Quantum Engineering, 2023, 4: 3102419
[50] Zoufal C, Lucchi A, Woerner S. Quantum generative adversarial networks for learning and loading random distributions[J]. NPJ Quantum Information, 2019, 5(1): 103 doi: 10.1038/s41534-019-0223-2
[51] Lockwood O, Si Mei. Reinforcement learning with quantum variational circuit[C]//Proc of the 16th AAAI Conf on Artificial Intelligence and Interactive Digital Entertainment. Menlo Park, CA: AAAI, 2020: 245−251
[52] Kwak Y, Yun W J, Jung S, et al. Introduction to quantum reinforcement learning: Theory and pennylane-based implementation[C]//Proc of the 12th Int Conf on Information and Communication Technology Convergence. Piscataway, NJ: IEEE, 2021: 416−420
[53] Yun W J, Kwak Y, Kim J P, et al. Quantum multiagent reinforcement learning via variational quantum circuit design[C]//Proc of the 42nd Int Conf on Distributed Computing Systems. Piscataway, NJ: IEEE, 2022: 1332−1335
[54] Yun W J, Kim J P, Jung S, et al. Quantum multi-agent actor-critic neural networks for internet-connected multirobot coordination in smart factory management[J]. IEEE Internet of Things Journal, 2023, 10(11): 9942−9952 doi: 10.1109/JIOT.2023.3234911
[55] Zhang Shixin, Hsieh Changyu, Zhang Shengyu, et al. Neural predictor based quantum architecture search[J]. Machine Learning: Science and Technology, 2021, 2(4): 045027 doi: 10.1088/2632-2153/ac28dd
[56] Lu Zhide, Shen Peixin, Deng Dongling. Markovian quantum neuroevolution for machine learning[J]. Physical Review Applied, 2021, 16(4): 044039 doi: 10.1103/PhysRevApplied.16.044039
[57] Robbins H, Monro S. A stochastic approximation method[J]. The Annals of Mathematical Statistics, 1951, 22(3): 400−407 doi: 10.1214/aoms/1177729586
[58] Arthur D. A hybrid quantum-classical neural network architecture for binary classification[J]. arXiv preprint, arXiv: 2201.01820, 2022
[59] Huang Heliang, Du Yuxuan, Gong Ming, et al. Experimental quantum generative adversarial networks for image generation[J]. Physical Review Applied, 2021, 16(2): 024051 doi: 10.1103/PhysRevApplied.16.024051
[60] Heimann D, Hohenfeld H, Wiebe F, et al. Quantum deep reinforcement learning for robot navigation tasks[J]. arXiv preprint, arXiv: 2202.12180, 2022
[61] Reddi S J, Kale S, Kumar S. On the convergence of adam and beyond[J]. arXiv preprint, arXiv: 1904.09237, 2019
[62] Agliardi G, Prati E. Optimal tuning of quantum generative adversarial networks for multivariate distribution loading[J]. Quantum Reports, 2022, 4(1): 75−105 doi: 10.3390/quantum4010006
[63] Nocedal J, Wright S J. Numerical Optimization[M]. New York: Springer, 2006
[64] Zhu Ciyou, Byrd R H, Lu Peihuang, et al. Algorithm 778: LBFGS-B: Fortran subroutines for large-scale bound-constrained optimization[J]. ACM Transactions on mathematical software, 1997, 23(4): 550−560 doi: 10.1145/279232.279236
[65] Romero J, Olson J P, Aspuru-Guzik A. Quantum autoencoders for efficient compression of quantum data[J]. Quantum Science and Technology, 2017, 2(4): 045001 doi: 10.1088/2058-9565/aa8072
[66] Bravo-Prieto C. Quantum autoencoders with enhanced data encoding[J]. Machine Learning: Science and Technology, 2021, 2(3): 035028 doi: 10.1088/2632-2153/ac0616
[67] Sutskever I, Martens J, Dahl G, et al. On the importance of initialization and momentum in deep learning[C]//Proc of the 30th Int Conf on Machine Learning. New York: PMLR, 2013: 1139−1147
[68] Tieleman T. Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude[J]. COURSERA: Neural Networks for Machine Learning, 2012, 4(2): 26−31
[69] Chen S Y C, Yang C H H, Qi Jun, et al. Variational quantum circuits for deep reinforcement learning[J]. IEEE Access, 2020, 8: 141007−141024 doi: 10.1109/ACCESS.2020.3010470
[70] Oh S, Choi J, Kim J. A tutorial on quantum convolutional neural networks (QCNN)[C]//Proc of the 11th Int Conf on Information and Communication Technology Convergence (ICTC). Piscataway, NJ: IEEE, 2020: 236−239
[71] Wei Shijie, Chen Yanhu, Zhou Zengrong, et al. A quantum convolutional neural network on NISQ devices[J]. AAPPS Bulletin, 2022, 32(1): 2 doi: 10.1007/s43673-021-00030-3
[72] Shingu Y, Seki Y, Watabe S, et al. Boltzmann machine learning with a variational quantum algorithm[J]. Physical Review A, 2021, 104(3): 032413 doi: 10.1103/PhysRevA.104.032413
[73] Chakrabarti S, Huang Yiming, Li Tongyang, et al. Quantum wasserstein generative adversarial networks[C]//Proc of the 33rd Conf on Neural Information Processing Systems (NeurIPS). La Jolla, CA: NIPS, 2019: 6781−6792
[74] Lloyd S, Weedbrook C. Quantum generative adversarial learning[J]. Physical Review Letters, 2018, 121(4): 040502 doi: 10.1103/PhysRevLett.121.040502
[75] Kennedy J, Eberhart R. Particle swarm optimization[C]//Proc of ICNN’95. Piscataway, NJ: IEEE, 1995: 1942−1948
[76] Zhu Daiwei, Linke N M, Benedetti M, et al. Training of quantum circuits on a hybrid quantum computer[J]. Science Advances, 2019, 5(10): eaaw9918 doi: 10.1126/sciadv.aaw9918
[77] Spall J C. A one-measurement form of simultaneous perturbation stochastic approximation[J]. Automatica, 1997, 33(1): 109−112 doi: 10.1016/S0005-1098(96)00149-5
[78] Patel Y J, Kundu A, Ostaszewski M, et al. Curriculum reinforcement learning for quantum architecture search under hardware errors[J]. arXiv preprint, arXiv: 2402.03500, 2024
[79] Hansen N, Müller S D, Koumoutsakos P. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES)[J]. Evolutionary Computation, 2003, 11(1): 1−18 doi: 10.1162/106365603321828970
[80] Alcazar J, Leyton-Ortega V, Perdomo-Ortiz A. Classical versus quantum models in machine learning: Insights from a finance application[J]. Machine Learning: Science and Technology, 2020, 1(3): 035003 doi: 10.1088/2632-2153/ab9009
[81] Las Heras U, Alvarez-Rodriguez U, Solano E, et al. Genetic algorithms for digital quantum simulations[J]. Physical Review Letters, 2016, 116(23): 230504 doi: 10.1103/PhysRevLett.116.230504
[82] Kondratyev A. Non-differentiable learning of quantum circuit Born machine with genetic algorithm[J]. Wilmott, 2021, 2021(114): 50−61
[83] Ding Yongcheng, Lamata L, Sanz M, et al. Experimental implementation of a quantum autoencoder via quantum adders[J]. Advanced Quantum Technologies, 2019, 2(7/8): 1800065
[84] Mitarai K, Negoro M, Kitagawa M, et al. Quantum circuit learning[J]. Physical Review A, 2018, 98(3): 032309 doi: 10.1103/PhysRevA.98.032309
[85] He Guangping. Computing the gradients with respect to all parameters of a quantum neural network using a single circuit[J]. arXiv preprint, arXiv: 2307.08167, 2023
[86] Li Jun, Yang Xiaodong, Peng Xinhua, et al. Hybrid quantum-classical approach to quantum optimal control[J]. Physical Review Letters, 2017, 118(15): 150503 doi: 10.1103/PhysRevLett.118.150503
[87] Nakanishi K M, Fujii K, Todo S. Sequential minimal optimization for quantum-classical hybrid algorithms[J]. Physical Review Research, 2020, 2(4): 043158 doi: 10.1103/PhysRevResearch.2.043158
[88] Parrish R M, Iosue J T, Ozaeta A, et al. A Jacobi diagonalization and Anderson acceleration algorithm for variational quantum algorithm parameter optimization[J]. arXiv preprint, arXiv: 1904.03206, 2019
[89] Ostaszewski M, Grant E, Benedetti M. Structure optimization for parameterized quantum circuits[J]. Quantum, 2021, 5: 391 doi: 10.22331/q-2021-01-28-391
[90] Shor P W. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer[J]. SIAM Review, 1999, 41(2): 303−332 doi: 10.1137/S0036144598347011
[91] Huang H Y, Broughton M, Mohseni M, et al. Power of data in quantum machine learning[J]. Nature Communications, 2021, 12(1): 2631 doi: 10.1038/s41467-021-22539-9
[92] Caro M C, Huang H Y, Cerezo M, et al. Generalization in quantum machine learning from few training data[J]. Nature Communications, 2022, 13(1): 4919 doi: 10.1038/s41467-022-32550-3
[93] Chia N H, Gilyén A P, Li Tongyang, et al. Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning[J]. Journal of the ACM, 2022, 69(5): 33
[94] Huang H Y, Kueng R, Torlai G, et al. Provably efficient machine learning for quantum many-body problems[J]. Science, 2022, 377(6613): eabk3333 doi: 10.1126/science.abk3333
[95] Huang H Y, Broughton M, Cotler J, et al. Quantum advantage in learning from experiments[J]. Science, 2022, 376(6598): 1182−1186 doi: 10.1126/science.abn7293
[96] Aharonov D, Cotler J, Qi X L. Quantum algorithmic measurement[J]. Nature Communications, 2022, 13(1): 887 doi: 10.1038/s41467-021-27922-0
[97] Bravyi S, Gosset D, König R. Quantum advantage with shallow circuits[J]. Science, 2018, 362(6412): 308−311 doi: 10.1126/science.aar3106
[98] Farhi E, Neven H. Classification with quantum neural networks on near term processors[J]. arXiv preprint, arXiv: 1802.06002, 2018
[99] Chefles A. Quantum state discrimination[J]. Contemporary Physics, 2000, 41(6): 401−424 doi: 10.1080/00107510010002599
[100] Barnett S M, Croke S. Quantum state discrimination[J]. Advances in Optics and Photonics, 2009, 1(2): 238−278 doi: 10.1364/AOP.1.000238
[101] Čepaitė I, Coyle B, Kashefi E. A continuous variable Born machine[J]. Quantum Machine Intelligence, 2022, 4(1): 6 doi: 10.1007/s42484-022-00063-3
[102] Coyle B, Mills D, Danos V, et al. The Born supremacy: Quantum advantage and training of an Ising Born machine[J]. NPJ Quantum Information, 2020, 6(1): 60 doi: 10.1038/s41534-020-00288-9
[103] Rudolph M S, Toussaint N B, Katabarwa A, et al. Generation of high-resolution handwritten digits with an ion-trap quantum computer[J]. Physical Review X, 2022, 12(3): 031010 doi: 10.1103/PhysRevX.12.031010
[104] Kao P Y, Yang Y C, Chiang W Y, et al. Exploring the advantages of quantum generative adversarial networks in generative chemistry[J]. Journal of Chemical Information and Modeling, 2023, 63(11): 3307−3318 doi: 10.1021/acs.jcim.3c00562
[105] Skolik A, Jerbi S, Dunjko V. Quantum agents in the gym: A variational quantum algorithm for deep Q-learning[J]. Quantum, 2022, 6: 720 doi: 10.22331/q-2022-05-24-720
[106] Skolik A, Mangini S, Bäck T, et al. Robustness of quantum reinforcement learning under hardware errors[J]. EPJ Quantum Technology, 2023, 10(1): 8 doi: 10.1140/epjqt/s40507-023-00166-1
[107] Niu M Y, Zlokapa A, Broughton M, et al. Entangling quantum generative adversarial networks[J]. Physical Review Letters, 2022, 128(22): 220505 doi: 10.1103/PhysRevLett.128.220505
[108] Du Yuxuan, Huang Tao, You Shan, et al. Quantum circuit architecture search for variational quantum algorithms[J]. NPJ Quantum Information, 2022, 8(1): 62 doi: 10.1038/s41534-022-00570-y
[109] Schuld M, Bocharov A, Svore K M, et al. Circuit-centric quantum classifiers[J]. Physical Review A, 2020, 101(3): 032308 doi: 10.1103/PhysRevA.101.032308
[110] Henderson M, Shakya S, Pradhan S, et al. Quanvolutional neural networks: Powering image recognition with quantum circuits[J]. Quantum Machine Intelligence, 2020, 2(1): 2 doi: 10.1007/s42484-020-00012-y
[111] Chu Cheng, Chia N H, Jiang Lei, et al. QMLP: An error-tolerant nonlinear quantum MLP architecture using parameterized two-qubit gates[C/OL]//Proc of the 29th Int Symp on Low Power Electronics and Design. New York: ACM, 2022[2023-11-16]. https://dl.acm.org/doi/abs/10.1145/3531437.3539719
[112] Chen S Y C, Huang C M, Hsing C W, et al. An end-to-end trainable hybrid classical-quantum classifier[J]. Machine Learning: Science and Technology, 2021, 2(4): 045021 doi: 10.1088/2632-2153/ac104d
[113] Pesah A, Cerezo M, Wang S, et al. Absence of barren plateaus in quantum convolutional neural networks[J]. Physical Review X, 2021, 11(4): 041011 doi: 10.1103/PhysRevX.11.041011
[114] Mcclean J R, Boixo S, Smelyanskiy V N, et al. Barren plateaus in quantum neural network training landscapes[J]. Nature Communications, 2018, 9(1): 4812 doi: 10.1038/s41467-018-07090-4
[115] Monteiro C A, Gustavo Filho I, Costa M H J, et al. Quantum neuron with real weights[J]. Neural Networks, 2021, 143: 698−708 doi: 10.1016/j.neunet.2021.07.034
[116] Hu Zhirui, Li Jinyang, Pan Zhenyu, et al. On the design of quantum graph convolutional neural network in the NISQ-era and beyond[C]//Proc of the 40th Int Conf on Computer Design (ICCD). Piscataway, NJ: IEEE, 2022: 290−297
[117] Shepherd D, Bremner M J. Temporally unstructured quantum computation[J]. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2009, 465(2105): 1413−1439 doi: 10.1098/rspa.2008.0443
[118] Amin M H, Andriyash E, Rolfe J, et al. Quantum Boltzmann machine[J]. Physical Review X, 2018, 8(2): 021050 doi: 10.1103/PhysRevX.8.021050
[119] Kieferová M, Wiebe N. Tomography and generative training with quantum Boltzmann machines[J]. Physical Review A, 2017, 96(6): 062327 doi: 10.1103/PhysRevA.96.062327
[120] Huijgen O, Coopmans L, Najafi P, et al. Training quantum Boltzmann machines with the β-variational quantum eigensolver[J]. Machine Learning: Science and Technology, 2024, 5(2): 025017
[121] Khoshaman A, Vinci W, Denis B, et al. Quantum variational autoencoder[J]. Quantum Science and Technology, 2018, 4(1): 014001 doi: 10.1088/2058-9565/aada1f
[122] Huang Changjiang, Ma Hailan, Yin Qi, et al. Realization of a quantum autoencoder for lossless compression of quantum data[J]. Physical Review A, 2020, 102(3): 032412 doi: 10.1103/PhysRevA.102.032412
[123] Cerezo M, Sone A, Volkoff T, et al. Cost function dependent barren plateaus in shallow parametrized quantum circuits[J]. Nature Communications, 2021, 12(1): 1791 doi: 10.1038/s41467-021-21728-w
[124] Dallaire-Demers P L, Killoran N. Quantum generative adversarial networks[J]. Physical Review A, 2018, 98(1): 012324 doi: 10.1103/PhysRevA.98.012324
[125] Tian Jinkai, Sun Xiaoyu, Du Yuxuan, et al. Recent advances for quantum neural networks in generative learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(10): 12321−12340 doi: 10.1109/TPAMI.2023.3272029
[126] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//Proc of the 28th Conf on Neural Information Processing Systems (NeurIPS). La Jolla, CA: NIPS, 2014: 2672−2680
[127] Carleo G, Cirac I, Cranmer K, et al. Machine learning and the physical sciences[J]. Reviews of Modern Physics, 2019, 91(4): 045002 doi: 10.1103/RevModPhys.91.045002
[128] Hu Ling, Wu Shuhao, Cai Weizhou, et al. Quantum generative adversarial learning in a superconducting quantum circuit[J]. Science Advances, 2019, 5(1): eaav2761 doi: 10.1126/sciadv.aav2761
[129] Kim L, Lloyd S, Marvian M. Hamiltonian quantum generative adversarial networks[J]. Physical Review Research, 2024, 6(3): 033019 doi: 10.1103/PhysRevResearch.6.033019
[130] Du Yuxuan, Hsieh M H, Tao Dacheng. Efficient online quantum generative adversarial learning algorithms with applications[J]. arXiv preprint, arXiv: 1904.09602, 2019
[131] Pan Minghua, Wang Bin, Tao Xiaoling, et al. Application of quantum generative adversarial network to the abnormal user behavior detection and evaluation[J]. arXiv preprint, arXiv: 2208.09834, 2022
[132] Watkins C J, Dayan P. Q-learning[J]. Machine Learning, 1992, 8(3/4): 279−292 doi: 10.1023/A:1022676722315
[133] Mnih V, Kavukcuoglu K, Silver D, et al. Playing Atari with deep reinforcement learning[J]. arXiv preprint, arXiv: 1312.5602, 2013
[134] Van Hasselt H, Guez A, Silver D. Deep reinforcement learning with double Q-learning[C]//Proc of the 30th AAAI Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2016: 2094−2100
[135] Jerbi S, Gyurik C, Marshall S, et al. Parametrized quantum policies for reinforcement learning[C]//Proc of the 35th Annual Conf on Neural Information Processing Systems (NeurIPS). La Jolla, CA: NIPS, 2021: 28362−28375
[136] Jerbi S, Cornelissen A, Ozols M, et al. Quantum policy gradient algorithms[J]. arXiv preprint, arXiv: 2212.09328, 2022
[137] Wu Shaojun, Jin Shan, Wen Dingding, et al. Quantum reinforcement learning in continuous action space[J]. arXiv preprint, arXiv: 2012.10711, 2020
[138] Ostaszewski M, Trenkwalder L M, Masarczyk W, et al. Reinforcement learning for optimization of variational quantum circuit architectures[C]//Proc of the 35th Conf on Neural Information Processing Systems (NeurIPS). La Jolla, CA: NIPS, 2021: 18182−18194
[139] Wang S, Fontana E, Cerezo M, et al. Noise-induced barren plateaus in variational quantum algorithms[J]. Nature Communications, 2021, 12(1): 6961 doi: 10.1038/s41467-021-27045-6
[140] Liang Zhiding, Wang Hanrui, Cheng Jinglei, et al. Variational quantum pulse learning[C]//Proc of the 3rd IEEE Int Conf on Quantum Computing and Engineering (QCE). Los Alamitos, CA: IEEE Computer Society, 2022: 556−565
[141] Meitei O R, Gard B T, Barron G S, et al. Gate-free state preparation for fast variational quantum eigensolver simulations[J]. NPJ Quantum Information, 2021, 7(1): 155 doi: 10.1038/s41534-021-00493-0
[142] Liang Zhiding, Cheng Jinglei, Ren Hang, et al. NAPA: Intermediate-level variational native-pulse Ansatz for variational quantum algorithms[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2024, 43(6): 1834−1847 doi: 10.1109/TCAD.2024.3355277
[143] Rattew A G, Hu S, Pistoia M, et al. A domain-agnostic, noise-resistant, hardware-efficient evolutionary variational quantum eigensolver[J]. arXiv preprint, arXiv: 1910.09694, 2019
[144] Havlíček V, Córcoles A D, Temme K, et al. Supervised learning with quantum-enhanced feature spaces[J]. Nature, 2019, 567(7747): 209−212 doi: 10.1038/s41586-019-0980-2
[145] Stilck França D, Garcia-Patron R. Limitations of optimization algorithms on noisy quantum devices[J]. Nature Physics, 2021, 17(11): 1221−1227 doi: 10.1038/s41567-021-01356-3
[146] Chen Yiting, Farquhar C, Parrish R M. Low-rank density-matrix evolution for noisy quantum circuits[J]. NPJ Quantum Information, 2021, 7(1): 61 doi: 10.1038/s41534-021-00392-4
[147] Sharma K, Khatri S, Cerezo M, et al. Noise resilience of variational quantum compiling[J]. New Journal of Physics, 2020, 22(4): 043006 doi: 10.1088/1367-2630/ab784c
[148] Skolik A, Mcclean J R, Mohseni M, et al. Layerwise learning for quantum neural networks[J]. Quantum Machine Intelligence, 2021, 3(1): 5 doi: 10.1007/s42484-020-00036-4
[149] Volkoff T, Coles P J. Large gradients via correlation in random parameterized quantum circuits[J]. Quantum Science and Technology, 2021, 6(2): 025008
[150] Endo S, Cai Z, Benjamin S C, et al. Hybrid quantum-classical algorithms and quantum error mitigation[J]. Journal of the Physical Society of Japan, 2021, 90(3): 032001 doi: 10.7566/JPSJ.90.032001
[151] Bilkis M, Cerezo M, Verdon G, et al. A semi-agnostic Ansatz with variable structure for quantum machine learning[J]. arXiv preprint, arXiv: 2103.06712, 2021
[152] Weber M, Liu Nana, Li Bo, et al. Optimal provable robustness of quantum classification via quantum hypothesis testing[J]. NPJ Quantum Information, 2021, 7(1): 76 doi: 10.1038/s41534-021-00410-5
[153] Du Yuxuan, Hsieh M H, Liu Tongliang, et al. Quantum noise protects quantum classifiers against adversaries[J]. Physical Review Research, 2021, 3(2): 023153 doi: 10.1103/PhysRevResearch.3.023153
[154] Liu Junyu, Wilde F, Mele A A, et al. Stochastic noise can be helpful for variational quantum algorithms[J]. arXiv preprint, arXiv: 2210.06723, 2022
[155] Gentini L, Cuccoli A, Pirandola S, et al. Noise-resilient variational hybrid quantum-classical optimization[J]. Physical Review A, 2020, 102(5): 052414 doi: 10.1103/PhysRevA.102.052414
[156] Zhang Kaining, Liu Liu, Hsieh M H, et al. Escaping from the barren plateau via Gaussian initializations in deep variational quantum circuits[C]//Proc of the 36th Conf on Neural Information Processing Systems (NeurIPS). La Jolla, CA: NIPS, 2022: 18612−18627
[157] Cervera-Lierta A, Kottmann J S, Aspuru-Guzik A. Meta-variational quantum eigensolver: Learning energy profiles of parameterized Hamiltonians for quantum simulation[J]. PRX Quantum, 2021, 2(2): 020329 doi: 10.1103/PRXQuantum.2.020329
[158] Harrow A W, Low R A. Random quantum circuits are approximate 2-designs[J]. Communications in Mathematical Physics, 2009, 291(1): 257−302 doi: 10.1007/s00220-009-0873-6
[159] Holmes Z, Sharma K, Cerezo M, et al. Connecting Ansatz expressibility to gradient magnitudes and barren plateaus[J]. PRX Quantum, 2022, 3(1): 010313 doi: 10.1103/PRXQuantum.3.010313
[160] Sharma K, Cerezo M, Cincio L, et al. Trainability of dissipative perceptron-based quantum neural networks[J]. Physical Review Letters, 2022, 128(18): 180505 doi: 10.1103/PhysRevLett.128.180505
[161] Kashif M, Al-Kuwari S. The impact of cost function globality and locality in hybrid quantum neural networks on NISQ devices[J]. Machine Learning: Science and Technology, 2023, 4(1): 015004 doi: 10.1088/2632-2153/acb12f
[162] Sim S, Johnson P D, Aspuru-Guzik A. Expressibility and entangling capability of parameterized quantum circuits for hybrid quantum-classical algorithms[J]. Advanced Quantum Technologies, 2019, 2(12): 1900070 doi: 10.1002/qute.201900070
[163] Nielsen M A, Dawson C M, Dodd J L, et al. Quantum dynamics as a physical resource[J]. Physical Review A, 2003, 67(5): 052301
[164] Jaques S, Rattew A G. QRAM: A survey and critique[J]. arXiv preprint, arXiv: 2305.10310, 2023
[165] Phalak K, Li Junde, Ghosh S. Trainable PQC-based QRAM for quantum storage[J]. IEEE Access, 2023, 11: 51892−51899 doi: 10.1109/ACCESS.2023.3278600
[166] Fu Xiang, Zheng Yuzhen, Su Xing, et al. A heterogeneous quantum-classical computing system targeting noisy intermediate-scale quantum technology[J]. Journal of Computer Research and Development, 2021, 58(9): 1875−1896 (in Chinese) doi: 10.7544/issn1000-1239.2021.20210368
[167] Verdon G, Pye J, Broughton M. A universal training algorithm for quantum deep learning[J]. arXiv preprint, arXiv: 1806.09729, 2018