
Large Models, Both Good and Evil: An Introduction to the Special Issue on Large Models and Security

虎嵩林, 李涓子, 秦兵, 邱锡鹏, 刘知远

虎嵩林, 李涓子, 秦兵, 邱锡鹏, 刘知远. 亦正亦邪大模型——大模型与安全专题导读[J]. 计算机研究与发展, 2024, 61(5): 1085-1093. DOI: 10.7544/issn1000-1239.qy20240501
虎嵩林, 李涓子, 秦兵, 邱锡鹏, 刘知远. 亦正亦邪大模型——大模型与安全专题导读[J]. 计算机研究与发展, 2024, 61(5): 1085-1093. CSTR: 32373.14.issn1000-1239.qy20240501


In high-performance computing (HPC) applications such as sparse linear system solving, many algorithms involve computing a sequence of sparse matrix-vector multiplications (SpMV): Ax, A²x, …, A^s x. This SpMV sequence is also known as the matrix power kernel (MPK), where the sparse matrix A stays fixed, x is the input vector, and s is a given power [1-3]. For example, communication-avoiding Krylov subspace methods call MPK to compute s Krylov basis vectors in one iteration [4]. Multigrid methods, eigenvalue solvers, and other applications also invoke MPK. Optimizing MPK performance is therefore essential for accelerating HPC applications such as large sparse linear system solvers.

SpMV is a fundamental kernel in HPC, artificial intelligence, and many other fields. Because SpMV is strongly memory-bound on current HPC architectures, much work has focused on optimizing its memory access behavior, for example by designing new storage formats [5-8]. Most existing work optimizes a single SpMV invocation and does not exploit the fact that MPK calls SpMV repeatedly on the same sparse matrix. Exploiting this property enables temporal locality: the sparse matrix A can be reused in cache instead of being reloaded from main memory for every SpMV call, which alleviates the memory-bound nature of SpMV and improves MPK performance.

However, reusing the matrix A across SpMV operations of different powers (e.g., between A^i x and A^(i−1) x) introduces data dependencies. Prior MPK optimization work mainly targets distributed-memory parallel platforms and reuses A to reduce the frequency of communication and synchronization [9-12]. Work on cache data reuse for MPK on shared-memory platforms is comparatively scarce and usually borrows the classic blocking strategies of stencil computations [13-16]; these strategies apply only to structured sparse matrices and, when generalized to arbitrary sparse matrices, incur substantial redundant computation and irregular memory access overhead, hurting MPK parallel efficiency.

This paper proposes a cache-aware MPK method, Ca-MPK, which optimizes MPK performance on shared-memory platforms through cache data reuse:

1) Based on the dependency graph of the sparse matrix, we design an architecture-aware recursive partitioning method. According to the number of parallel cores and the cache size, the matrix graph is partitioned into subgraphs/sub-matrices that fit in cache; separator subgraphs are constructed to decouple data dependencies, and subgraph reordering improves the data locality of SpMV;

2) Based on the partitioning result, we design two SpMV parallelization schemes, row-based and sub-matrix-based, and schedule the SpMV sequence over the sub-matrices in a specific order so that cache data reuse is achieved;

3) We evaluate Ca-MPK with 302 sparse matrices from the SuiteSparse collection on x86 and ARM processors. Compared with the Intel oneMKL library and a state-of-the-art MPK implementation [2], Ca-MPK achieves average speedups of up to 1.57× and 1.40×, respectively. Applying Ca-MPK to communication-avoiding (also called s-step) conjugate gradient and biconjugate gradient stabilized solvers yields average speedups of up to 1.42× and 2.00×, respectively.

Early MPK implementations mainly targeted application scenarios such as communication-avoiding algorithms. Taking sparse linear system solving as an example, traditional Krylov subspace methods usually call SpMV once per iteration to produce one Krylov basis vector, and their parallel performance is typically limited by communication or data access overhead. Communication-avoiding Krylov subspace methods [9-12] call SpMV s times per iteration to produce s Krylov basis vectors, thereby minimizing communication overhead.

Prior MPK optimization work mainly targets distributed-memory platforms and reuses the matrix to reduce the frequency of communication and synchronization. Cache reuse for MPK mostly borrows the classic blocking strategies of stencil computations, which require determining the neighbors of each matrix block for a given power [13-16]. Since stencil computations correspond to structured sparse matrices, generalizing this approach to MPK on arbitrary sparse matrices usually incurs substantial redundant computation and irregular memory access overhead, hurting parallel efficiency. Reference [2] proposes a level-based cache blocking method for MPK that applies to arbitrary sparse matrices and represents the most recent related work. It uses the recursive algebraic coloring engine (RACE) [17] with a distance-k coloring algorithm to partition the matrix rows into levels. This approach satisfies the data dependency requirements of cache data reuse in MPK, but it restricts the granularity of parallelism: parallel execution happens within only one level at a time, so parallelism is low and synchronization overhead is high. To address this, reference [2] introduces optimizations such as point-to-point synchronization and level coarsening, but MPK performance still degrades severely as the power s increases. Reference [1] introduces a computation pipeline and implements a fine-grained cache reuse method that reuses matrix rows rather than matrix blocks. It is relatively simple to implement, but it relies on graph coloring for parallelism, which causes high synchronization overhead and harms data locality [18].

SpMV multiplies a sparse matrix A by a dense input vector x to obtain a dense output vector b. A sparse matrix is usually stored in a compressed format that keeps only its nonzeros and their positions, saving both computation and storage. The upper part of Fig. 1(b) shows the sparse matrix A of Fig. 1(a) stored in the popular CSR (compressed sparse row) format. CSR uses two arrays to store the nonzero values (value) and their column indices (colIdx); in addition, a row pointer array (rowPtr) records where each row starts, from which the number of nonzeros per row follows. Algorithm 1 gives the pseudocode of a serial SpMV based on the CSR format.

Figure 1. Storage format, data dependencies, and data dependency graph of a sparse matrix in MPK

Algorithm 1. Serial SpMV based on CSR.

Input: sparse matrix A, vector x.

Output: vector b.

① for i←0 to N do

②   sum←0;

③   for j←A.rowPtr[i] to A.rowPtr[i+1] do

④     sum←sum+A.value[j]×x[A.colIdx[j]];

⑤   end for

⑥   b[i]←b[i]+sum;

⑦ end for
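
For reference, the following C sketch mirrors Algorithm 1; the CSRMatrix struct and its field names are our own illustration of the value/colIdx/rowPtr arrays described in the text, not code from the paper:

```c
#include <stddef.h>

typedef struct {
    int     nrows;    /* number of rows N                  */
    int    *rowPtr;   /* size nrows+1, row start offsets   */
    int    *colIdx;   /* size nnz, column index per value  */
    double *value;    /* size nnz, nonzero values          */
} CSRMatrix;

/* b += A * x, serial CSR SpMV (Algorithm 1) */
void spmv_csr(const CSRMatrix *A, const double *x, double *b) {
    for (int i = 0; i < A->nrows; i++) {
        double sum = 0.0;
        for (int j = A->rowPtr[i]; j < A->rowPtr[i + 1]; j++)
            sum += A->value[j] * x[A->colIdx[j]];
        b[i] += sum;
    }
}
```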

MPK repeatedly performs SpMV with the same sparse matrix A to obtain Ax, A²x, …, A^s x, or linear combinations thereof such as Ax+A²x. Algorithm 2 gives the pseudocode of a conventional MPK implementation, i.e., s consecutive calls to the SpMV function. If the cache is too small to hold the entire sparse matrix A, then A may have to be loaded from main memory s times.

Algorithm 2. Conventional MPK implementation.

Input: sparse matrix A, 2-D array x, power s.

Output: 2-D array x.

① for i←0 to s do

②   SpMV(A, x[:,i], x[:,i+1]);

③ end for
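
A direct C rendering of Algorithm 2, reusing the hypothetical spmv_csr and CSRMatrix from the sketch above (x is stored column-wise as s+1 vectors of length nrows):

```c
/* Conventional MPK: x[i+1] = A * x[i] for i = 0..s-1.
 * x is an array of s+1 column pointers, each to a vector of length A->nrows.
 * When A exceeds the cache, every call streams A from main memory again. */
void mpk_naive(const CSRMatrix *A, double **x, int s) {
    for (int i = 0; i < s; i++) {
        for (int r = 0; r < A->nrows; r++) x[i + 1][r] = 0.0; /* clear output column */
        spmv_csr(A, x[i], x[i + 1]);
    }
}
```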

MPK is a key step in many HPC applications, including Krylov subspace methods, multigrid methods, polynomial preconditioners, and eigenvalue problems. This paper applies the optimized MPK to the s-step conjugate gradient (CG) method and the biconjugate gradient stabilized (BiCGStab) method. CG and BiCGStab are mainstream methods for solving symmetric positive definite and nonsymmetric sparse linear systems, respectively. Algorithm 3 gives the pseudocode of the s-step CG method [19]. Each MPK call produces s Krylov basis vectors (line ③), so in a distributed-memory setting it can potentially save a factor of s in communication compared with traditional Krylov solvers. For the BiCGStab algorithm, see reference [19].

Algorithm 3. s-step CG method.

Input: symmetric positive definite sparse matrix A, initial guess x_0.

Output: solution vector x.

① r_0←b−Ax_0;

② for i←0 to convergence do /*k=i×s*/

③   T←[r_k, Ar_k, …, A^s r_k]; /*call MPK*/

④   R_i←[r_k, Ar_k, …, A^(s−1) r_k]; /*extract R from T*/

⑤   Q_i←[Ar_k, A²r_k, …, A^(s−1) r_k]; /*extract Q from T*/

⑥   if i==0 then /*compute direction P*/

⑦     P_i←R_i;

⑧   else

⑨     C_i←−Q_i^T P_(i−1); /*block dot product*/

⑩     solve W_(i−1) B_i=C_i; /*compute scalars δ*/

⑪     P_i←R_i+P_(i−1) B_i; /*block update*/

⑫   end if

⑬   W_i←Q_i^T P_i;

⑭   g_i←P_i^T r_k;

⑮   solve W_i a_i=g_i; /*compute scalars α*/

⑯   x_(k+s)←x_k+P_i a_i; /*update solution*/

⑰   r_(k+s)←b−Ax_(k+s); /*compute residual*/

⑱   if ‖r_(k+s)‖_2/‖r_0‖_2 ≤ tol then

⑲     stop

⑳   end if

㉑ end for
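
To make the connection to MPK explicit, line ③ can be realized by seeding column 0 of an (s+1)-column work buffer with r_k and invoking an MPK routine with the interface of Algorithm 2; build_krylov_block and the buffer layout below are illustrative names of ours, not the paper's API:

```c
/* Build T = [r_k, A r_k, ..., A^s r_k] for line ③ of Algorithm 3.
 * work has s+1 columns of length A->nrows; column j holds A^j r_k afterwards. */
void build_krylov_block(const CSRMatrix *A, const double *r_k,
                        double **work, int s) {
    for (int r = 0; r < A->nrows; r++) work[0][r] = r_k[r]; /* column 0 = r_k */
    mpk_naive(A, work, s);   /* or the cache-aware ca_mpk() sketched later */
    /* T is now the s+1 columns work[0..s] */
}
```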

On shared-memory platforms, the matrix can be reused in cache by exploiting the characteristics of MPK, but cache reuse introduces data dependencies between successive SpMV executions. Fig. 1(c) illustrates these dependencies for s=2. Suppose the sub-matrix corresponding to the yellow rows has been loaded into cache to compute Ax; part of that sub-matrix (the red rows) can then be used to compute part of A²x, i.e., results of multiple SpMVs are obtained while the sub-matrix stays in cache. The data dependency that must be satisfied for such reuse is as follows: the column indices contained in the reused matrix rows (red rows) must lie within the range of row indices covered by the original sub-matrix (yellow rows).
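
This reuse condition can be written as a small predicate on a CSR block; the helper below is our own illustration (names are not from the paper):

```c
#include <stdbool.h>

/* A row can be advanced to power i+1 while rows [lo, hi) are cached
 * only if all of its column indices fall inside [lo, hi). */
bool row_reusable(const CSRMatrix *A, int row, int lo, int hi) {
    for (int j = A->rowPtr[row]; j < A->rowPtr[row + 1]; j++) {
        int c = A->colIdx[j];
        if (c < lo || c >= hi) return false;
    }
    return true;
}
```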

This section presents the proposed cache-aware MPK (Ca-MPK) method and its implementation. The core idea of Ca-MPK is to first derive a dependency graph from the sparse matrix, then apply architecture-aware recursive partitioning to obtain subgraphs and separator subgraphs (i.e., sub-matrices), and finally schedule the subgraphs and separator subgraphs in a specific order so that cache data reuse is achieved.

For an arbitrary sparse matrix, a dependency graph (DG) can be constructed to express the data dependencies that may arise from cache data reuse. The lower part of Fig. 1(b) shows the dependency graph G=(V, E) of the sparse matrix A in Fig. 1(a). The dependency graph is an undirected graph in which each vertex v∈V represents a matrix row and each edge e∈E corresponds to a nonzero of that row (diagonal nonzeros are ignored). The edges of the dependency graph capture all possible dependencies between SpMVs in the sequence caused by cache data reuse. For example, Fig. 2(a) shows the specific elements of A^i x on which the computations of A^(i+1)x[0] and A^(i+1)x[6] in two consecutive SpMV calls depend: A^(i+1)x[0] depends on A^i x[0], A^i x[1], and A^i x[3], while A^(i+1)x[6] depends on A^i x[3], A^i x[4], A^i x[6], and A^i x[7]. For a nonsymmetric matrix A, the undirected graph is obtained from its symmetrized form A+A^T, so the dependency graph applies to matrices of all types. Fig. 2(b) shows the data structure corresponding to Fig. 3(b).
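
Under this definition, building the dependency graph from CSR is a two-pass scan. The sketch below assumes a structurally symmetric pattern (a nonsymmetric A would first be symmetrized as A+A^T, which is omitted here) and emits the xadj/adjncy adjacency arrays that graph partitioners expect; it is an illustration, not the authors' code:

```c
#include <stdlib.h>

/* Build the dependency graph in adjacency (xadj/adjncy) form:
 * one vertex per row, one edge per off-diagonal nonzero. */
void build_dependency_graph(const CSRMatrix *A, int **xadj_out, int **adjncy_out) {
    int n = A->nrows;
    int *xadj = malloc((n + 1) * sizeof(int));
    xadj[0] = 0;
    for (int i = 0; i < n; i++) {                /* pass 1: degrees */
        int deg = 0;
        for (int j = A->rowPtr[i]; j < A->rowPtr[i + 1]; j++)
            if (A->colIdx[j] != i) deg++;        /* skip the diagonal */
        xadj[i + 1] = xadj[i] + deg;
    }
    int *adjncy = malloc(xadj[n] * sizeof(int));
    for (int i = 0, p = 0; i < n; i++)           /* pass 2: neighbors */
        for (int j = A->rowPtr[i]; j < A->rowPtr[i + 1]; j++)
            if (A->colIdx[j] != i) adjncy[p++] = A->colIdx[j];
    *xadj_out = xadj;
    *adjncy_out = adjncy;
}
```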

Figure 2. Data dependencies between neighboring powers and the data structure after recursive partitioning in Ca-MPK
Note: the numbers in square brackets in (b) are sub-matrix row indices; [16, 24] and [24, 24] are the separator subgraphs of the first and second recursion, respectively.
Figure 3. Recursive partitioning process of the dependency graph in Ca-MPK

Given the dependency graph of a matrix, this paper proposes an architecture-aware recursive method to partition the graph. The main factors considered are the number of shared-memory parallel cores and the cache size, and the goal is to expose parallelism while satisfying the requirements of cache data reuse. The dependency graph is first partitioned into numPart subgraphs, which can be done with graph partitioning tools such as Metis [20]. For a CSR-based sparse matrix (assuming each nonzero value is stored as an 8-byte double and each integer index as a 4-byte integer), each subgraph contains at most partSize nonzeros:

partSize = C × L2cacheSize / 12 ,  (1)

where L2cacheSize is the L2 cache size of the processor (in bytes) and C is a coefficient (taking values in [0.85, 1.05]) that accounts for other data held in the cache such as row offset pointers. This partitioning guarantees that each subgraph (corresponding to a sub-matrix) fits entirely in the L2 cache. numPart is determined by Eq. (2) and is an integer multiple of the number of processor cores numCore, so that the subgraphs are evenly distributed over the cores:

numPart = ( ⌊(NNZ / partSize) / numCore⌋ + 1 ) × numCore ,  (2)

where NNZ is the total number of nonzeros in the matrix.
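
Eqs. (1) and (2) translate directly into a few lines of C; a minimal sketch under the stated assumption of 12 bytes per nonzero (8-byte value plus 4-byte column index), with illustrative names:

```c
/* Eq. (1): nonzeros per subgraph so that a sub-matrix fits in L2.
 * Eq. (2): number of parts, rounded up to a multiple of the core count. */
void choose_partition_size(long nnz, long l2_cache_bytes, int num_cores,
                           double C, long *partSize, long *numPart) {
    *partSize = (long)(C * (double)l2_cache_bytes / 12.0);                  /* Eq. (1) */
    *numPart  = ((nnz / *partSize) / num_cores + 1) * (long)num_cores;      /* Eq. (2) */
}
```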

After this subgraph partitioning, all vertices that have neighbors in other subgraphs are extracted to form a special subgraph called the separator. The vertices of all subgraphs (including the separator) are then renumbered so that the vertex indices of each subgraph are contiguous and the separator is numbered after the ordinary subgraphs. Renumbering improves the data locality of accesses to the input/output vectors in SpMV.
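
One way to realize these two steps with an off-the-shelf partitioner is the k-way interface of METIS. The paper only states that tools such as Metis can be used, so the call below is our illustration: it assumes METIS 5.x with the default 32-bit idx_t and weights each vertex by its row's nonzero count so that parts are balanced by nonzeros rather than rows; mark_separator then flags every vertex that has a neighbor in a different part:

```c
#include <metis.h>    /* METIS 5.x API */
#include <stdlib.h>

/* Partition the dependency graph (xadj/adjncy from build_dependency_graph)
 * into numPart parts; part[i] receives the part id of row i. */
int partition_graph(const CSRMatrix *A, idx_t *xadj, idx_t *adjncy,
                    idx_t numPart, idx_t *part) {
    idx_t n = A->nrows, ncon = 1, objval;
    idx_t *vwgt = malloc(n * sizeof(idx_t));
    for (idx_t i = 0; i < n; i++)
        vwgt[i] = A->rowPtr[i + 1] - A->rowPtr[i];   /* weight = nonzeros in row i */
    int rc = METIS_PartGraphKway(&n, &ncon, xadj, adjncy, vwgt,
                                 NULL, NULL, &numPart, NULL, NULL,
                                 NULL, &objval, part);
    free(vwgt);
    return rc == METIS_OK ? 0 : -1;
}

/* Mark separator vertices: a row joins the separator if any of its
 * dependency-graph neighbors was assigned to a different part. */
void mark_separator(int n, const idx_t *xadj, const idx_t *adjncy,
                    const idx_t *part, int *in_separator) {
    for (int v = 0; v < n; v++) {
        in_separator[v] = 0;
        for (idx_t e = xadj[v]; e < xadj[v + 1]; e++)
            if (part[adjncy[e]] != part[v]) { in_separator[v] = 1; break; }
    }
}
```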

Fig. 3 shows the graph after two recursive partitioning steps (numPart=4). A tree-pointer data structure treePtr stores the result of the recursive partitioning (i.e., the starting row index of each subgraph) for use in the subsequent parallel implementation.
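
A possible layout for the treePtr nodes, reconstructed from the fields used in Algorithms 4-6 (st/ed, numParts, son, separator); the exact structure in the authors' implementation may differ:

```c
/* Node of the recursive-partitioning tree used by Ca-MPK.
 * Each node covers the (renumbered) row range [st, ed); son[0..numParts-1]
 * are the ordinary subgraphs of this level and separator holds the rows
 * that decouple them. Leaf sub-matrices are sized to fit in L2. */
typedef struct TreeNode {
    int st, ed;                 /* row range of this subgraph            */
    int numParts;               /* number of child subgraphs             */
    struct TreeNode **son;      /* child subgraphs of the next recursion */
    struct TreeNode *separator; /* separator subgraph of this level      */
} TreeNode;
```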

Once the recursive partitioning of the dependency graph is complete, parallel computation is organized according to the partitioning result. Because the separator subgraphs eliminate all edges (data dependencies) between subgraphs, the subgraphs/sub-matrices obtained at each recursion level are independent. These sub-matrices are identified by the starting row indices stored in the tree data structure, and scheduling their execution in a specific order achieves cache data reuse in MPK.

Implementing Ca-MPK requires adapting the standard SpMV implementation (see Algorithm 1). First, SpMV must support execution on a given subgraph/sub-matrix (determined by a row index range of the matrix). Second, in addition to the traditional row-wise SpMV parallelization, parallelization across subgraphs/sub-matrices must be supported.

Algorithm 4. uSpMV function.

Input: sparse matrix A, vector x, tree pointer treePtr.

Output: vector b.

① for i←treePtr->st to treePtr->ed do

②   lines ②~⑥ of Algorithm 1;

③ end for

Algorithm 5. puSpMV function.

Input: sparse matrix A, vector x, tree pointer treePtr.

Output: vector b.

① for i←treePtr->st to treePtr->ed do in parallel

②   lines ②~⑥ of Algorithm 1;

③ end for
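
A C/OpenMP sketch of the two wrappers, restricted to the row range stored in the assumed TreeNode layout and built on the CSR kernel of Algorithm 1 (illustrative, not the authors' code):

```c
#include <omp.h>

/* Algorithm 4: serial SpMV restricted to rows [node->st, node->ed). */
void uSpMV(const CSRMatrix *A, const double *x, double *b, const TreeNode *node) {
    for (int i = node->st; i < node->ed; i++) {
        double sum = 0.0;
        for (int j = A->rowPtr[i]; j < A->rowPtr[i + 1]; j++)
            sum += A->value[j] * x[A->colIdx[j]];
        b[i] += sum;
    }
}

/* Algorithm 5: the same row range, parallelized row-wise. */
void puSpMV(const CSRMatrix *A, const double *x, double *b, const TreeNode *node) {
    #pragma omp parallel for schedule(static)
    for (int i = node->st; i < node->ed; i++) {
        double sum = 0.0;
        for (int j = A->rowPtr[i]; j < A->rowPtr[i + 1]; j++)
            sum += A->value[j] * x[A->colIdx[j]];
        b[i] += sum;
    }
}
```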

Algorithm 6. Ca_MPK function.

Input: sparse matrix A, 2-D array x, power s, tree pointer treeRoot.

Output: 2-D array x.

① if s==1 then

②   puSpMV(A, x[:,0], x[:,1], treeRoot);

③   return;

④ end if

⑤ i←0;

⑥ separator←treeRoot->separator;

⑦ puSpMV(A, x[:,0], x[:,1], separator);

⑧ while i<s do

⑨   for j←0 to treeRoot->numParts do in parallel

⑩     uSpMV(A, x[:,i], x[:,i+1], treeRoot->son[j]);

⑪     if i+1<s then

⑫       uSpMV(A, x[:,i+1], x[:,i+2], treeRoot->son[j]);

⑬     else stop

⑭     end if

⑮   end for

⑯   i←i+2;

⑰   if i<s then

⑱     puSpMV(A, x[:,i−1], x[:,i], separator->separator);

⑲     for j←0 to separator->numParts do in parallel

⑳       uSpMV(A, x[:,i−1], x[:,i], separator->son[j]);

㉑       uSpMV(A, x[:,i], x[:,i+1], separator->son[j]);

㉒     end for

㉓     puSpMV(A, x[:,i], x[:,i+1], separator->separator);

㉔   else

㉕     puSpMV(A, x[:,i−1], x[:,i], separator);

㉖   end if

㉗ end while

Algorithm 6 gives the pseudocode of Ca-MPK. Following the requirements above, the serial SpMV is wrapped in a unified function uSpMV (Algorithm 4) that performs SpMV on a given sub-matrix (rows between treePtr->st and treePtr->ed). The corresponding row-parallel version is wrapped in another function, puSpMV (Algorithm 5). Ca-MPK is built on these two functions: separator subgraphs are processed with puSpMV, i.e., with row-wise parallelism, while the other subgraphs are processed with uSpMV and parallelized across sub-matrices.
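
Putting the pieces together, the following C/OpenMP sketch follows the schedule of Algorithm 6 using the TreeNode, uSpMV, and puSpMV sketches above. It is our reading of the scheduling order rather than the authors' released code; x[i] denotes the i-th column of the 2-D array, and columns 1..s are assumed zero-initialized because the CSR kernel accumulates:

```c
/* Cache-aware MPK (Algorithm 6): x[:,i+1] = A * x[:,i] for i = 0..s-1.
 * Ordinary subgraphs are advanced two powers back-to-back while the
 * sub-matrix is resident in L2; separator rows are advanced with the
 * row-parallel puSpMV before/after each pair of powers. */
void ca_mpk(const CSRMatrix *A, double **x, int s, const TreeNode *treeRoot) {
    if (s == 1) { puSpMV(A, x[0], x[1], treeRoot); return; }

    const TreeNode *separator = treeRoot->separator;
    puSpMV(A, x[0], x[1], separator);              /* power 1 on separator rows */

    int i = 0;
    while (i < s) {
        int last_power = (i + 1 >= s);             /* the "stop" case of line ⑬ */
        #pragma omp parallel for schedule(dynamic, 1)
        for (int j = 0; j < treeRoot->numParts; j++) {
            uSpMV(A, x[i], x[i + 1], treeRoot->son[j]);          /* power i+1   */
            if (!last_power)                                     /* reuse block */
                uSpMV(A, x[i + 1], x[i + 2], treeRoot->son[j]);  /* power i+2   */
        }
        if (last_power) return;  /* odd s: separator rows already reached power s */
        i += 2;
        if (i < s) {
            puSpMV(A, x[i - 1], x[i], separator->separator);
            #pragma omp parallel for schedule(dynamic, 1)
            for (int j = 0; j < separator->numParts; j++) {
                uSpMV(A, x[i - 1], x[i], separator->son[j]);
                uSpMV(A, x[i], x[i + 1], separator->son[j]);
            }
            puSpMV(A, x[i], x[i + 1], separator->separator);
        } else {
            puSpMV(A, x[i - 1], x[i], separator);  /* even s: final power on separator */
        }
    }
}
```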

Fig. 4 shows the execution order of the different subgraphs of Fig. 3(b) when running Algorithm 6. Since every subgraph/sub-matrix fits in the L2 cache and all data conflicts and data dependencies have been eliminated, after finishing power i on the current sub-matrix the standard or symmetric SpMV routine can immediately be scheduled for power (i+1), so the sub-matrix is reused in cache. In Fig. 4, for example, both the subgraphs with execution order 1 and those with execution order 2 are reused in the L2 cache (shown as dashed rectangles). As Fig. 4 shows, except for the separator subgraph produced by the second recursion, most subgraphs achieve cache reuse. Note that although each core in Fig. 4 is assigned one sub-matrix, the number of sub-matrices numPart may be a multiple of the core count; with the execution order above, reuse is still achieved.

Figure 4. Illustration of the execution order in Ca-MPK with power s=5
Note: each box represents a sub-matrix and the number inside is its execution order; sub-matrices with the same number can execute in parallel. Sub-matrices inside a dashed rectangle are reused in cache.

The experimental platforms are an x86 processor and an ARM processor. The x86 processor is an Intel Xeon Gold 6258R running at 2.4 GHz with 28 cores, each with a private 32 KB L1 data cache and a 1024 KB L2 cache, a 39 MB L3 cache, and 128 GB of main memory. The ARM processor is a Kunpeng 920 running at 2.6 GHz with 32 cores, each with a private 64 KB L1 data cache and a 512 KB L2 cache, a 32 MB L3 cache, and 128 GB of main memory. The compiler is Intel oneAPI 2022.1 [21] on x86 and GCC 8.4.1 on ARM, compiling in double precision with the "-O3" option. Each software thread is pinned to a physical core (hyper-threading is not used), and for each matrix the GFLOPS averaged over 100 runs is reported.
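
For reference, the GFLOPS figure for an MPK run is conventionally derived from 2 flops per nonzero per SpMV; a minimal timing sketch (our own harness, using omp_get_wtime) averaged over the stated 100 runs might look as follows:

```c
#include <omp.h>
#include <stdio.h>

/* Time `runs` executions of ca_mpk and report the average GFLOPS,
 * counting 2*nnz flops per SpMV and s SpMVs per MPK call. */
double benchmark_mpk(const CSRMatrix *A, double **x, int s,
                     const TreeNode *root, int runs) {
    long nnz = A->rowPtr[A->nrows];
    double t0 = omp_get_wtime();
    for (int r = 0; r < runs; r++) {
        /* re-zero output columns since the kernel accumulates;
         * this adds a small, streaming-only overhead to the timing */
        for (int c = 1; c <= s; c++)
            for (int i = 0; i < A->nrows; i++) x[c][i] = 0.0;
        ca_mpk(A, x, s, root);
    }
    double avg = (omp_get_wtime() - t0) / runs;
    double gflops = 2.0 * (double)nnz * s / avg / 1e9;
    printf("avg time %.4f s, %.2f GFLOPS\n", avg, gflops);
    return gflops;
}
```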

The test matrices are taken from the SuiteSparse collection [22]. We selected 169 relatively large matrices (more than 100000 rows and more than 200000 nonzeros). Among them, 12 matrices were chosen for detailed comparison with existing methods, as listed in Table 1. With the recursive partitioning method of Section 3.2 and the Ca-MPK implementation, only two partitioning passes over each matrix's dependency graph are needed, so the preprocessing overhead is small, typically amounting to fewer than 5 solver iterations.

Table 1. Detailed Performance Analysis of Twelve Representative Sparse Matrices

No.  Matrix       Rows       Nonzeros    SPD
1    ASIC_320k    321 821    1 931 828
2    ASIC_320ks   321 671    1 316 085
3    ASIC_680ks   682 712    1 693 767
4    Freescale1   3 428 755  17 052 626
5    FullChip     2 987 012  26 621 983
6    G2_circuit   150 102    726 674
7    G3_circuit   1 585 478  7 660 826
8    memchip      2 707 524  13 343 948
9    nxp1         414 604    2 656 177
10   pre2         659 033    5 834 044
11   rajat30      643 994    6 175 244
12   transient    178 866    961 790

The following MPK implementations are compared: 1) MKL, which implements MPK by directly calling the SpMV routine of Intel oneMKL, without cache data reuse; 2) RACE, the MPK of reference [2] based on the recursive algebraic coloring engine (RACE), which exploits cache data reuse and is the most recent publicly reported work; 3) Ca-MPK, the implementation of this paper, which calls a standard CSR-based SpMV and applies the proposed cache data reuse method; 4) CSR, a conventional MPK implementation calling a CSR-based SpMV.

Fig. 5 shows the overall performance of the different MPK versions on all 302 sparse matrices on the x86 processor for powers s=5, 10, 15. Among the three compared MPK versions, Ca-MPK performs best. For s=5, the average speedups of Ca-MPK over MKL and RACE are 1.56 and 1.19; for s=10, 1.57 and 1.26; for s=15, 1.55 and 1.40.

Figure 5. Performance comparison of different MPK implementations with different powers on 302 sparse matrices on the Intel 6258R CPU

Fig. 6 shows the overall performance of the different MPK versions on all 302 sparse matrices on the ARM processor for s=5, 10, 15. Again, Ca-MPK performs best among the three versions. For s=5, the average speedups of Ca-MPK over CSR and RACE are 1.48 and 1.08; for s=10, 1.47 and 1.15; for s=15, 1.47 and 1.20.

Figure 6. Performance comparison of different MPK implementations with different powers on 302 sparse matrices on the Kunpeng 920 CPU

These results show that the advantage of Ca-MPK over RACE grows as the power s increases, whereas RACE's performance degrades with increasing s, consistent with the results reported in reference [2]. The reason is that RACE's cache reuse distance grows with s and may exceed the cache capacity, so reuse can no longer be realized. In contrast, Ca-MPK always reuses cached data between two consecutive SpMV computations and is therefore insensitive to the power s.

Fig. 7 shows the detailed performance of the different MPK versions on the 12 representative sparse matrices on the x86 platform for s=5 and 15. For s=5, the average speedups of Ca-MPK over MKL and RACE are 1.69 (max 2.53, min 1.20) and 1.40 (max 2.52, min 0.81); for s=15, 1.67 (max 2.68, min 1.10) and 2.26 (max 3.99, min 0.84). The right y-axis of Fig. 7 also shows the speedups obtained when Ca-MPK is applied to the CG and BiCGStab solvers, relative to the MKL-based implementations. For s=5, Ca-MPK improves CG and BiCGStab solver performance by 1.37 on average (max 1.43, min 1.30) and 1.42 (max 1.81, min 1.14); for s=15, by 1.38 (max 1.50, min 1.27) and 1.40 (max 1.86, min 1.08), demonstrating the effectiveness of Ca-MPK in real applications.

Figure 7. Performance comparison of different MPK implementations with different powers on 12 matrices on the Intel 6258R CPU

Fig. 8 shows the detailed performance on the 12 representative sparse matrices on the ARM platform. For s=5, the average speedups of Ca-MPK over CSR and RACE are 2.02 (max 7.63, min 0.94) and 1.25 (max 1.92, min 0.78); for s=15, 2.01 (max 7.28, min 0.96) and 1.72 (max 2.94, min 0.79). The right y-axis shows the speedups of the CG and BiCGStab solvers with different MPK implementations relative to the CSR-based implementation. Compared with the CSR-based solvers, for s=5 Ca-MPK improves CG and BiCGStab performance by 2.00 on average (max 2.83, min 1.18) and 1.36 (max 1.81, min 0.97); for s=15, by 2.00 (max 2.79, min 1.21) and 1.37 (max 1.73, min 1.00).

Figure 8. Performance comparison of different MPK implementations with different powers on 12 matrices on the Kunpeng 920 CPU

Fig. 9 shows the data traffic to main memory and to the L2/L3 caches on x86 for the Freescale1 matrix, measured with the performance tool LIKWID [23]. Compared with MKL, Ca-MPK greatly reduces main memory and L3 traffic, confirming that data is successfully reused in L2.

Figure 9. Main memory and cache data traffic for the Freescale1 matrix on x86 with s=15

This paper proposed an optimization method for SpMV sequences (MPK) based on cache data reuse. Starting from the dependency graph of the sparse matrix, an architecture-aware recursive partitioning method divides the dependency graph into subgraphs/sub-matrices that fit in cache; separator subgraphs decouple the data dependencies, and SpMV is scheduled over the sub-matrices in a specific order so that cache data reuse is achieved. Experiments show that, compared with the Intel oneMKL library and the RACE-based MPK implementation, the proposed method achieves considerable average performance improvements.

Future work will test, analyze, and optimize the proposed method on additional architectures (e.g., GPU platforms). We also plan to exploit matrix symmetry and implement MPK with a symmetric SpMV, reducing the memory footprint to further improve MPK performance. Moreover, this work focuses on reusing the matrix across multiple SpMV calls; in principle, any optimization of a single SpMV invocation is complementary to this work and can be integrated directly with our method.


