
    A Highly Scalable Parallel Algorithm for 3D Prestack Kirchhoff Time Migration

    • Abstract: To cope with ever-growing seismic data volumes and the increasing parallel scale of cluster systems, a multi-dimensional imaging-space decomposition algorithm is proposed. Exploiting the multiple levels of parallelism in large-scale cluster systems, the imaging space is first decomposed along the offset direction; it is then further split along the in-line direction until each imaging block is smaller than the physical memory of a compute node; finally, the imaging space is decomposed over the 2D surface in units of CMP bins. In the implementation, each common-offset imaging space is mapped onto a group of compute nodes, and within a node the bins are divided evenly among the CPU cores in round-robin fashion. Without increasing the volume of data communication, this parallel algorithm lowers the memory requirement, reduces communication overhead and synchronization time, and improves data locality. Tests on field data show that it achieves better performance and scalability than the traditional output-parallel and input-parallel algorithms, and it still delivers good speedup when a job is scheduled on up to 497 nodes and 7,552 threads. A small sketch of this decomposition is given below.
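    The following C program is a minimal sketch of the multi-level decomposition described above, not the authors' code. All figures in it (survey dimensions, 32 GiB of node memory, 16 cores per node) are assumptions chosen for illustration: it computes how many in-line blocks are needed so that one common-offset imaging block fits in node memory, and then shows the round-robin assignment of CMP bins to CPU cores.

```c
/* Sketch (assumed, not from the paper) of the three-level imaging-space
 * decomposition: (1) split by common offset, (2) split along the in-line
 * axis until one common-offset block fits in node memory, (3) round-robin
 * the CMP bins of a block over the CPU cores of a node.
 * Assumes a 64-bit build.                                                */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    size_t n_offsets;     /* number of common-offset planes            */
    size_t n_inline;      /* CMP bins along the in-line direction      */
    size_t n_xline;       /* CMP bins along the cross-line direction   */
    size_t n_samples;     /* time samples per imaging trace            */
} Survey;

/* Bytes needed to hold one common-offset imaging block of n_il in-lines. */
static size_t block_bytes(const Survey *s, size_t n_il)
{
    return n_il * s->n_xline * s->n_samples * sizeof(float);
}

/* Level 2: smallest number of in-line blocks such that each
 * common-offset imaging block fits within node_mem bytes.               */
static size_t inline_blocks(const Survey *s, size_t node_mem)
{
    size_t blocks = 1;
    while (block_bytes(s, (s->n_inline + blocks - 1) / blocks) > node_mem)
        blocks++;
    return blocks;
}

int main(void)
{
    /* Example figures, not taken from the paper. */
    Survey s = { 120, 2000, 1500, 3000 };
    size_t node_mem = (size_t)32 << 30;    /* 32 GiB per node    */
    size_t cores    = 16;                  /* CPU cores per node */

    size_t blocks       = inline_blocks(&s, node_mem);
    size_t il_per_block = (s.n_inline + blocks - 1) / blocks;

    printf("offset groups  : %zu\n", s.n_offsets);
    printf("in-line blocks : %zu (%zu in-lines each, %.1f GiB)\n",
           blocks, il_per_block,
           block_bytes(&s, il_per_block) / (double)(1ULL << 30));

    /* Level 3: round-robin CMP bins of one block over the node's cores. */
    size_t bins = il_per_block * s.n_xline;
    for (size_t core = 0; core < cores; core++) {
        size_t owned = bins / cores + (core < bins % cores ? 1 : 0);
        if (core < 3 || core == cores - 1)
            printf("core %2zu takes every %zu-th bin starting at %zu (%zu bins)\n",
                   core, cores, core, owned);
    }
    return 0;
}
```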

       

      Abstract: To support increasing survey sizes and processing complexity, we propose a practical approach that implements large-scale parallel processing of 3D prestack Kirchhoff time migration (PKTM) on clusters of multi-core nodes. The parallel algorithm is based on a three-level decomposition of the imaging space. First, the imaging space is partitioned by offset. Each node runs a single process, and all processes are divided into several distinct groups. The imaging work of a common-offset space is assigned to one group, and the common-offset input traces are dynamically distributed to the processes of that group. Once all input traces have been migrated, the local imaging sections of all processes in the group are summed to form the final common-offset image. Within a node, the common-offset imaging section is further partitioned equally by common midpoint (CMP) into as many blocks as there are CPU cores; the computing threads share the same input traces, with each thread spreading the sampled amplitudes onto a different set of imaging points. If the size of a common-offset imaging section exceeds the total physical memory of a compute node, the whole imaging space is first partitioned along the in-line direction so that each common-offset imaging block fits in memory. The algorithm greatly reduces the memory requirement, introduces no overlapping input traces between processes, and makes fault tolerance easy to implement. An implementation of the algorithm demonstrates high scalability and excellent performance in our experiments with field data: parallelism scales efficiently to 497 nodes and 7,552 threads.
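      The process-level organization described above (one process per node, processes split into common-offset groups, local imaging sections summed into the group's image) could be expressed in MPI roughly as in the sketch below. This is an assumed illustration, not the authors' implementation: the number of offset groups, the image size, and the placeholder migrate_traces step are hypothetical.

```c
/* Sketch (assumed, not from the paper) of the process-group pattern:
 * one MPI process per node, processes split into common-offset groups,
 * traces distributed inside a group, and the local imaging sections
 * summed to form the group's common-offset image.                      */
#include <mpi.h>
#include <stdlib.h>

#define N_OFFSET_GROUPS 8          /* assumed number of offset groups   */
#define IMAGE_SIZE (1 << 20)       /* floats per local imaging section  */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Level 1: assign this process (node) to one common-offset group. */
    int group_id = world_rank % N_OFFSET_GROUPS;
    MPI_Comm group_comm;
    MPI_Comm_split(MPI_COMM_WORLD, group_id, world_rank, &group_comm);

    int group_rank, group_size;
    MPI_Comm_rank(group_comm, &group_rank);
    MPI_Comm_size(group_comm, &group_size);

    float *local_image = calloc(IMAGE_SIZE, sizeof(float));
    float *group_image = (group_rank == 0)
                       ? calloc(IMAGE_SIZE, sizeof(float)) : NULL;

    /* The common-offset traces for group_id would be read and migrated
     * here; each process of the group handles a disjoint share of the
     * traces and accumulates contributions into its private local_image.
     * (Hypothetical helper.)                                             */
    /* migrate_traces(group_id, group_rank, group_size, local_image);     */

    /* Sum the local sections to form the final common-offset image,
     * as the abstract describes.                                         */
    MPI_Reduce(local_image, group_image, IMAGE_SIZE, MPI_FLOAT,
               MPI_SUM, 0, group_comm);

    free(local_image);
    free(group_image);
    MPI_Comm_free(&group_comm);
    MPI_Finalize();
    return 0;
}
```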

       
