    A Cross-Layer Memory Tracing Toolkit for Big Data Applications Based on Spark

    • Abstract: The arrival of the big data era brings new challenges to information processing, and Spark, with its efficient in-memory distributed programming model, has significantly improved data-processing performance and is increasingly adopted by industry for big data analytics. Performance analysis and optimization of Spark can be carried out at the application layer, the system layer, and the hardware layer, but existing work is confined to a single layer, which decouples Spark semantics from the underlying actions: for example, without knowing how operating system parameters affect performance at the Spark application layer, the many flexible OS configuration parameters cannot be used to tune the system. To address this problem, we design SMTT, a Spark memory tracing toolkit that spans the Spark layer, the JVM layer, and the OS layer and links the semantics of the upper-level application to the underlying physical memory. Tailored to the characteristics of Spark memory, SMTT provides separate tracing schemes for execution memory and storage memory. Using SMTT, we analyze memory usage during Spark's iterative computation and the execution/storage memory usage process across the Spark, JVM, and OS layers, and, taking RDDs as an example, we measure the ratio of read to write operations in Spark in both single-node and multi-node settings. The results show that SMTT provides effective support for performance analysis and offers guidance for the optimization of the Spark memory system.
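    To make the cross-layer view above concrete, the following Scala sketch (an illustration only, not the SMTT implementation described in the paper) caches an RDD and then reads memory usage at each of the three layers. It assumes Spark running in local mode on Linux, so that the driver, the executor JVM, and the OS process coincide; SparkContext.getRDDStorageInfo is a Spark DeveloperApi, and /proc/self/status is Linux-specific.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel
    import scala.io.Source

    // Minimal cross-layer sketch (not the SMTT implementation): cache an RDD,
    // then read memory usage at the Spark, JVM and OS layers side by side.
    object CrossLayerMemorySample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("cross-layer-sample").setMaster("local[*]"))

        // Spark layer: storage memory occupied by a cached RDD.
        val rdd = sc.parallelize(1 to 1000000).map(i => (i, i.toString * 10))
        rdd.persist(StorageLevel.MEMORY_ONLY)
        rdd.count()  // force materialization so the blocks are actually cached
        val sparkStorageBytes = sc.getRDDStorageInfo.map(_.memSize).sum

        // JVM layer: heap currently in use by this (local-mode) process.
        val rt = Runtime.getRuntime
        val jvmHeapBytes = rt.totalMemory() - rt.freeMemory()

        // OS layer (Linux only): resident set size of the same process from /proc.
        val vmRssKb = Source.fromFile("/proc/self/status").getLines()
          .find(_.startsWith("VmRSS:"))
          .map(_.replaceAll("[^0-9]", "").toLong)
          .getOrElse(0L)

        println(f"Spark storage memory: ${sparkStorageBytes / 1024.0 / 1024.0}%.1f MB")
        println(f"JVM heap in use:      ${jvmHeapBytes / 1024.0 / 1024.0}%.1f MB")
        println(f"OS resident set size: ${vmRssKb / 1024.0}%.1f MB")

        sc.stop()
      }
    }

    SMTT, as described above, goes beyond such point-in-time sampling: it also traces execution memory and correlates the three layers throughout iterative computation rather than reading aggregate totals once.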

       
