

    An Ultra Lightweight Container that Maximizes Memory Sharing and Minimizes the Runtime Environment


       

      Abstract: The rise of container technology has brought profound changes to the data center, and a large amount of software is now deployed and delivered as microservices. Optimizing the startup, operation, and maintenance of large-scale containers in massive-user environments is therefore of broad practical significance. The mainstream container technology represented by Docker has achieved great success, but there is still considerable room for improvement in image size and resource sharing. We review the development of virtualization technology and argue that lightweight virtualization is the future research direction, one that is especially important for data-sensitive applications. By establishing a library-file sharing model, we explore how the degree of library-file sharing affects the maximum number of containers that can be launched. We then present an ultra-lightweight container design that minimizes the runtime environment supporting application execution by refining the granularity of operable resources, and that extracts dependency libraries and executable binaries into separate layers so that containers share the host's memory resources to the greatest extent. Based on this design, we implement an ultra-lightweight container management engine, REG (runtime environment generation), and define a REG-based workflow. Finally, comparative experiments on image size, startup speed, memory usage, and container startup storms verify the effectiveness of the proposed method in large-scale container environments.
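The library-file sharing model mentioned in the abstract can be illustrated with a back-of-the-envelope calculation: when library pages are shared across containers (e.g. via copy-on-write page sharing), they are counted once against host memory rather than once per container, so the maximum container count rises with the degree of sharing. The function and all numbers below are illustrative assumptions for this sketch, not the paper's actual model.

```python
def max_containers(host_mem_mb, private_mb, lib_mb, share_ratio):
    """Estimate how many containers fit in host memory.

    share_ratio: fraction of library pages shared across containers
    (0.0 = every container loads its own copy, 1.0 = fully shared).
    Hypothetical model, not the formula from the paper.
    """
    shared = lib_mb * share_ratio                      # counted once for the host
    per_container = private_mb + lib_mb * (1.0 - share_ratio)
    return int((host_mem_mb - shared) // per_container)

# With 16 GB of RAM, 10 MB of private state, and 90 MB of libraries per
# container, full sharing fits an order of magnitude more containers:
print(max_containers(16384, 10, 90, 0.0))   # → 163  (no sharing)
print(max_containers(16384, 10, 90, 1.0))   # → 1629 (full sharing)
```

Under these assumed numbers, moving the library files into a fully shared layer raises the launchable container count roughly tenfold, which is the effect the abstract's comparative experiments set out to measure.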

       

