Abstract:
With the development of the Internet of Things, cloud computing, and Internet public opinion analysis, big data applications are becoming critical workloads in current data centers. Directory caches are used to guarantee cache coherence in chip multiprocessors, which are widely deployed in data centers. Previous research proposed various innovations to improve the capacity utilization and scalability of the directory cache, making it more suitable for high-performance computing. However, big data workloads are latency-sensitive, a requirement that previous work does not satisfy. To meet the requirements of big data workloads, we propose master-slave directory, a novel directory cache design that optimizes the path of memory instructions. In this design, the master directory captures private data accesses and services them directly to reduce miss latency, while the slave directory maintains cache coherence for the shared memory space to improve cache capacity utilization and the scalability of the chip multiprocessor. Our experiments use the CloudSuite-v1.0 benchmark, running on the Simics+GEMS simulator. Compared with a sparse directory of 2× capacity, the experimental results show that master-slave directory reduces hardware overhead by 24.39%, reduces miss latency by 28.45%, and improves IPC by 3.5%. Compared with an in-cache directory, master-slave directory sacrifices 5.14% in miss latency and 1.1% in IPC, but reduces hardware overhead by 42.59%.
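To make the private/shared split concrete, the following is a minimal sketch of how a directory lookup might be routed between the two structures. The paper's abstract does not give pseudocode; the entry layout, the single-sharer classification rule, and all names here are assumptions made purely for illustration.

```c
/* Illustrative sketch only: classification rule and structures are assumed,
 * not taken from the paper. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    unsigned long tag;
    unsigned sharers;   /* assumed bit vector of cores caching the block */
} dir_entry_t;

/* Assumption: a block tracked by exactly one core is treated as private. */
static bool is_private(const dir_entry_t *e) {
    return e->sharers != 0 && (e->sharers & (e->sharers - 1)) == 0;
}

/* Route the lookup: private blocks are serviced by the master directory
 * (shorter path, lower miss latency); shared blocks fall through to the
 * slave directory, which maintains full coherence state. */
static void lookup(const dir_entry_t *e, unsigned long addr) {
    if (is_private(e))
        printf("addr %#lx: master directory services private access\n", addr);
    else
        printf("addr %#lx: slave directory handles shared coherence\n", addr);
}

int main(void) {
    dir_entry_t priv   = { .tag = 0x1000, .sharers = 0x4 }; /* one sharer  */
    dir_entry_t shared = { .tag = 0x2000, .sharers = 0x6 }; /* two sharers */
    lookup(&priv, 0x1000);
    lookup(&shared, 0x2000);
    return 0;
}
```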