Abstract:
With the explosive growth of multimedia data, data in the cloud has become large-scale and heterogeneous. Conventional storage systems serving data analysis suffer from long read latency because they lack semantic management of the stored data. To solve this problem, a cross-modal image and text content sifting storage (CITCSS) mechanism is proposed, which saves read bandwidth by reading only the relevant data. The mechanism consists of an off-line stage and an on-line stage. In the off-line stage, the system first uses a self-supervised adversarial hash learning algorithm to learn hash codes for the stored data, so that semantically similar items are mapped to similar hash codes. These hash codes are then linked according to their Hamming distances and managed as metadata. In our implementation, Neo4j is used to construct the semantic hash code graph, and storage paths are inserted into node properties to accelerate reading. In the on-line stage, the mechanism first maps the image or text that represents the analysis requirement into hash codes and sends them to the semantic hash code graph. The relevant data are then found on the graph within a sifting radius and finally returned to the user. Benefiting from this mechanism, storage systems can perceive and manage semantic information, thereby providing improved service for data analysis. Experimental results on public cross-modal datasets show that CITCSS reduces read latency by 99.07% to 99.77% while maintaining a recall rate above 98%, compared with conventional semantic storage systems.