ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 January 2009, Volume 46 Issue 1
Paper
Video Semantics Mining Using Multi-Modality Subspace Correlation Propagation
Liu Yanan, Wu Fei, and Zhuang Yueting
2009, 46(1):  1-8. 
Research on content-based multimedia retrieval is motivated by the growing amount of digital multimedia content, of which video data forms a large part. The interaction and integration of multi-modality media types such as visual, audio, and textual data are the essence of video content analysis. Although each single modality expresses only limited semantics on its own, video semantics are fully manifested through the interaction and integration of the modalities. Video data carries rich semantics, such as people, scenes, objects, events, and stories. A great deal of research has focused on utilizing multi-modality features for better understanding of video semantics. Proposed in this paper is a new approach to detecting semantic concepts in video using co-occurrence data embedding (CODE), SimFusion, and locality preserving projections (LPP) on temporally associated, co-occurring multimodal media data in video. The authors' experiments show that by employing these key techniques, the performance of video semantic concept detection can be improved and better video semantics mining results can be obtained.
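As a pointer to the last of the three techniques named above, here is a minimal sketch of locality preserving projections (LPP) in Python. The k-NN graph and heat-kernel weighting are common defaults assumed for illustration; this is not the authors' code.

```python
# Minimal sketch of locality preserving projections (LPP). The k-NN
# affinity graph and heat-kernel weights are standard defaults, not
# details taken from the paper.
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, k=5, t=1.0):
    """X: (n_samples, n_features). Returns a projection matrix (features x components)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.zeros((n, n))
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]             # skip self at index 0
    for i in range(n):
        for j in knn[i]:
            w = np.exp(-d2[i, j] / t)                    # heat-kernel weight
            W[i, j] = W[j, i] = w
    D = np.diag(W.sum(axis=1))
    L = D - W                                            # graph Laplacian
    # Generalized eigenproblem X^T L X a = lambda X^T D X a;
    # eigenvectors with the smallest eigenvalues preserve locality.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])          # regularize for stability
    vals, vecs = eigh(A, B)
    return vecs[:, :n_components]
```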
A Novel Theorem Proving Algorithm Based on Extension Rule
Sun Jigui, Li Ying, Zhu Xingjun, and Lü Shuai
2009, 46(1):  9-14. 
ATP (automated theorem proving) has always been one of the most advanced areas of computer science. The traditional idea in ATP is to try to deduce the empty clause to check satisfiability; resolution-based theorem proving is one of the most popular such methods. Extension-rule-based theorem proving is a newer method that approaches the problem from the opposite direction of resolution. After in-depth research on the extension rule, a notable property of the rule is obtained. In this paper, that property and an algorithm for deciding it are first proposed, and the algorithm's time and space complexity are analyzed and proved. Based on this work, a novel extension-rule-based theorem proving algorithm called NER is proposed. The NER algorithm transforms the problem of deciding whether a clause set is satisfiable into a series of problems deciding whether one literal set includes another, while the original extension-rule algorithm transforms it into counting the number of maxterms that can be expanded. A number of experiments show that the NER algorithm clearly outperforms both the original extension-rule-based algorithm ER and the directional resolution algorithm DR, in some cases by up to two orders of magnitude.
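To make the maxterm-counting idea behind the original extension rule concrete, here is a toy satisfiability check by inclusion-exclusion over covered maxterms; it is an illustrative sketch for small formulas, not the paper's ER or NER algorithm.

```python
# Over n variables, a clause C (with no complementary literals) is
# equivalent to the disjunction of 2**(n - |C|) maxterms, and a clause
# set is unsatisfiable iff its clauses jointly cover all 2**n maxterms.
# Inclusion-exclusion counts the covered maxterms. Toy version only;
# assumes no tautological clauses.
from itertools import combinations

def maxterms_covered(clauses, n):
    """clauses: iterable of sets of signed ints, e.g. {1, -2} for (x1 or not x2)."""
    cl = [frozenset(c) for c in clauses]
    total = 0
    for r in range(1, len(cl) + 1):
        for group in combinations(cl, r):
            union = frozenset().union(*group)
            if any(-lit in union for lit in union):
                continue  # complementary literals: no common maxterm
            total += (-1) ** (r + 1) * 2 ** (n - len(union))
    return total

def unsat(clauses, n):
    return maxterms_covered(clauses, n) == 2 ** n

# Example: {x1} and {not x1} cover both maxterms over one variable -> UNSAT.
print(unsat([{1}, {-1}], 1))  # True
```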
Hybrid Reasoning of Terminological Cycles in Description Logic εL
Jiang Yuncheng, Wang Ju, Zhou Shengming, and Tang Yong
2009, 46(1):  15-22. 
Description logic is a logical reconstruction of frame-based knowledge representation languages, with the aim of providing a simple, well-established declarative semantics to capture the meaning of structured knowledge representations. Terminological cycles have been a difficult topic in the study of description logic for many years, and the basic problems, such as their semantics and reasoning mechanisms, have not yet been settled satisfactorily. The current research progress and the open problems of terminological cycles in description logic are analyzed in this paper. Based on the work of Baader and Brandt, the hybrid reasoning of terminological cycles in the description logic εL is studied further. The syntax and semantics (including fixpoint semantics and descriptive semantics) of hybrid knowledge bases in εL are given. To meet the requirements of hybrid reasoning over terminological cycles in εL, TBox-completion is presented, and description graphs (including the syntax description graph and the semantics description graph) are redefined. Instance checking algorithms for hybrid knowledge bases in εL w.r.t. greatest fixpoint semantics and descriptive semantics are presented using TBox-completion and description graphs. The correctness of the reasoning algorithms is proved, and their complexity is given. The instance checking algorithms thus provide a theoretical foundation for the hybrid reasoning of terminological cycles in εL.
Image Segmentation Based on Self-Organizing Dynamic Neural Network
Shi Chunqi, Shi Zhiping, Liu Xi, and Shi Zhongzhi
2009, 46(1):  23-30. 
Image segmentation is critical to image processing and pattern recognition, and feature-space clustering is an important method for unsupervised image segmentation. The method assumes that an image feature is a constant property of the surface of each object to be segmented within the image and that the feature can be mapped into a geometrical space called the feature space. It further assumes that different objects present in the image manifest themselves as different clusters in the feature space. Therefore, the problem of segmenting the objects of an image can be viewed as that of finding the corresponding clusters in the feature space. Using a novel competitive-learning-based neural network clustering algorithm to cluster the image feature space, an unsupervised image segmentation method is realized. The proposed clustering algorithm, the self-organizing dynamic neural network (SODNN), can grow dynamically and swiftly while adjusting the size of the network accurately, so it is able to find the number of clusters from the input data pattern and obtain more stable and accurate results. Here, the SODNN algorithm is applied to combined color and position features to carry out image segmentation. The presented results show better consistency between the segmentations produced by the proposed method and those produced by humans.
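For illustration, a bare-bones competitive-learning clusterer with dynamic unit growth is sketched below, much simpler than the paper's SODNN; the distance-based growth threshold is an assumption, not a detail from the paper.

```python
# Toy competitive learning with dynamic unit growth: the winning unit
# moves toward each input, and a new unit is spawned when no existing
# unit is close enough. Illustrative only; not the paper's SODNN.
import numpy as np

def competitive_cluster(X, lr=0.05, grow_dist=1.0, epochs=5, rng=None):
    rng = rng or np.random.default_rng(0)
    units = [X[rng.integers(len(X))].copy()]          # start with one unit
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            d = [np.linalg.norm(x - u) for u in units]
            win = int(np.argmin(d))
            if d[win] > grow_dist:
                units.append(x.copy())                # grow: new unit at the input
            else:
                units[win] += lr * (x - units[win])   # move winner toward input
    return np.array(units)                           # one row per discovered cluster
```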
Kernelization of 2-Degree Vertices for Vertex Cover in Random Graphs Based on Subgraphs
Huang Haibin, Yang Luming, Wang Jianxin, Chen Jianer, and Li Shaohua
2009, 46(1):  31-40. 
While an exact solution to the vertex cover problem can be found within the framework of parameterized computation, there are limits in theory and practice, owing to a lack both of analysis of the algorithms' mechanisms and processes and of a grasp of the problem's macroscopic and dynamic properties. On the basis of fixed-parameter tractability, the notion of being d-decision-makable by way of kernelization (d-DMK) for the parameterized vertex cover problem on random graphs is put forward, and a counting method for triangle subgraphs containing 2-degree vertices is presented. From a study of how vertices are shared between subgraphs, the dynamics and evolution of the kernel and of the degree distribution during 2-degree-vertex kernelization are described, from which two deductions are drawn. The first states that the strength of the kernelization algorithm based on 2-degree vertices reaches its maximum in a random graph when the probability of a 2-degree vertex is about 0.75. The second states that the parameterized vertex cover problem (G, k) on a random graph given by φ(x) is 2-DMK when k is smaller than a value determined by φ(x). The simulation results show that the theory not only explains the mechanism of kernelization, but also offers an effective way to analyze this NP-complete problem on random graphs and a new method for solving a class of uncertain problems with known degree distributions.
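The triangle case of 2-degree-vertex kernelization that the abstract analyzes can be sketched as a standard reduction rule; the following toy implementation (using networkx for the graph) shows the rule itself, not the paper's probabilistic analysis.

```python
# Triangle case of 2-degree-vertex kernelization for vertex cover: if a
# degree-2 vertex v has adjacent neighbors u and w (a triangle u-v-w),
# some optimal cover contains u and w, so all three vertices can be
# removed and the parameter k reduced by 2.
import networkx as nx

def reduce_degree2_triangles(G, k):
    """G: nx.Graph (modified in place). Returns (reduced G, remaining k, partial cover)."""
    cover = set()
    changed = True
    while changed and k >= 2:
        changed = False
        for v in list(G.nodes):
            if G.degree(v) == 2:
                u, w = G.neighbors(v)
                if G.has_edge(u, w):                  # triangle found
                    cover.update((u, w))
                    G.remove_nodes_from((v, u, w))
                    k -= 2
                    changed = True
                    break
    return G, k, cover
```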
GPSFM: Generalized Potential Support Features Selection Method
Gao Jun, Wang Shitong, and Deng Zhaohong
2009, 46(1):  41-51. 
Feature selection is one of the fundamental problems in machine learning. Not only can its proper design reduce system complexity and processing time, but it can also enhance system performance in many cases. It becomes even more critical to the success of a machine learning algorithm in problems involving a large number of irrelevant features. The potential support vector machine (P-SVM), a relatively new wrapper feature selection approach, has been applied successfully in several fields. However, examination against the Fisher linear discriminant criterion shows that P-SVM works only when the mean of each class is zero, which makes it difficult to obtain the best decision boundaries for sample data and therefore lowers the classification capability of P-SVM. Based on this finding, a new criterion function with a generalized within-class scatter is adopted in this paper, and a generalized potential support features selection method (GPSFM) is proposed. GPSFM retains the advantages of P-SVM while offering low-redundancy feature selection, high selection speed, and better adaptability. Compared with the traditional P-SVM, the new method therefore has much stronger abilities in both feature selection and classification. The experimental results demonstrate these advantages of the proposed GPSFM method.
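For reference, the Fisher linear discriminant criterion referred to above is commonly written as follows; this is the standard textbook form, not the paper's notation.

```latex
% Standard Fisher criterion, with within-class scatter S_w and
% between-class scatter S_b (textbook form, not the paper's notation):
J(w) = \frac{w^{\top} S_b\, w}{w^{\top} S_w\, w}, \qquad
S_w = \sum_{c}\sum_{x_i \in c}(x_i - \mu_c)(x_i - \mu_c)^{\top}, \qquad
S_b = \sum_{c} n_c\,(\mu_c - \mu)(\mu_c - \mu)^{\top}
```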
A Strategy to Class Imbalance Problem for kNN Text Classifier
Hao Xiulan, Tao Xiaopeng, Xu Hexiang, and Hu Yunfa
2009, 46(1):  52-61. 
Class imbalance is one of the problems plaguing practitioners in the data mining community. First, some strategies for dealing with this problem are reviewed. When the training set is skewed, the popular kNN text classifier tends to mislabel instances of rare categories as common ones, degrading macro F1. To alleviate this, a novel concept, the critical point (CP) of the text training set, is proposed. The properties of the CP are then explored, and an algorithm for evaluating its lower approximation (LA) and upper approximation (UA) is given. The traditional kNN is then adapted by integrating LA or UA and the training-set sizes into the decision functions; this version is called the self-adaptive kNN classifier with weight adjustment. To verify its feasibility, two groups of experiments are carried out. The first compares the performance of different shrink factors, which can be viewed as a comparison with Tan's work, and shows that at LA or UA the classifier exhibits better macro F1. The second compares against random re-sampling, with traditional kNN as a baseline. Experiments on four corpora illustrate that the self-adaptive kNN text classifier with weight adjustment is better than random re-sampling, improving macro F1 markedly. The proposed method is similar to cost-sensitive learning to some extent.
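One simple way to realize the flavor of weight-adjusted kNN under class imbalance is to scale votes by inverse class frequency, as in the sketch below; the paper's critical-point machinery is not reproduced, and the inverse-frequency weight is an illustrative assumption.

```python
# Weight-adjusted kNN under class imbalance: votes from rare classes are
# up-weighted by the inverse of the class size, so a skewed training set
# does not automatically swamp rare categories. Illustrative assumption,
# not the paper's CP-based weighting.
import numpy as np
from collections import Counter

def weighted_knn_predict(X_train, y_train, x, k=5):
    sizes = Counter(y_train)                          # class -> training count
    d = np.linalg.norm(X_train - x, axis=1)
    votes = Counter()
    for i in np.argsort(d)[:k]:
        votes[y_train[i]] += 1.0 / sizes[y_train[i]]  # inverse class frequency
    return votes.most_common(1)[0][0]
```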
Applying Terminology Definition Pattern and Multiple Features to Identify Technical New Term and Its Definition
Xun Endong and Li Cheng
2009, 46(1):  62-69. 
Identification of technical new terms and their definitions is an important research topic in information extraction. It is still a great challenge to provide a scalable solution for large-scale term extraction, because most previous approaches fail to explicitly define the linguistic constituents of terms and the function of their definition patterns. The authors' research shows that in real corpora the occurrences of new technical terms are in most cases accompanied by their definition descriptions. Based on this observation, the linguistic constituents of technical terms and the numerical function of their definitions are defined explicitly. Also presented is a novel statistical approach based on the linguistic pattern of terminology definition (LPTD) to extract Chinese technical new terms and their definitions. The LPTD is first used to delimit the boundaries of technical terms. In the identification phase, both the statistical information of terms and the LPTD features obtained from the preceding filtering process are taken into account in an SVM classifier, integrated into one unified framework. The idea can also serve as a reference for collocation extraction (CE) and can easily be extended to other languages. Compared with previously reported results, this approach achieves a competitive result on real large-scale corpora, at 90.5% precision and 78.1% recall.
Multi-Robot Map Building Based on Hybrid DSm Model
Li Peng, Huang Xinhan, and Wang Min
2009, 46(1):  70-76. 
A new multi-robot system framework and a novel robot control structure are first introduced to solve the problem of multi-robot map building in entirely unknown environments. To handle the large volume of data, a distributed fusion framework is adopted in this multi-robot system; compared with classical architectures, it enhances overall system performance, especially real-time performance. Then several generalized basic belief assignment functions (gbbaf) and a hybrid DSm model of sonar are constructed to deal with the uncertain, imprecise, and sometimes even highly conflicting information obtained by sonar sensors, applying the new information fusion method of Dezert-Smarandache theory (DSmT). DSmT extends Bayesian theory and Dempster-Shafer theory (DST), and is adapted here to the characteristics of sonar sensors. Finally, Pioneer II mobile robots are used to carry out map building experiments for both single-robot and multi-robot systems, and a 3D generalized basic belief assignment map is rendered with OpenGL. Comparison of the created map with the real map testifies to the validity of the hybrid DSm model and the efficiency of the proposed multi-robot system for fusing imprecise information and building maps.
Algorithms to Find the Set of Relevant Views and Quasi-Identifiers for K-Anonymity Method
Song Jinling, Liu Guohua, Huang Liming, and Zhu Caiyun
2009, 46(1):  77-88. 
The quasi-identifier is a key factor affecting the validity of the k-anonymity method. If the quasi-identifier is not identified correctly, the published view may still disclose secret information even though it has been k-anonymized on that quasi-identifier. In the procedure of publishing views, an important problem in identifying quasi-identifiers is how to find, among the set of published views, all the views relevant to the view being published. By mapping the set of published views and the publishing view into a hypergraph, the problem of finding the set of relevant views can be converted into searching for all paths containing a special edge between two given nodes in the hypergraph. In this paper, the method mapping the views to a hypergraph, together with the related lemma and decision theorems, is presented first, and an algorithm for finding the set of relevant views based on the hypergraph is proposed. Second, the composition of quasi-identifiers for base tables with and without functional dependencies (FDs) is studied, and the characteristics of quasi-identifiers are generalized. Algorithms to identify the quasi-identifiers of the publishing view based on the set of relevant views are then proposed for the cases with and without FDs. Finally, the correctness of all the algorithms is proved and their time complexity is analyzed. The algorithms for finding the set of relevant views and the quasi-identifiers help ensure the successful application of the k-anonymity method.
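The graph-search reformulation can be illustrated with ordinary graphs standing in for hypergraphs: enumerate simple paths between two nodes and keep those passing through a designated edge. The sketch below is illustrative only and omits the paper's hypergraph construction and theorems.

```python
# Enumerate all simple paths from s to t that pass through a designated
# "special" edge, using an ordinary graph as a stand-in for the paper's
# hypergraph. Illustrative only.
import networkx as nx

def paths_through_edge(G, s, t, special_edge):
    u, v = special_edge
    for path in nx.all_simple_paths(G, s, t):
        # Collect the path's edges in both orientations (undirected check).
        edges = set(zip(path, path[1:])) | set(zip(path[1:], path))
        if (u, v) in edges:
            yield path
```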
A Computer Network Defense Policy Specification Language
Xia Chunhe, Wei Yudi, Li Xiaojian, Wang Haiquan, and He Wei
2009, 46(1):  89-99. 

Policy is an essential part of computer network defense, providing important direction for the deployment, implementation, configuration, and effect of defense systems. At present, models and specifications for access control policy work well, but they cannot be applied directly to the whole defense policy area. In this paper, a new computer network defense policy specification language called CNDPSL is proposed to provide a common method of specifying protection, detection, and response policies, according to a newly defined model called CNDPM, which extends Or-BAC (the organization-based access control model). In CNDPM, an automatic assignment mechanism is introduced to improve efficiency, and derivation principles are presented to refine abstract policies into concrete rules. Moreover, the completeness, validity, and consistency of policies are formally analyzed and demonstrated. CNDPSL is declarative and able to abstract the defense control behaviors of a network, which makes the language flexible, extensible, and adaptable to network defense requirements. Finally, a policy engine is implemented. Detailed experiments on the GTNetS platform indicate that CNDPSL can be refined automatically into concrete technical rules, such as ACLs (access control lists) in firewalls, IDS detection rules, and response rules, and achieves the defense effects it expresses. These results demonstrate its effectiveness and efficiency.

Paper
Reversible R-S Digital Watermarking Using the Subliminal Channel
Zhao Xianfeng, Li Ning, and Huang Wei
2009, 46(1):  100-107. 
Because of the difficulty of creating adequate redundant capacity for watermarking, some reversible watermarking schemes for content authentication embed only a symmetrically encrypted hash value, which is significantly smaller than a digital signature. This makes a dishonest verifier capable of fabricating legitimate content. However, this research finds that introducing a public-key signature, which is usually several times longer than a hash code, requires only a few bytes of extra capacity if the subliminal channel in the signature scheme can be exploited, and it is shown that RSA-PSS (RSA probabilistic signature scheme) has such an exploitable subliminal channel. An improved reversible R-S (regular-singular) watermarking scheme adopting RSA-PSS is then proposed. It still produces the redundant watermarking capacity by compressing the R-S vector, but stores part or all of the compressed R-S vector in the subliminal channel. As a result, the new scheme needs only 4 bytes of additional capacity. With the reduced size of the overall embedded data, the new scheme can even partition an original image into several tens of blocks and embed data block-wise, giving it the capability of localizing tampering. And because it reveals no information directly about the private key, it also provides reliable security.
P2P Content Distribution with Network Coding
Lei Yingchun, Cheng Shi, Wu Chanle, Gong Yili, and Kang Qing
2009, 46(1):  108-119. 
Network coding is a field of information theory and coding theory, and a method of attaining the maximum information flow in a network. Many researchers believe that network coding can help improve P2P network performance, though others remain skeptical. Discussed here is the application of network coding to P2P content distribution systems. It is concluded that network coding can reduce the complexity of piece selection in P2P content distribution and increase the utilization of network resources; however, unless it is combined with good neighbor selection and choking/unchoking, its advantage cannot be realized. How to apply network coding to a P2P content distribution system is presented in detail, with further discussion of the main implementation problems. To verify the feasibility of a P2P content distribution system based on network coding, the main encoding and decoding operations of network coding are implemented and their system resource consumption is analyzed. The results are as follows: the encoding operation consumes 2.25% of the CPU and 20MB of memory at an upload rate of 50KBps. This indicates that the system overhead of network coding is relatively low, and that applying network coding to P2P content distribution is entirely feasible.
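For a sense of what the encoding operation involves, here is a toy random linear network coding encoder over GF(2), where each coded block is an XOR of a random subset of pieces tagged with its coefficient vector; real systems (and presumably the implementation measured here) work over GF(2^8), and decoding is Gaussian elimination over the collected coefficient vectors.

```python
# Toy random linear network coding encoder over GF(2): a coded block is
# the XOR of a random subset of the original pieces, shipped together
# with its coefficient vector. Decoding (not shown) is Gaussian
# elimination. GF(2) keeps the sketch short; deployments use GF(2^8).
import random

def encode(pieces):
    """pieces: list of equal-length bytes objects. Returns (coeffs, coded block)."""
    coeffs = [random.randint(0, 1) for _ in pieces]
    if not any(coeffs):                       # avoid the useless all-zero vector
        coeffs[random.randrange(len(pieces))] = 1
    block = bytes(len(pieces[0]))             # zero-filled accumulator
    for c, p in zip(coeffs, pieces):
        if c:
            block = bytes(a ^ b for a, b in zip(block, p))
    return coeffs, block

# Example: two independent coded blocks suffice to reconstruct two pieces.
coeffs, block = encode([b"hello world!", b"network code"])
```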
A Receiver-Based Cross-Layer Forwarding Protocol for Mobile Sensor Networks
Li Lian, Jiang Wentao, Sun Limin, and Fan Xiaozhong
2009, 46(1):  120-128. 
Since nodes can move randomly in mobile sensor networks, the network topology changes frequently, and generally there is no stable communication path between a source node and the sink node, even over a short interval. Data forwarding protocols therefore face great challenges in mobile sensor networks. Receiver-based routing protocols do not require establishing an end-to-end route from the source node to the sink node; instead, the neighbor nodes of the sender contend for the forwarding right under certain rules, and only the contention winner may forward the data. Existing receiver-based routing protocols can be applied to mobile sensor networks, but they still suffer from disadvantages in forwarding priority calculation, data collision, multicast suppression, and routing void bypass. Taking the characteristics of mobile sensor networks into account, a receiver-based cross-layer forwarding protocol (RCF) is proposed in this paper. The RCF protocol optimizes the forwarding priority calculation and uses a self-adaptive mechanism for forwarding right contention. It also handles data collision and multicast suppression better through a dual-channel communication model, and puts forward an efficient routing void bypass mechanism. The simulation results show that RCF performs well in terms of communication cost, forwarding delay, and reliability.
Path Planning for Mobile Anchor Node in Localization for Wireless Sensor Networks
Li Hongjun, Bu Yanlong, Xue Han, Li Xun, and Ma Hongxu
2009, 46(1):  129-136. 
Localization is a key technology in wireless sensor networks (WSN); many applications and middleware require the sensor nodes to know their locations, and the accuracy of collected data can be significantly affected by imprecise positioning of the event of interest. The main idea in most localization methods is that some nodes with known coordinates (e.g., GPS-equipped nodes) transmit beacons containing their coordinates to help other nodes localize themselves. A fundamental research issue in this setting is planning the path that a mobile anchor should travel. In this paper the authors study the path planning of the mobile anchor for localization in wireless sensor networks. From a graph-theoretic viewpoint, the wireless sensor network is regarded as a connected undirected graph, and the path planning problem is translated into constructing a spanning tree and traversing the graph. Breadth-first (BRF) and backtracking greedy (BTG) algorithms for spanning tree construction are proposed, and the differences in performance between the two algorithms are analyzed. The BRF and BTG algorithms are effective, realistic algorithms that work in actual implementations. Simulations and real experiments show that these algorithms obtain higher localization precision and are robust for randomly deployed sensor nodes.
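Assuming BRF builds a breadth-first spanning tree of the connectivity graph (the abstract does not spell out its details), a minimal version looks like this:

```python
# Breadth-first spanning tree of a connectivity graph, sketched under the
# assumption that this is the core of a BRF-style algorithm.
from collections import deque

def bfs_spanning_tree(adj, root):
    """adj: dict node -> iterable of neighbors. Returns the list of tree edges."""
    seen, tree, q = {root}, [], deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tree.append((u, v))   # tree edge from parent u to child v
                q.append(v)
    return tree
```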
A Routing Protocol of Wireless Mesh Network Based on Weighted Link State
Fu Yunqing, Wang Songjian, and Wu Zhongfu
2009, 46(1):  137-143. 
Wireless mesh networks are emerging as a key technology for next-generation wireless networking. Because their standardization is still under way, current routing protocols for wireless mesh networks largely reuse algorithms from ad hoc networks; however, experiments and applications show that ad hoc routing protocols are not well suited to wireless mesh networks. On the basis of an analysis of the flaws of the AODV routing protocol, a novel routing protocol for wireless mesh networks, MODVWLS, is proposed to solve this problem. A weighted link state, composed of a node's available bandwidth, throughput, and buffer saturation, is used as the routing metric in MODVWLS. Each node periodically calculates the cost (weight) of transmission to its next hop, and the path with the minimal accumulated weight from source to destination is chosen as the preferred route. In order to make full use of idle nodes and balance the network load dynamically, an overload alarm mechanism is adopted to initiate new route discovery. Weight calculation, message formats, route discovery, and the maintenance process are also introduced in detail. Finally, several simulations conducted in ns-2 show that MODVWLS outperforms AODV in packet delivery fraction, end-to-end delay, and normalized routing load, and is thus more appropriate for wireless mesh networks.
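A hypothetical form of such a weighted metric, combining the three quantities the abstract lists, might look like the following; the coefficients and the exact combination are illustrative assumptions, not values from the paper.

```python
# Hypothetical weighted-link-state cost in the spirit of MODVWLS: high
# available bandwidth and throughput lower the cost, a saturated buffer
# raises it. Coefficients a, b, c are illustrative assumptions.
def link_weight(avail_bw, throughput, buffer_saturation, a=0.5, b=0.3, c=0.2):
    eps = 1e-9   # guard against division by zero
    return a / max(avail_bw, eps) + b / max(throughput, eps) + c * buffer_saturation
```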
Exemplar-Based Image Completion Using Global Optimization
Chen Zhonggui, Liu Ligang, and Wang Guojin
2009, 46(1):  144-150. 
Image completion, which aims to remove objects or recover damaged portions of a given image, is an important task in photo editing. Recently, exemplar-based methods have been used to complete images with large portions removed; however, structural inconsistency of the reconstructed texture often appears with those methods. In this paper, a new exemplar-based algorithm is proposed that obtains global texture consistency through global optimization. First, an energy function is defined to measure the quality of the reconstructed region. Then the image completion problem is formulated as minimization of this energy function, solved iteratively. Finally, slight color differences between the known region and the filled region are corrected by Poisson image editing. Compared with existing exemplar-based methods that perform greedy region-growing, the proposed method not only reconstructs the local color texture of the missing region, but also preserves the global structural texture of the image. An adaptive sampling method based on the saliency map of the image is also adopted to construct the search space; it dramatically reduces the search space and accelerates the nearest neighbor search. The effectiveness of the proposed method is demonstrated on several examples and comparisons.
A Real-Time Algorithm for Character Reactive Animation Generation
Pan Zhigeng, Cheng Xi, and Tang Bing
2009, 46(1):  151-158. 
By combining motion capture data and dynamic simulation, realistic character animations can be obtained that respond interactively to contact forces in the environment. However, previous character animation generation methods spend so much time searching for the appropriate motion capture sequences in the database that motion generation cannot run online; moreover, animators need considerable manual adjustment to achieve the final realistic character animation. The authors present a parallel algorithm of two processes and employ an artificial neural network to predict and pre-classify the recovery motion database in order to reduce the size of the search region. The artificial neural network is trained offline on a set of recovery motion capture sequences, and the database is classified according to the characters' recovery motion strategies. The network accepts several key DOFs of the character's body segments as input and outputs the subset label of the recovery motion sequence database. In addition, the matching algorithm for searching motion capture sequences is improved. The experiments demonstrate that characters can be controlled and switched naturally between motion capture and dynamic simulation control modes, and that the system can generate reactive virtual character animation in real time.
Study of Loading Strategy in Shared-Nothing Event Stream Parallel Database Systems
Liu Ying, Wang Qirong, and Sun Ninghui
2009, 46(1):  159-166. 
Skew is one of the most important problems in parallel systems and has a great impact on their performance. An event stream system is the back-end data processing and analysis system of a data stream management system (DSMS). It differs from traditional database systems in its workload characteristics: it must receive continuous, fast-arriving, high-volume event stream data on one side while supplying quick responses to users' queries on the other. Under such conditions, the common data redistribution solutions to data skew are no longer suitable. In this paper a periodical-counting-based capability-aware (PCCA) loading strategy is presented, built on DBroker, a shared-nothing event stream parallel database system for backbone network monitoring. This loading strategy not only keeps the event stream data loading fast and correct, but also recognizes and prevents skew automatically and adaptively, according to the loading capability of each node; moreover, it lays a good data distribution foundation for query service. Finally, the PCCA loading strategy is shown to provide much better performance than three other methods, in both simulation model analysis and real system testing.
The Tradeoff Cache Between Latency and Capacity in Chip Multiprocessors
Xiao Junhua, Feng Zijun, and Zhang Longbing
2009, 46(1):  167-175. 
Chip multiprocessors (CMP) have become the mainstream microprocessor architecture. In a CMP, the cache, especially the last level cache, is critical to performance and has become a focus of current research. A CMP cache faces the conflicting requirements of latency and capacity, and has to trade off between techniques that reduce off-chip misses and those that reduce cross-chip misses: a private cache design minimizes cache access latency but reduces the total effective cache capacity, while a shared cache design maximizes the effective cache capacity but incurs long hit latencies. In this paper, a CMP cache design called TCLC (tradeoff cache between latency and capacity) is proposed. TCLC is a hybrid private/shared design that dynamically identifies the shared type of cache blocks and optimizes each type accordingly: private blocks are optimized with a migration policy, shared read-only blocks with a replication policy, and shared read-write blocks with a center placement policy. TCLC aims to make cache access latency close to that of the private design and effective capacity close to that of the shared design, mitigating the impact of wire delay and reducing the average memory access latency. The experimental results indicate that TCLC performs 13.7% better than a private cache and 12% better than a shared cache.