ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 March 2010, Volume 47 Issue 3
Paper
Integrative Reasoning with Topological, Directional and Size Information Based on MBR
Chen Juan, Liu Dayou, Jia Haiyang, and Zhang Changhai
2010, 47(3):  . 
Considering only one aspect of spatial information is inadequate for practical problems, where several aspects are usually involved together. Reasoning with multi-aspect spatial information has therefore become one of the focuses of qualitative spatial reasoning. Current research on integrative reasoning concentrates on reasoning with two aspects of information and lacks work covering three or more aspects. To solve this problem, the extended rectangle relation is proposed to realize integrative representation of, and reasoning with, topological, directional and size information. Considering the high cost of representing and reasoning with single-aspect spatial information and the need for efficiency, the minimal bounding rectangle (MBR) is used to approximate regions, so the spatial relations between regions can be represented by the relations between the projections of their MBRs on each axis. A translation algorithm which converts RCC8, cardinal direction and size relations into extended rectangle relations is given. The basic inverse and composition operations are discussed, and it is pointed out that the composition of extended rectangle relations is based on consistency rather than existence. Based on the definitions of convex and strongly preconvex extended rectangle relations, the consistency of a network consisting of these two sets of relations is proved to be decidable in polynomial time.
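The MBR-projection idea can be illustrated with a generic sketch (not the paper's exact extended-rectangle formalism): the relation between two MBRs is read off from the Allen interval relation between their projections on each axis.

```python
def allen_relation(a, b):
    """Allen interval relation between intervals a=(a1,a2) and b=(b1,b2)."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:  return "before"
    if b2 < a1:  return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a1 == b1 and a2 == b2: return "equals"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"

def mbr_relation(r, s):
    """r and s are MBRs given as ((xmin, xmax), (ymin, ymax)); the pair of
    per-axis Allen relations jointly encodes topology, direction and size."""
    return (allen_relation(r[0], s[0]), allen_relation(r[1], s[1]))
```

For example, an MBR strictly to the left of another with identical vertical extent yields the pair ("before", "equals"), from which a westward direction relation and disjoint topology can be inferred.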
Current Progress and Research Issues in Underwater Sensor Networks
Guo Zhongwen, Luo Hanjiang, Hong Feng, Yang Meng, and Lionel M Ni
2010, 47(3):  377-389. 
With the emphasis on effectively safeguarding national marine rights and interests, the upsurge in marine economic development and the significant progress in wireless sensor networks, underwater sensor networks have become a new hot research area. Underwater sensor networks are deployed to perform collaborative tasks such as oceanographic data collection, pollution monitoring, offshore exploration, disaster prevention, assisted navigation, and tactical surveillance. Underwater sensor nodes use acoustic signals to communicate with each other, so the propagation delay is large due to the slow propagation speed of sound. The underwater acoustic communication channel has limited bandwidth capacity because of significant frequency- and distance-dependent attenuation, and the harsh underwater channel also exhibits high bit error rates. Thus, new algorithms need to be designed for underwater sensor networks. First, an overview of underwater wireless communication technologies, the design of communication hardware, the underwater network architecture and the characteristics of underwater communication channels is given. Then a more detailed description of eight important research topics is provided: acoustic modulation, medium access control, routing protocols, transport protocols, cross-layer design, network information processing, localization techniques and simulation environments. Research organizations working on underwater sensor networks are also introduced. Finally, problems encountered in current research and future research directions in this exciting and emerging area are presented.
Paper
On the Joint Optimization of Coding Cost and Link Cost with Network Coding
Deng Liang, Zhao Jin, and Wang Xin
2010, 47(3):  390-397. 
Network coding is a novel network transmission technology which can achieve the network multicast capacity. In this paper, an overhead optimization problem (OOS) is presented to minimize the total cost of network coding and network links in networks with network coding. This optimization takes into consideration not only the link transmission overhead but also the coding overhead incurred by network coding. Using two different network information flow models, the receiver data flow mixture model and the data element flow model, the general forms of both kinds of overhead are presented separately. As this problem is NP-hard, only approximately optimal solutions can be obtained through heuristic algorithms. In the experiments, the performance of different genetic algorithms is evaluated under the two information flow models and different randomly generated topologies. Because taking the network coding overhead into consideration makes the optimization problem even more complex, the computed approximate solution may be poorer than that obtained by considering the network link cost alone. Simulation results show that only when the overhead coefficient γ>1000 does the joint optimization yield a better solution. This result may be of considerable importance when network coding comes into practice in the future.
Steady State Throughput Modeling of TCP NewReno
Sun Wei, Wen Tao, Feng Ziqin, and Guo Quan
2010, 47(3):  398-406. 
The authors present a simple and accurate stochastic model for the steady-state throughput of the slow-but-steady variant of TCP NewReno. The model describes the relations between TCP NewReno throughput and round trip time, packet loss rate and retransmission timeout value. This work is motivated by recent measurement studies indicating that more than half of TCP streams use TCP NewReno instead of Reno fast recovery with a non-SACK receiver. The proposed model builds upon the approach proposed by Padhye for TCP Reno throughput modeling but extends it by capturing the effect of the fast recovery algorithm and taking into consideration the slow start phase after timeout expiration; the measurement studies suggest that these behaviors are important from a modeling perspective. The proposed model is also formulated using a new loss model instead of the one used by Padhye, which better represents the loss scenarios encountered by TCP on the Internet. Validation with the NS2 simulator shows that using Padhye's model to estimate TCP NewReno throughput may introduce significant error, while the proposed model is able to accurately predict the steady-state throughput of TCP NewReno over a wide range of network conditions.
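For reference, the well-known Padhye et al. Reno throughput formula that the paper takes as its baseline can be sketched as follows (the paper's NewReno model refines this; the function name and defaults here are illustrative):

```python
import math

def padhye_reno_throughput(p, rtt, t0, b=2, mss=1.0):
    """Approximate steady-state TCP Reno throughput (packets/s when mss=1).
    p: loss probability, rtt: round trip time (s), t0: retransmission
    timeout (s), b: packets acknowledged per ACK."""
    # loss recovered by fast retransmit (triple duplicate ACKs)
    term_fr = rtt * math.sqrt(2.0 * b * p / 3.0)
    # loss recovered by retransmission timeout
    term_to = t0 * min(1.0, 3.0 * math.sqrt(3.0 * b * p / 8.0)) * p * (1.0 + 32.0 * p * p)
    return mss / (term_fr + term_to)
```

As expected, predicted throughput falls as the loss rate, round trip time or timeout value grows.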
Distributed Topology Control Algorithm for Ad Hoc Networks Using Steered Beam Directional Antennas
Wang Dong, Chen Wenbin, Li Xiaohong, Hu Ting, and Zhang Dafang
2010, 47(3):  407-415. 
The topology of Ad Hoc networks using directional antennas is more complex than that with traditional omni-directional antennas. Based on a given steered beam directional antenna and a method for obtaining local neighbor location information, the authors propose a distributed topology control algorithm, SDTC (the steered beam directional antenna based topology control algorithm). The topology is controlled not only by adjusting the transmission powers of nodes but also by changing the direction, beam width and gain of the directional antenna. Each node in the network is responsible for collecting information about its neighbor nodes, choosing the optimum adjacent nodes using a power control scheduling strategy, and then selecting as its transmission power the minimum power that covers all of the optimum adjacent nodes it has chosen. The algorithm preserves the connectivity of the resulting topology, which can also be transformed into one equipped with bi-directional links. At the same time, the resulting network topology reduces energy consumption, decreases traffic interference, and improves network throughput as a result of lower node transmission power and lower node degrees. Simulation results show that the proposed algorithm significantly improves network performance. Steered beam directional antenna based topology control for mobile Ad Hoc networks remains to be studied in the future.
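The power-selection step described above can be sketched generically: a node adopts the smallest transmission power that still reaches its farthest chosen neighbor. This is a toy free-space-style propagation model with illustrative names and parameters, not SDTC itself.

```python
def min_cover_power(neighbor_distances, path_loss_exp=2.0, c=1.0):
    """Smallest power covering every chosen neighbor, i.e. the power needed
    to reach the farthest one under a simple p = c * d**alpha model."""
    return c * max(neighbor_distances) ** path_loss_exp
```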
Similar Sub-Sequences Search over Multi-Dimensional Time Series Data
Cheng Wencong, Zou Peng, and Jia Yan
2010, 47(3):  416-425. 
When the Euclidean distance between time series changes greatly as one of the compared series shifts slightly along the time axis, the dynamic time warping distance is suggested as a more robust distance than Euclidean distance. Dynamic time warping distance is widely used as the similarity measure in the domain of similar sub-sequence search over time series data. Similarity search in a single dimension may not return enough similar sub-sequences to support further analysis and decision making. In this paper the problem is extended to the multi-dimensional scenario by introducing a data cube model, which is well studied in the multi-dimensional data analysis domain. Based on the data cube model, the authors define similar sub-sequences in multi-dimensional time series data and propose a naïve algorithm to obtain more useful search results with extra valuable information. However, the efficiency of the naïve algorithm is very poor, which limits its application. The efficiency of the naïve algorithm is therefore improved by studying the correlation of the cells among neighboring levels in the data cube while preserving the accuracy of the search results. Extensive experiments on a real network security dataset demonstrate the effectiveness of the proposed methods.
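The dynamic time warping distance used as the similarity measure above can be computed with the standard O(nm) dynamic program; a minimal single-dimension sketch:

```python
def dtw_distance(x, y):
    """Dynamic time warping distance between two 1-D sequences.
    Unlike Euclidean distance, it tolerates small shifts along the time axis."""
    n, m = len(x), len(y)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # extend the cheapest of the three possible warping-path moves
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

A sequence and a slightly time-stretched copy of it have DTW distance zero, whereas their point-wise Euclidean distance would be non-zero.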
An Ant Colony Algorithm Based on Multiple-Grain Representation for the Traveling Salesman Problems
Ji Junzhong, Huang Zhen, Liu Chunnian, and Dai Qiguo
2010, 47(3):  434-444. 
Ant colony optimization (ACO) is a population-based metaheuristic technique for effectively solving combinatorial optimization problems. However, how to improve the performance of ACO algorithms is still an active research topic. Though there are many algorithms that effectively solve traveling salesman problems (TSPs), there is an application bottleneck in that the ACO algorithm costs too much time to obtain an optimal solution. To improve the time performance of ACO in solving large-scale TSPs, a fast algorithm is presented in this paper. Firstly, a novel multiple-grain representation model of TSPs is proposed. Based on the model, a new algorithm for TSPs is presented, which mainly contains six phases: a granularity partition algorithm based on density clustering, an ACO algorithm on the coarse grain, a connection algorithm between two granularities, an ACO algorithm within the same granularity, a fusion algorithm among granularities, and a subsection optimization algorithm regardless of granularities. The analysis of computational complexity and the experimental results on a large number of TSP instances demonstrate that the proposed algorithm can greatly improve the speed of convergence in contrast to the classical ACO algorithm, and is highly competitive in time performance compared with some of the latest elitist ACO algorithms.
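The ACO phases above all rest on the classical state-transition rule, in which an ant moves from city i to city j with probability proportional to tau[i][j]^alpha * (1/d[i][j])^beta. A generic sketch of that rule (not the paper's multiple-grain variant; names are illustrative):

```python
import random

def choose_next_city(current, unvisited, pheromone, dist,
                     alpha=1.0, beta=2.0, rng=random):
    """Roulette-wheel selection of the next city, biased by pheromone
    intensity and inverse distance (classical ACO transition rule)."""
    weights = [pheromone[current][j] ** alpha * (1.0 / dist[current][j]) ** beta
               for j in unvisited]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]  # guard against floating-point round-off
```

With equal pheromone, nearby cities are chosen far more often than distant ones, which is what lets the colony converge on short tours.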
Learning Non-Deterministic Action Models for Web Services from WSBPEL Programs
Rao Dongning, Jiang Zhihua, Jiang Yunfei, and Wu Kangheng
2010, 47(3):  445-454. 
AI planning is a promising method for Web service composition (WSC). But it encounters a knowledge engineering bottleneck, since under most circumstances there are no predefined action models for Web services (WS) in a planning language. For example, using planning to find WSC solutions requires action models for WS, which are difficult for engineers to write. Considering that most existing WSC solutions are manually written in the business process language for Web services (WSBPEL), it is preferable to learn action models from existing WSC solutions. Meanwhile, WS may have multiple outcomes. Because of this non-deterministic nature of WS and the hidden semantic requirements for WS in existing WSC solutions, the corresponding action model for a WS should be a non-deterministic action with conditional effects, and it should embody the semantic requirements too. So firstly WSBPEL programs are translated into a labeled transition system (LTS). Then the scope of learning action models is extended to non-deterministic planning (NDP) with conditional effects, and non-deterministic action models are learned from the LTS. A system called ARMS-WS is implemented, and experimental results show that non-deterministic action models can be learned from WSBPEL programs. This work helps avoid writing planning descriptions for existing Web services, and makes planning tools more applicable to real-world WSC problems.
A Hybrid Fault Diagnosis Model in Distributed Application Management
Li Yunchun and Qin Xianlong
2010, 47(3):  455-462. 
Fault management is a key research topic in the field of distributed application management. Due to the dynamics and complexity of distributed applications, traditional methods cannot meet the needs of fault management. Autonomic computing has become a way to solve this problem by realizing system self-management. Basically, self-management is divided into two procedures: self-awareness and self-adapting. This paper mainly deals with actualizing system self-awareness based on fault diagnosis. Firstly, a hybrid fault diagnosis model is proposed after analyzing fault propagation in distributed application management. According to this model, the fault diagnosis process is divided into two steps: application service fault diagnosis and network service fault diagnosis. Secondly, because the observation of network faults is uncertain and inaccurate, the fault diagnosis model is mapped to a Bayesian network to carry out uncertainty reasoning. Finally, due to the complexity of exact inference algorithms in Bayesian networks, some improvements are made to the original inference algorithm for diagnosing the root cause based on a multi-layer Bayesian network corresponding to the multi-layer FPM model. As the experiments show, the improved algorithm accelerates the inference procedure.
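The uncertainty-reasoning step can be illustrated with a brute-force posterior over candidate root causes in a toy two-layer cause-to-symptom network. The paper's multi-layer FPM-based network and improved inference algorithm are more elaborate; all names below are illustrative.

```python
def posterior_root_cause(priors, cond, symptoms):
    """Posterior over a single root cause given observed symptoms.
    priors[c] = P(cause c); cond[c][s] = P(symptom s | cause c);
    symptoms[s] = True/False for each observed symptom."""
    scores = {}
    for cause, prior in priors.items():
        likelihood = prior
        for s, observed in symptoms.items():
            p_s = cond[cause][s]
            likelihood *= p_s if observed else (1.0 - p_s)
        scores[cause] = likelihood
    z = sum(scores.values())  # normalize to a proper distribution
    return {c: v / z for c, v in scores.items()}
```

Exact enumeration like this is exponential in general networks, which is exactly why the paper improves the inference algorithm for its multi-layer model.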
Mining Top-K Significant Itemsets in Landmark Windows over Data Streams
Yang Bei and Huang Houkuan
2010, 47(3):  463-473. 
Frequent itemset mining over data streams has recently become a hot topic in data mining and knowledge discovery, and has been applied in different areas. However, setting a minimum support threshold requires domain knowledge, and an unreasonably set threshold brings many difficulties or much burden to users. It is therefore interesting for users to find the top-K significant itemsets over data streams. A dynamic incremental approximate algorithm, TOPSIL-Miner, is presented to mine top-K significant itemsets in landmark windows. A new data structure, TOPSIL-Tree, is designed to store the potentially significant itemsets, and further data structures (the maximum support list, ordered item list, TOPSET and minimum support list) are devised to maintain information about the mining results. Moreover, three optimization strategies are exploited to reduce the time and space cost of the algorithm: 1) pruning trivial nodes in the current data stream; 2) raising the mining support threshold heuristically and adaptively during the mining process; and 3) raising the pruning threshold dynamically. The accuracy of the algorithm is also analyzed. Extensive experiments are performed to evaluate the effectiveness, efficiency and precision of the algorithm.
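A naive, non-streaming version of the top-K query that TOPSIL-Miner answers can be written directly. Unlike TOPSIL-Miner it recounts everything and has no bounded-memory pruning, but it shows what the incremental algorithm approximates (names are illustrative):

```python
from collections import Counter
from itertools import combinations

def topk_itemsets(transactions, k, max_len=2):
    """Exact top-K itemsets (up to max_len items) over a landmark window,
    by brute-force counting of every sub-itemset of every transaction."""
    counts = Counter()
    for txn in transactions:
        for r in range(1, max_len + 1):
            for combo in combinations(sorted(txn), r):
                counts[combo] += 1
    return [itemset for itemset, _ in counts.most_common(k)]
```

The brute-force counter is exponential in transaction width and stores every candidate, which is precisely the cost the TOPSIL-Tree and its pruning strategies avoid.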
Distance-Based Outlier Detection on Uncertain Data
Yu Hao, Wang Bin, Xiao Gang, and Yang Xiaochun
2010, 47(3):  474-484. 
Outlier detection is a valuable technique in many applications, such as network intrusion detection and event detection in wireless sensor networks (WSN). The technique has been well studied on deterministic databases, but it is a new task on emerging uncertain databases. Using the uncertain data model, many real applications, such as wireless sensor networks, data integration, and data mining, can be better described, and their feasibility can be further enhanced. In this paper, a new definition of outlier on uncertain data is given. Based on it, some efficient filtering approaches for outlier detection are proposed, including a basic filtering approach, called b-RFA, and an improved filtering approach, called o-RFA. Moreover, a probability approach, called DPA, is proposed to efficiently detect outliers on uncertain databases. The approach b-RFA utilizes a property of non-outliers to reduce the number of detections, o-RFA improves on b-RFA by mining and using the data distribution, and DPA exploits a recursion rule in the probability computation to greatly improve the efficiency of single data detection. Finally, the experimental results show that the proposed approaches can efficiently prune the candidates, reduce the corresponding search space, and improve the performance of query processing on uncertain data.
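On deterministic data, the classical distance-based notion of outlier that the paper generalizes to uncertain data can be sketched as: a point is an outlier if fewer than k other points lie within distance r of it. This is the baseline definition, not the paper's b-RFA/o-RFA/DPA algorithms.

```python
def distance_outliers(points, r, k):
    """Naive O(n^2) distance-based outlier detection on deterministic points:
    report every point with fewer than k neighbors within radius r."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    outliers = []
    for i, p in enumerate(points):
        neighbors = sum(1 for j, q in enumerate(points)
                        if j != i and dist(p, q) <= r)
        if neighbors < k:
            outliers.append(p)
    return outliers
```

The filtering approaches in the paper exist exactly to avoid this all-pairs scan by pruning candidates early.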
An Online Adaptive Network Anomaly Detection System-Model and Algorithm
Wei Xiaotao, Huang Houkuan, and Tian Shengfeng
2010, 47(3):  485-492. 
The extensive usage of the Internet and computer networks makes security a critical issue, and there is an urgent need for network intrusion detection systems that can actively defend networks against growing security threats. In this paper, a lightweight online adaptive network anomaly detection system model is presented, together with the related influence-function-based anomaly detection algorithm. The system can process the network traffic data stream in real time, gradually build up its local normal pattern base and intrusion pattern base with a little supervision from the administrator, and dynamically update the contents of its knowledge base as network application patterns change. In checking mode, the system can detect not only learned intrusion patterns but also unseen ones. The model has a relatively simple architecture, which makes it efficient for processing online network traffic data, and the detection algorithm takes little computational time and memory space. The system is tested on the DARPA KDD 99 intrusion detection datasets. It scans 10% of the training dataset and the testing dataset only once, and within 40 seconds it can finish the whole learning and checking task. The experimental results show that the presented model achieves a detection rate of 91.32% with a false positive rate of only 0.43%. It is also capable of detecting new types of intrusions.
VFRS: A Novel Approach for Intrusion Tolerance in Virtual Computing Environment
Zhao Feng, Jin Hai, Jin Li, and Yuan Pingpeng
2010, 47(3):  493-499. 
With the emergence of multi-core processors, virtualization technology has attracted attention and developed rapidly in recent years, and the virtual computing environment based on virtual machines has become a hot topic in the field of network computing. The virtual computing environment is open, complex and dynamic, which brings new challenges to system security, especially to intrusion tolerance. In this paper, the VFRS method is proposed to protect sensitive data from intrusion in a virtual computing environment. Firstly, a probability computing model is constructed to represent system call sequences, and the SCSFA algorithm, based on the analysis of system call sequences in virtual computing systems, is designed to predict intrusion attempts and to determine what needs to be protected. Secondly, the protected sensitive data are divided into a number of data slices and, to tolerate random errors, each slice is redundantly backed up based on Byzantine fault tolerance. Then, the redundant data are distributed to different virtual machines. The VFRS method can predict anomalous intrusions and tolerate complicated errors in the virtual computing environment. The experimental results show that VFRS is effective and performs well compared with related work. Some key issues of the VFRS method are also discussed and analyzed in detail.
Reactive Worms Propagation Modeling and Analysis in Peer-to-Peer Networks
Feng Chaosheng, Qin Zhiguang, Laurence Cuthbert, and Laurissa Tokarchuk
2010, 47(3):  500-507. 
Worms pose a serious threat to the Internet, and an even more serious threat to P2P networks built on it: the key properties of P2P networks, which bring convenience to users, also result in vulnerability to attacks from P2P worms. Considering that models for the other two of the three classes of P2P worms, namely passive worms and active worms, have already been proposed, and that reactive worms seriously threaten P2P networks, models of the propagation and immunization of P2P reactive worms are proposed. Furthermore, the condition for a worm-free steady state is deduced from the propagation model of reactive worms. To validate the proposed epidemic model and the worm-free steady-state condition, large-scale simulation experiments are carried out with Matlab. All the simulations validate the model and the necessary conditions. In addition, the simulations show that the prevalent index determines whether worms become prevalent and, if so, the degree of worm spread. The prevalent index is therefore very helpful for finding worm-throttling strategies. Analysis of the P2P-related parameters, which determine the value of the prevalent index, shows that decreasing the download rate is the most effective method of throttling worm spread before the corresponding patches are released.
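The threshold behavior described above can be illustrated with a generic SIR-style epidemic simulation (a textbook sketch, not the paper's exact reactive-worm model; parameter names are illustrative). When the basic reproduction number R0 = beta*s0/gamma is below 1, the infected fraction dies out; above 1, the worm spreads.

```python
def simulate_epidemic(beta, gamma, s0, i0, steps, dt=0.01):
    """Forward-Euler simulation of a simple SIR model.
    beta: infection rate, gamma: immunization/recovery rate,
    s0/i0: initial susceptible and infected fractions.
    Returns the infected fraction after `steps` time steps."""
    s, i = s0, i0
    for _ in range(steps):
        new_inf = beta * s * i * dt   # susceptibles becoming infected
        new_rec = gamma * i * dt      # infected hosts being immunized
        s -= new_inf
        i += new_inf - new_rec
    return i
```

Lowering the effective contact rate (e.g. the download rate, in the paper's analysis) lowers beta and hence the epidemic threshold quantity, which is the intuition behind the worm-throttling conclusion.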
A Multilevel Security Model Based on Time Limit
Fan Yanfang, Han Zhen, Cao Xianggang, and He Yongzhong
2010, 47(3):  508-514. 
The Bell-LaPadula (BLP) model is a classic model that is broadly used in the military security domain. Existing research results have not taken into account the confidentiality period of objects. In fact, the secrets preserved in objects have a confidentiality period within the objects' lifecycle; once the confidentiality period is exceeded, the objects should be downgraded or declassified. In this paper, a multilevel security model based on a time limit is proposed. By adding time parameters and checking functions to the BLP model, objects can be downgraded or declassified when they exceed their confidentiality period. This solves the current problem that the security level of an object is set once and then kept unchanged forever. The model restricts the usage ranges of trusted subjects, so the possible damage ranges induced by trusted subjects can be reduced. In the meantime, subjects with a higher security level can write information to objects with a lower security level, by setting the confidentiality period of the objects flexibly, without leaking high-level secrets. The model improves the flexibility of the BLP model and expands its application in classified electronic file management. Through noninterference theory, it is proved that the model meets the multilevel security policy.
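The core idea of adding a time check to BLP can be sketched in a few lines. This is a toy model with hypothetical names: the paper adds formal time parameters and check functions to the full BLP rule set, whereas this sketch only time-adjusts the level used by the "no read up" rule, declassifying expired objects to the lowest level.

```python
def effective_level(obj_level, classified_at, period, now):
    """An object's effective security level: its assigned level while the
    confidentiality period holds, the lowest level (0) once it expires."""
    return obj_level if now - classified_at < period else 0

def can_read(subject_level, obj_level, classified_at, period, now):
    """Simple-security check ("no read up") against the time-adjusted level."""
    return subject_level >= effective_level(obj_level, classified_at, period, now)
```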
A Subjective Trust Management Model Based on Certainty-Factor for MANETs
Luo Junhai and Fan Mingyu
2010, 47(3):  515-523. 
In mobile Ad-hoc networks (MANETs), due to their anonymous and self-organizing nature, mobile nodes have to manage the risks involved in transactions without prior knowledge of each other's reputation. The sample space of evidence may not be complete or reliable because of changes in network topology or the occurrence of wireless collisions, so existing trust evaluation models cannot be applied directly. To solve the problem of trust evaluation and establish trust relationships between nodes, CFSTrust, a subjective trust management model, is proposed to quantify and evaluate the trustworthiness of nodes; its mathematical description and implementation are also given. Fuzzy similarity and certainty factors are used to model the issues of trust management. The evaluation of subjective trust is discussed, the derivation rules of recommendation trust relationships in MANETs are presented, and a subjective trust management model is provided. Theoretical analyses and experimental results show that the model is more effective than the evidence-evaluation-based trust model (TrustNet). The formal model provides a new and valuable approach for MANET environments.
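The certainty-factor machinery such models build on follows the classical MYCIN combination rule for merging two pieces of evidence about the same hypothesis; the exact formulas used by CFSTrust may differ.

```python
def combine_cf(cf1, cf2):
    """MYCIN-style certainty-factor combination; inputs and output in [-1, 1].
    Two confirming (or disconfirming) factors reinforce each other, while a
    mixed pair partially cancels out."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))
```

For instance, two positive trust observations of 0.6 and 0.5 combine to 0.8, stronger than either alone, which is the reinforcement behavior a recommendation-trust chain needs.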
An Optimization of Broadcast on Godson-T Many-Core System Architecture
Bao Ergude, Li Weisheng, Fan Dongrui, Yang Yang, and Ma Xiaoyu
2010, 47(3):  524-531. 
Godson-T is a large-scale many-core system architecture, to be implemented in ultra-deep submicron MOS technology, under development by the Group of Advanced Microsystems in the Key Laboratory of Computer System and Architecture of the Institute of Computing Technology, Chinese Academy of Sciences. The single-port design of the Godson-T on-chip memory saves chip area but limits the efficiency of data sharing. Broadcast is a basic parallel algorithm used to accelerate the data sharing process, but implementing the traditional algorithm on Godson-T requires a large amount of synchronization and mutual exclusion overhead and therefore does not perform well. Based on the Godson-T system architecture, the authors optimize the important parallel algorithm Broadcast and enhance the efficiency of concurrent reads. Three techniques are proposed for the optimization: eliminating bulk synchronization among threads, establishing a mapping table between source and destination addresses, and rearranging assembly instructions in the Broadcast kernel. The first reduces the expense of synchronizing a large number of threads, the second provides a quicker method for destination address search, and the last makes full use of the advantages of the Godson-T architecture. The optimized Broadcast algorithm performs well on the Godson-T system architecture; in particular, when the core number is 32, the speedup of the algorithm reaches 5.8.
Performance Analysis of the 2-D Networks-On-Chip for Local Uniform Random Communication Pattern
Wang Wei, Qiao Lin, Yang Guangwen, and Tang Zhizhong
2010, 47(3):  532-540. 
Continuing and deepening the performance analysis of 2-D networks-on-chip based on the global uniform random communication pattern, the model of the local uniform random communication pattern, as well as that of the global one, is described, and the relationship between the two models is analyzed. Then, based on the interconnection networks-on-chip router and the communication protocol proposed by the authors, and using the link number to indicate the cost, the regular pattern of how the performance of different structures changes with the probability of local communication is pointed out. Finally, the different structures are classified simply according to their relative performance. The results show that the global uniform random communication pattern is just a special case of the local uniform random communication pattern, and that the performance of each kind of structure improves as the probability of local communication increases. Correspondingly, the quadrilateral and triangular mesh networks and their variants are more suitable for a small probability of local communication or for communication-intensive applications, while the hexagonal mesh, the multi-ring and their variants may achieve better overall performance when the probability of local communication is larger or the amount of traffic is smaller.
An Extended π-Calculus and Its Transactional Bisimulation
Yuan Min, Huang Zhiqiu, Cao Zining, and Xiao Fangxiong
2010, 47(3):  541-548. 
As Web service transactions have been firmly established and widely adopted, it is important to adopt a suitable formal method to specify transaction processing in Web services. However, current research has focused on the modeling of transactions and the validation of standards, while the study of Web service transactional properties is still lacking. An existing formal method of transaction processing appends extra operators, making the syntax and reduction rules so complex that further analysis of transactional properties is hard. Aimed at this problem, an extended π-calculus is proposed, without introducing any action operator, based on the connection between process interactions and the transmission of messages relative to the transaction scope, and membrane activities are defined. This biological membrane structure naturally characterizes exception handling and compensation transactions across multiple scopes for Web services. The syntax, structural congruence and reduction rules of the extended π-calculus are given successively. According to transactional dependency, a new weak transactional open bisimulation relation is also presented to characterize the equivalence of visible transactional actions, and its equivalence analysis is explored. These results help lay a substantial foundation for further analysis of transactional properties of Web services.
Survey of Shape from Image
Shu Bo, Qiu Xianjie, and Wang Zhaoqi
2010, 47(3):  549-560. 
3D modeling is an important and interesting problem in both the computer vision and computer graphics fields. Recently, image based modeling has received more research focus because it is low cost, easy to handle, and can generate models with very high precision. The technology is widely used in digital cultural heritage, movies, games, video surveillance, etc., and is highly valuable in both research and application. Image based modeling focuses on reconstructing 3D models of objects or scenes directly from single images, image sequences or videos; the whole modeling procedure can be fully automated or facilitated by human interaction. The key problem of image based modeling is shape from image, which focuses on reconstructing 3D geometry information from images. However, no recent review of this topic is available, which has hindered the development of image based modeling research. This paper therefore presents a review, analysis and discussion of shape from image. Different methods are classified according to the visual cues used in modeling, from a computer vision perspective, and for each category the basic techniques and related works are introduced. After analyzing and comparing these methods individually, a conclusion is drawn covering the characteristics, problems and future of shape from image.