ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 March 2007, Volume 44 Issue 3
Paper
A Sensor Localization Algorithm in Wireless Sensor Networks Based on Nonmetric Multidimensional Scaling
Xiao Ling, Li Renfa, and Luo Juan
2007, 44(3).
Multidimensional scaling (MDS) is an efficient data analysis technique, and applying it to sensor localization is a novel idea. An NMDS-RSSI (nonmetric MDS and received signal strength indication) localization algorithm is presented, which runs nonmetric MDS directly on the RSSI values between nodes rather than on distances. It thus avoids the step, required by other RSSI-based approaches, of transforming RSSI into distance, and reduces the errors caused by that transformation. Extensive simulations and a representative set of real experiments show that, by using nonmetric MDS, the approach provides robust localization even in the presence of random RSS fluctuations due to multi-path fading and shadowing.
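The core idea of MDS-based localization can be sketched in a few lines. The function below implements classical (metric) MDS on a dissimilarity matrix; the paper's contribution is to run the *nonmetric* variant on raw RSSI instead of distances, so this toy, which feeds in true Euclidean distances, is only a minimal illustration of the embedding step, not the NMDS-RSSI algorithm itself.

```python
import numpy as np

def classical_mds(d, dim=2):
    """Embed n points in `dim` dimensions from an n x n dissimilarity matrix."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)               # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:dim]           # keep the top `dim` components
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Toy example: 4 sensor nodes on a unit square. Here the dissimilarities are
# exact distances; NMDS-RSSI would use RSSI-derived dissimilarities instead.
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
xy = classical_mds(d)  # recovered coordinates, up to rotation/reflection
```

The recovered coordinates match the originals up to a rigid transform, which is why MDS-style localization still needs a few anchor nodes to fix the absolute frame.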
Designing IBGP Networks Based on Traffic Sensitivity: Models and Analysis
Zhao Feng, Lu Xicheng, Zhu Peidong, and Liu Yaping
2007, 44(3).
This paper focuses on the design of robust IBGP route reflection networks, which are very important to the reliability and stability of the Internet. A new approach is proposed to calculate the failure probability of IBGP sessions based on the probability distribution of IGP routing recovery time. To measure the robustness of IBGP, a new metric, TS (traffic sensitivity), is presented. Based on this metric, the optimization problem of finding the most robust IBGP route reflection topologies is investigated, in which a cluster is allowed to have one or more redundant route reflectors and the maximum number of IBGP sessions a router can have is limited. The relationship between route reflector redundancy and robustness is discussed, and a lower bound for this optimization problem is given. For the special case in which there is one redundant route reflector within each cluster, solvability conditions are given, and it is shown that the problem in general is NP-hard.
At-Speed Current Test for Testing AT89C51 Microprocessors
Xun Qinglai, Kuang Jishun, and Min Yinghua
2007, 44(3).
At-speed current testing is a novel method for testing digital circuits. This paper presents an experimental study of applying at-speed current testing to the AT89C51 microprocessor. To make the test process feasible, instruction sequences are carefully selected so that they can be executed repeatedly by the microprocessor, and the average current consumed by the AT89C51 is then measured with a simple current meter. Test generation for the instruction sequences is presented. Experimental results show that instruction-level at-speed current testing is feasible for and applicable to AT89C51 microprocessor testing.
A Parallel Computing Algorithm and Its Application in New Generation of Numerical Weather Prediction System (GRAPES)
Wu Xiangjun, Jin Zhiyan, Chen Dehui , Song Junqiang, and Yang Xuesheng
2007, 44(3).
GRAPES (global and regional assimilation and prediction system) is China's new-generation numerical weather prediction (NWP) system. For such an NWP system, the design of the software architecture is an important issue. In this paper a parallel algorithm for GRAPES is introduced, which considers not only computing efficiency but also portability and sustainable development. The emphasis is on a special parallel method for the polar areas, where the grid points of the global version of GRAPES converge. Experiments are carried out on the IBM Cluster 1600 at the China Meteorological Administration (CMA). The results show that the parallel computing algorithm is correct, stable, and efficient enough for operational deployment of GRAPES in the near future.
Texture Recognition Using the Wold Model and Support Vector Machines
Li Jie, Zhu Weile, Wang Lei, and Yang Haomiao
2007, 44(3).
A new method, based on the Wold texture model and support vector machines (SVMs), is proposed for texture recognition, to alleviate the difficulty of characterizing textures under rotation and scale changes. First, a Fourier transform and adaptive power spectrum decomposition are performed; the sector energy and ring energy of the spectrum are extracted, and their means and standard deviations are calculated as texture features. Then, a texture image is rotated to place its dominant direction at 0° according to the spectral energy distribution, and co-occurrence-matrix-based features and wavelet statistical features of the rotated image are calculated as basic texture features. Texture recognition experiments conducted on two different texture databases, each containing 25 kinds of monochromatic natural textures, show that the proposed method achieves high performance.
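Ring and sector energies of a power spectrum are a standard way to summarize a texture's frequency and orientation content. The sketch below computes them with NumPy; the bin counts and normalization are illustrative choices, not the paper's exact feature definition.

```python
import numpy as np

def ring_sector_energy(img, n_rings=4, n_sectors=8):
    """Partition the centered power spectrum into radial rings and angular
    sectors and sum the spectral energy in each bin."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    cy, cx = h // 2, w // 2
    r = np.hypot(y - cy, x - cx)
    theta = np.mod(np.arctan2(y - cy, x - cx), np.pi)  # spectrum is symmetric, fold mod pi
    r_bin = np.minimum((r / (r.max() + 1e-9) * n_rings).astype(int), n_rings - 1)
    s_bin = np.minimum((theta / np.pi * n_sectors).astype(int), n_sectors - 1)
    rings = np.bincount(r_bin.ravel(), weights=power.ravel(), minlength=n_rings)
    sectors = np.bincount(s_bin.ravel(), weights=power.ravel(), minlength=n_sectors)
    return rings, sectors

rng = np.random.default_rng(0)
tex = rng.random((64, 64))            # stand-in for a texture patch
rings, sectors = ring_sector_energy(tex)
```

Because every spectral sample falls into exactly one ring and one sector, the two feature vectors sum to the same total energy, a handy sanity check.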
A Real-Time Fault-Tolerant Scheduling Algorithm for Distributed Systems Based on Deferred Active Backup-Copy
Luo Wei, Yang Fumin, Pang Liping, and Li Jun
2007, 44(3).
The primary/backup copy scheme plays a vital role in real-time fault-tolerant scheduling on distributed systems. However, traditional active backup copies must be executed completely on the backup processors even in fault-free scenarios, introducing unnecessary redundancy. In this paper, a novel deferred active backup-copy technique is proposed and integrated with a fixed-priority scheduling algorithm to exploit the redundancy of active backup copies. The technique reclaims processor capacity by scheduling active backup copies as late as possible and terminating a backup copy once its corresponding primary copy completes successfully. Moreover, based on this technique, a best-fit heuristic algorithm is designed to minimize the number of processors required. Compared with similar algorithms, this algorithm can further reduce the number of processors needed while guaranteeing the real-time and fault-tolerance properties of distributed systems. Finally, simulation experiments demonstrate the feasibility and effectiveness of the algorithm.
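The best-fit placement idea can be sketched with a simple utilization model. This toy assigns each task's primary and backup copy to the fullest processor that still fits, opening a new one when needed; the timing analysis, deferral, and fixed-priority schedulability test of the actual algorithm are omitted, so treat this purely as an illustration of the best-fit heuristic and the primary/backup separation constraint.

```python
def best_fit_assign(utils, capacity=1.0):
    """Place primary and backup copies by best-fit on processor utilization.
    Primary and backup of a task go to distinct processors, so a single
    processor fault can never lose both copies."""
    procs = []        # remaining capacity of each processor
    placement = []    # (primary_proc, backup_proc) per task
    for u in utils:
        chosen = []
        for _ in range(2):                       # primary, then backup
            fits = [(procs[i], i) for i in range(len(procs))
                    if procs[i] >= u and i not in chosen]
            if fits:
                _, i = min(fits)                 # tightest fit that still works
            else:
                procs.append(capacity)           # open a new processor
                i = len(procs) - 1
            procs[i] -= u
            chosen.append(i)
        placement.append(tuple(chosen))
    return placement, len(procs)

# Three tasks with utilizations 0.5, 0.3, 0.2: both copies of all three
# tasks pack onto two processors.
placement, n_procs = best_fit_assign([0.5, 0.3, 0.2])
```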
An Approach to Immunity-Based Performance Monitoring and Evaluation for Computing Systems
Xu Jian, Zhang Kun, Liu Fengyu, and Xu Manwu
2007, 44(3).
Simulating biological immune mechanisms to implement performance monitoring and evaluation of computing systems is a new research approach for distributed high-performance computing environments. First, to simulate biological immune mechanisms, the characteristics of the immune process and of computing system rejuvenation are analyzed and compared. Second, to monitor and diagnose performance degradation, logical and mathematical models of system rejuvenation are set up, which use an extension of the standard /proc performance interface offered by Linux systems to collect resource information from both local and remote hosts, and then apply the principle of self-nonself discrimination inspired by immunology to diagnose system performance. On the basis of both models, an application built around an audio-video resource transaction processing system is studied, a two-stage hyper-exponential model is proposed to evaluate system performance, and the overhead imposed by the performance monitoring agent is evaluated. The results of the experiments and case study indicate that this method is effective and feasible for monitoring and evaluating the performance of a distributed computing system.
A Complex Scripts Processing Model Based on Predication Rules
Jia Yanmin, Wu Jian, and Husela
2007, 44(3).
In computer display and printing, complex scripts exhibit very sophisticated language features. A complex scripts processing model based on predication rules is put forward, in which the glyph layout features of complex scripts are formalized as predication rules. Following the steps of complex scripts processing, a software system framework implementing this model is designed. By separating the language features of complex scripts from the programming control logic, the system's flexibility is improved, and it becomes convenient to add support for new complex scripts. The development of office suites for the Mongolian, Tibetan, and Uighur languages has proven this model useful and effective.
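A flavor of rule-driven glyph selection can be given in a few lines. The predicates and form names below are hypothetical stand-ins for the paper's predication rules; real shaping for Mongolian or Uighur involves many more contextual rules, but the separation of rules from control logic is the same.

```python
def glyph_form(prev_joins, next_joins):
    """Pick a positional glyph form from two context predicates: whether the
    preceding and following characters join to this one (as in cursive
    scripts such as Mongolian and Uighur)."""
    if prev_joins and next_joins:
        return "medial"
    if next_joins:
        return "initial"
    if prev_joins:
        return "final"
    return "isolated"

# A three-letter joining run shapes as initial / medial / final.
forms = [glyph_form(False, True), glyph_form(True, True), glyph_form(True, False)]
```

Because the rules are data-like predicates rather than hard-coded branches scattered through rendering code, adding a new script means adding rules, not rewriting the engine, which is the model's main point.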
An Effective Low-Power Scan Architecture—PowerCut
Wang Wei, Han Yinhe, Hu Yu, Li Xiaowei, and Zhang Yousheng
2007, 44(3).
Scan testing is the prevalent design-for-testability (DFT) technique in very large scale integrated circuit test. However, scan architectures in digital circuits cause considerable power consumption: when scan vectors are loaded into a scan chain, the scan ripple propagates to the combinational logic, and redundant switching occurs in the combinational gates during the entire vector-shifting period. Hence, low-power design has become a challenge for scan test. In this paper, a low-power scan architecture called PowerCut is proposed to minimize power consumption during scan test, based on scan chain modification techniques. Blocking logic built from transmission gates is inserted into the scan chain to reduce dynamic power during shift cycles. At the same time, a control unit based on a minimum leakage vector is inserted, which puts the circuit into a low-leakage state during shift cycles, so leakage power is also decreased. Experimental results indicate that this architecture can effectively reduce power during scan test while incurring little area or delay overhead compared with other existing low-cost methods.
Construction of a Cooperative-Server-Group-Based Volunteer Computing Environment
Xu Shengchao, Jin Hai, Zhang Qin, and Shi Ke
2007, 44(3).
A cooperative-server-group-based volunteer computing environment called P2HP is presented in this paper. Each node in P2HP is assigned one of the roles monitor, dispatcher, worker, or DataPool, and the nodes thus form a scalable layered topology. P2HP is open, easy to use, scalable, fault tolerant, and platform independent. A convenient API (application programming interface) set is also provided to support the development of parallel applications running on P2HP. Performance analysis results show that P2HP is a feasible approach to parallel processing.
Link Recommendation in Web Index Page Based on Multi-Instance Learning Techniques
Xue Xiaobing, Han Jieling, Jiang Yuan, and Zhou Zhihua
2007, 44(3).
In a Web index page, recommending links of interest helps users access Web resources efficiently. However, users won't spend much time labeling samples, and the data they provide may only indicate whether a Web index page contains content of interest, giving no information about which links actually meet their interests. The problem of link recommendation in Web index pages is therefore quite difficult, since the training data lacks link labels while prediction of links of interest in a new Web index page is required. This problem is converted into a unique multi-instance learning problem and then solved by the proposed CkNN-ROI algorithm. Experiments show that this algorithm is more effective than existing ones in solving this difficult link recommendation problem.
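The multi-instance framing can be made concrete with a tiny sketch: a bag is a page, an instance is one link's feature vector, and only bag labels are known. The minimal-Hausdorff bag distance and bag-level kNN below are the common backbone of kNN-style multi-instance learners; CkNN-ROI's citation-based refinement and its identification of the responsible link are not reproduced here, and all the data is made up.

```python
def min_hausdorff(bag_a, bag_b):
    """Minimal Hausdorff distance between two bags: the Euclidean distance
    between their closest pair of instances."""
    return min(sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
               for a in bag_a for b in bag_b)

def knn_bag_label(bags, labels, query, k=3):
    """Label a query bag by majority vote over its k nearest training bags."""
    order = sorted(range(len(bags)), key=lambda i: min_hausdorff(bags[i], query))
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

# Toy data: two "interesting" pages and two "uninteresting" ones, each with
# two link feature vectors; bag labels only, no per-link labels.
pos = [[(0.9, 0.1), (0.2, 0.2)], [(0.8, 0.2), (0.1, 0.3)]]
neg = [[(0.1, 0.9), (0.2, 0.8)], [(0.0, 1.0), (0.3, 0.7)]]
bags, labels = pos + neg, [1, 1, 0, 0]
pred = knn_bag_label(bags, labels, [(0.85, 0.15)], k=3)
```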
VLSI Implementation of an AES Algorithm Resistant to Differential Power Analysis Attack
Zhao Jia, Zeng Xiaoyang, Han Jun, Wang Jing, and Chen Jun
2007, 44(3).
Proposed in this paper is a low-cost VLSI implementation of an AES algorithm resistant to DPA (differential power analysis) attacks, using masking. To minimize the impact of the modification on the hardware while making it resistant to DPA, methods such as altering the calculation order, module reuse, and composite field computation are employed to reduce chip area and maintain speed. Using the HHNEC 0.25μm CMOS technology, the area of the design is about 48K equivalent gates and its system frequency is up to 70MHz. The throughput of 128-bit data encryption and decryption is as high as 380Mbps.
Implicit Surfaces Based on BP Neural Networks
Li Daolun, Lu Detang, Kong Xiangyan, and Wu Gang
2007, 44(3).
Neural networks, combined with implicit polynomials, can be employed to represent 3D surfaces, which are described by the zero-set of a neural network. First, an explicit function is constructed from the implicit function. Then the explicit function is approximated by a BP neural network. Finally, the zero-set of the neural network, which is the implicit surface, is extracted from the simulated surface. The method is not sensitive to errors, to the number of constraint points, or to the distance between the boundary points and the interior/exterior points. Experimental results verify the effectiveness of the surface reconstruction.
Static Detection of Deadlocks in OpenMP Fortran Programs
Wang Zhaofei and Huang Chun
2007, 44(3).
Deadlocks related to barriers are among the major factors that cause OpenMP programs to malfunction. Static detection of these hazards can help ensure the correctness of OpenMP programs before they are executed. For convenience of detection, such deadlocks are classified into two categories. Through searching and data-flow analysis, the first and second categories of deadlocks are detected according to the existence rule and the nonuniformity rule, respectively. The traditional control flow graph is extended to represent OpenMP programs. For each detected deadlock, backtracking is used to record the related paths in the control flow graph, and static branch prediction is employed to quantify its severity. Based on these ideas, a tool called C-Checker is implemented to statically detect deadlocks in OpenMP Fortran programs. Experiments show that C-Checker can effectively detect the deadlocks concerned.
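The barrier-nonuniformity hazard itself is easy to illustrate: if two branches of a conditional inside a parallel region reach different numbers of barriers, threads taking different branches wait at barriers that never match up. The checker below is a deliberately naive sketch over token lists, nothing like C-Checker's data-flow analysis on an extended control flow graph, but it shows the rule being checked.

```python
def barrier_mismatch(branch_a, branch_b):
    """Nonuniformity check (sketch): flag a hazard when two branches of a
    conditional inside a parallel region contain different numbers of
    barrier statements."""
    count = lambda stmts: sum(s == "barrier" for s in stmts)
    return count(branch_a) != count(branch_b)

# if (tid == 0) { work; barrier } else { work }   -> threads deadlock
hazard = barrier_mismatch(["work", "barrier"], ["work"])
# Both branches hit exactly one barrier -> no nonuniformity hazard
safe = not barrier_mismatch(["work", "barrier"], ["barrier", "work"])
```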
A Fast Bayesian Network Structure Learning Algorithm
Ji Junzhong, Liu Chunnian, and Yan Jing
2007, 44(3).
A Bayesian network (BN) is one of the most important theoretical models for expressing and reasoning about uncertain knowledge, and many BN structure learning algorithms have been proposed. In this paper, a fast algorithm, FI-B&B-MDL, is developed, which considerably speeds up the original I-B&B-MDL algorithm. Unlike I-B&B-MDL, FI-B&B-MDL first uses only order-0 and a small number of order-1 independence tests to obtain an initial structure graph, so that the number of independence tests and database passes is decreased; it then takes the mutual information between nodes as heuristic knowledge to guide the MDL search, so that more branches of the B&B search tree can be cut off and the search process is accelerated. Experimental results show that the new algorithm is effective and efficient on large-scale databases and is faster than the original algorithm.
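The mutual-information heuristic rests on a quantity that is simple to compute from data. The sketch below estimates empirical mutual information between two discrete variables; how FI-B&B-MDL ranks edges with it is not reproduced here.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in bits between two discrete
    variable samples of equal length."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A perfect copy of a fair binary variable shares 1 bit of information with
# it; an (empirically) independent variable shares none.
xs = [0, 0, 1, 1, 0, 1, 0, 1]
ys = xs[:]                         # I(X;Y) = H(X) = 1 bit
zs = [0, 1, 0, 1, 1, 0, 0, 1]      # joint counts factor exactly -> I = 0
mi_xy = mutual_information(xs, ys)
mi_xz = mutual_information(xs, zs)
```

High-MI pairs are the edges worth exploring first, which is exactly the kind of cheap ordering knowledge that lets a branch-and-bound search prune early.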
Automatic Image Annotation Based on Concept Indexing
Lu Jing and Ma Shaoping
2007, 44(3).
Automatic image annotation is an important but highly challenging problem in content-based image retrieval. A new procedure for providing images with semantic keywords is introduced. To overcome the semantic gap, classified images are used to train a multi-class classifier based on support vector machines (SVMs), which maps visual image features into a model space to achieve concept indexing. The model vectors that construct the model space are combinations of the multi-class classifier's outputs and are computed for each individual image. Soft labels are then given to the unannotated images during a propagation procedure in the model space, and each label, serving as a keyword, is associated with a membership confidence estimated by a biased kernel regression algorithm, so that conceptualized annotations of images can be provided to users. An empirical study on the COREL image database shows that the proposed model vectors outperform visual features by 14.0% in F-measure for annotation.
FPGA-Based Real-Time Imaging System for Spaceborne SAR
Guo Meng, Jian Fangjun, Zhang Qin, Xu Bin, Wang Zhensong, and Han Chengde
2007, 44(3).
With the rapid development of SAR (synthetic aperture radar) missions, high data bandwidth is demanded of the satellite downlink, so reducing the data volume is a necessary task for SAR missions. On-board SAR image processing is one effective method for reducing data volume, since SAR image data can be compressed much more easily than SAR raw data. In recent years, FPGA-based real-time imaging for spaceborne SAR has been an active research field. The goal of this work is to design an FPGA-based system that implements real-time spaceborne SAR image processing. The parameters of spaceborne SAR are studied, and then, through analysis of the performance requirements and algorithm specifications, a novel high-performance scalable architecture is proposed that maps the CS (chirp scaling) algorithm onto the hardware system. The prototype system implementation and functional verification are also presented. Experimental results show that with one signal processing unit working at 50MHz, the system can process 512MB of SAR raw data in about 11 seconds. The system offers high performance and low mass, and is an excellent candidate for a real-time on-board SAR image processing system.
Trend Sequences Analysis of Temporal Data and a Subsequence Matching Algorithm
Chen Dangyang, Jia Suling, Wang Huiwen, and Luo Chang
2007, 44(3).
In current trend sequence analysis, a nominal scale is used to measure trend values and edit distance to measure the distance between trend sequences, so the analysis of such trend sequences essentially belongs to the domain of character string analysis. These traditional trend sequences are called character trend sequences (CTSs) in this paper. The largest problem with CTS analysis is that very few indexes are used to depict the trends of sequences that have a very large range of variation, so little of the information contained in the temporal data sequences is preserved in CTSs. To overcome these demerits of traditional trend sequence analysis in temporal data mining, two concepts, the number trend sequence (NTS) and trend sequence unwrapping, are put forward. According to the features of NTSs, the radians corresponding to slopes are used to represent the trends of line segments. The dynamic time warping double restrictions quick searching (DTW-DRQS) algorithm is designed to solve the problem of subsequence matching between NTSs. The algorithm consists of three parts: DTW sequential searching, which is the basic framework of the algorithm; the mechanism of double restrictions, which accelerates the calculation of the DTW distance; and the mechanism of redundancy control, which eliminates redundant subsequences from the result set.
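The DTW core that such matching builds on can be sketched compactly. The version below adds a Sakoe-Chiba-style warping window; note that this window is only one plausible kind of restriction, so the function is an illustration of restricted DTW on numeric trend values, not the DTW-DRQS algorithm itself.

```python
def dtw_banded(a, b, band):
    """DTW distance between two numeric sequences, restricted to warping
    paths with |i - j| <= band (a Sakoe-Chiba band). The band both speeds
    up the computation and rules out pathological alignments."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - band), min(m, i + band) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Two similar slope-radian sequences: the banded DTW distance is just the
# sum of the small pointwise differences along the diagonal alignment.
dist = dtw_banded([0.0, 0.5, 1.0, 0.5], [0.0, 0.4, 1.0, 0.6], band=1)
```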
The LOBA Representation of Speech Acts
Pan Yu, Cao Cungen, and Sui Yuefei
2007, 44(3).
The study of speech acts in multi-agent systems is very interesting and important. Discussed in this paper are the speech acts of pragmatics, from the perspective of building practical reasoning agents. Current research on speech acts in MAS (multi-agent systems) focuses on three aspects: 1) the ontology of speech acts; 2) the mechanism by which an agent deduces reasonable speech acts; and 3) the mechanism by which an agent correctly deals with speech acts performed by other agents. The three aspects are different but closely interconnected. This paper focuses on aspects 2) and 3), and formalizes the model with LOBA (logic of believable agents). Considering the model of agents' cognitive processes, a series of cognitive elements are discussed, including perception, belief, emotion, desire, goal, intention, and commitment, and the dynamic relations among these elements are analyzed by introducing corresponding cognitive actions. On this basis, the paper describes how agents generate and deal with speech acts. LOBA extends the work on the KARO and LORA logics, treats agents' emotions and cognitive actions as modal operators, and interprets cognitive actions that occur only in agents' minds with a three-layer model. By virtue of these measures, LOBA can describe the practical reasoning processes of agents with stronger expressivity than previous logics.
Critical Term for Performances of Push Model and Pull Model in Agent Communication and Implementation of Push Model
Guo Zhongwen, Liu Hui, and Shang Chuanjin
2007, 44(3).
In various situations, mobile agents at different hosts must cooperate with one another by exchanging information and making decisions collectively. Communication efficiency is one of the most important factors affecting the performance of highly dynamic, large-scale mobile agent systems. To improve communication efficiency and provide a reliable location-transparent communication infrastructure for mobile agents, the push model and the pull model are the two basic models used in communication algorithms. A detailed analysis of the two models is given, and a theoretical formula for their performance comparison is derived. The formula indicates that the pull model outperforms the push model only when the information traffic between mobile agents is extremely high; otherwise, the push model outperforms the pull model. In addition, an efficient mailbox-based communication algorithm using the push model, by means of network compartmentalization, is implemented. Besides greatly reducing communication delay, the algorithm alleviates the network load and enables efficient, transparent communication between mobile agents.
A Logical Exception Handling Method in Agent Communication
Bai Yan and Liu Dayou
2007, 44(3).
In an open multi-agent system in a dynamic network environment, different mobile agents need to communicate about problems in a certain domain; logical exceptions must be avoided, and the terms of the domain must be kept consistent. A layered ontology services communication model named LOSCM is proposed in this paper. LOSCM has two primary advantages. First, it fully takes into account the factors that affect communication, and the ontology it uses can be represented in an agent's knowledge base. Second, when the provided ontology does not belong to a public data source and no public ontologies are available, LOSCM establishes a layering algorithm based on the concept loss degree and the concept relatedness degree, which is able to avoid logical exceptions, ensure the correct understanding of the concepts communicated, and resolve the consistency problem of concept translation. Experiments show that LOSCM is superior to other methods and can effectively improve the communication consistency of a mobile agent system under certain conditions.
An Agent Organization Structure for Solving DCOP Based on the Partitions of Constraint Graph
He Lijian and Zhang Wei
2007, 44(3).
The distributed constraint optimization problem (DCOP) can model a wide variety of distributed reasoning problems that arise in multiagent systems (MAS), and distributed algorithms for solving DCOP have become one of the most important foundations of MAS. Some previous algorithms that emphasize asynchronous communication, distributed computation, and quality guarantees, such as Adopt, can obtain the optimal solution of a DCOP by negotiation among peer agents. However, there is still room for improvement in the organization structure used for problem solving. A novel multi-agent organization structure is put forward that combines decentralization with centralization, using partitions of the constraint graph and the notions of core nodes and the main communication road. An asynchronous, distributed DCOP algorithm in this organization structure can improve execution efficiency and adaptation to dynamics. Moreover, it can unify the DCOP solving methods that deal with one variable per agent and with multiple variables per agent.
Algorithms of Mining Global Maximum Frequent Itemsets Based on FP-Tree
Wang Liming and Zhao Hui
2007, 44(3).
Mining maximum frequent itemsets is a key problem in the data mining field, with numerous important applications. Existing algorithms need to scan the database many times to update the set of maximum frequent itemsets and are based on local databases; algorithms for mining global maximum frequent itemsets are very few. Therefore, an algorithm for mining global maximum frequent itemsets is proposed, which conveniently obtains all global maximum frequent itemsets using the FP-tree structure in a single mining pass, with very simple and fast superset checking. The FP-tree structure provides a convenient depth-first mining method. The algorithm combines the FP-tree with restrained sub-trees to mine global maximum frequent itemsets, and adopts an efficient distributed PDDM algorithm for broadcasting itemset information, improving scalability and concurrency. The PDDM algorithm is based on the previous DDM algorithm and improves on the I/O and communication problems of previous distributed algorithms. Experimental results testify to the feasibility and effectiveness of the algorithm.
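What "maximal frequent itemset" and "superset checking" mean can be pinned down with a brute-force reference implementation. This enumerates all frequent itemsets level by level and keeps those with no frequent proper superset; it is exponential in the number of items, which is precisely the cost the paper's FP-tree approach avoids.

```python
from itertools import combinations

def maximal_frequent_itemsets(transactions, min_support):
    """Brute-force reference: a maximal frequent itemset is a frequent
    itemset none of whose proper supersets is frequent."""
    items = sorted({i for t in transactions for i in t})
    frequent = []
    for k in range(1, len(items) + 1):
        level = [set(c) for c in combinations(items, k)
                 if sum(set(c) <= t for t in transactions) >= min_support]
        if not level:          # no frequent k-itemsets -> none larger either
            break
        frequent.extend(level)
    # Superset check: drop any frequent itemset with a frequent superset.
    return [s for s in frequent if not any(s < t for t in frequent)]

txns = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
maximal = maximal_frequent_itemsets(txns, min_support=2)
```

Here every pair occurs in 2 transactions but the triple {a, b, c} occurs in only 1, so the maximal frequent itemsets are exactly the three pairs.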
Reducing Gaussian Kernel's Local Risks by Global Kernel and Two-Stage Model Selection Based on Genetic Algorithms
Chang Qun, Wang Xiaolong, Lin Yimeng, Daniel S. Yeung, and Chen Qingcai
2007, 44(3).
In classification by support vector machines with the Gaussian kernel, the kernel width defines the generalization scale in the pattern space or in the feature space. However, a Gaussian kernel with constant width is not well adapted everywhere in the pattern space, since the patterns are not evenly distributed: over-fitting appears in the dense areas and under-fitting in the sparse areas. To reduce such local risks, a secondary kernel with global character is introduced alongside the Gaussian kernel, which is regarded as the primary kernel. The constructed hybrid kernel is called the primary-secondary kernel (PSK). The positive definiteness of PSK under given constraints is proved by means of power series. For support vector machines with PSK, a two-stage model selection based on genetic algorithms is proposed to tune the model parameters: the algorithm first tunes the parameters of the Gaussian kernel; then, keeping those parameters fixed, it tunes the parameters of the secondary kernel. The two-stage model selection aims to overcome the optimization-tendency problem embodied in optimization algorithms, which, for support vector machines with multiple parameters, often causes model selection to fail. Finally, experiments demonstrate that PSK performs better than the Gaussian kernel and validate the efficiency of the proposed model selection algorithms.
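The local-plus-global idea can be sketched with a hybrid kernel. In the toy below, the Gaussian part is local and a polynomial part supplies global coupling; the mixing weight `rho`, the choice of polynomial secondary kernel, and the convex-combination form are illustrative assumptions, not the paper's exact PSK construction.

```python
import math

def gaussian_kernel(x, y, width=1.0):
    """Primary kernel: local, its width sets the generalization scale."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * width ** 2))

def psk(x, y, width=1.0, rho=0.1, degree=2):
    """Illustrative primary-secondary kernel: (1 - rho) * Gaussian (local)
    + rho * polynomial (global). NOTE: rho, the polynomial form, and the
    combination are assumptions for illustration only."""
    poly = (sum(a * b for a, b in zip(x, y)) + 1.0) ** degree
    return (1 - rho) * gaussian_kernel(x, y, width) + rho * poly

k_near = psk((0.0, 0.0), (0.1, 0.0))   # nearby pair: dominated by the Gaussian
k_far = psk((0.0, 0.0), (3.0, 0.0))    # distant pair: Gaussian ~0, but the
                                       # global part keeps the similarity nonzero
```

The point of the global term is visible in the numbers: for the distant pair the pure Gaussian similarity has essentially vanished, while the hybrid kernel retains the secondary kernel's contribution.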
Study of Spline-Curves with Shape Parameters
Liu Xumin, Huang Houkuan, Wang Liuqiang, and Ma Sujing
2007, 44(3).
Modifying curve shapes by adjusting shape parameters is a topic of great significance in computer-aided geometric design. In order to adjust the shape of a curve effectively by using shape parameters and to enhance its flexibility, the representations and properties of five types of B-spline curves with shape parameters are studied. For these curves, how closely they approach their control polygon can be adjusted by changing the value of the shape parameters, and curves with different continuity can be obtained. The effects of the shape parameters on the curve shapes are analyzed, the valid range of the shape parameters is presented, and the characteristics of each modeling method are compared. Through formula derivations, experiments, and illustrated examples, some new approaches to representing free-form curves by means of different shape parameter values are also worked out. Experiments show that the C-B spline curve, the uniform B-spline with shape parameter, the hyperbolic polynomial uniform B-spline with shape parameter, and the trigonometric polynomial uniform B-spline with shape parameter can produce some frequently used free-form curves in industry when the shape parameter takes certain specific values; this process is simpler than producing these free-form curves by adjusting control vertices.