Abstract:
Nonlinear dimensionality reduction is a challenging problem encountered in many areas of high-dimensional data analysis, including machine learning, pattern recognition, scientific visualization, and neural computation. Based on the notion of local isometry, maximum variance unfolding (MVU) was recently proposed for nonlinear dimensionality reduction. By pulling the input patterns as far apart as possible subject to strict local distance-preserving constraints, MVU can learn a faithful low-dimensional structure of a manifold embedded in a high-dimensional space. However, the final optimization of MVU requires solving a semidefinite programming (SDP) problem, whose huge computational and memory complexity makes MVU infeasible for large-scale applications. To address this problem, a fast variant of MVU, called relaxed MVU (RMVU), is proposed. In RMVU, inspired by the approximate local-structure-preservation idea of Laplacian eigenmaps, the strict local distance-preserving constraints of MVU are relaxed. The optimization then reduces to a generalized eigen-decomposition problem, and the computational cost is significantly lowered. To handle even larger data sets, an improved RMVU, called landmark-based relaxed MVU (LRMVU), is also proposed. Theoretical analysis and experiments on an artificial data set and real images show that the proposed RMVU and LRMVU algorithms greatly reduce the computational and memory complexity of MVU.