When a neural network is applied to optimization, the ideal situation is that the network has a unique equilibrium point which is globally asymptotically stable, so that every trajectory converges to it. This paper investigates the global asymptotic stability of recurrent neural networks with time-varying delay. By transforming the delayed neural model into the descriptor form and then employing the Lyapunov-Krasovskii stability theorem, the linear matrix inequality (LMI) technique, the S-procedure, and algebraic inequality methods, a new sufficient condition is derived; the condition is determined by the coefficients of the model and contains more tuning parameters for establishing the global asymptotic stability of recurrent neural networks with time-varying delay. Because the condition can be recast as a set of linear matrix inequalities, it is easily verified numerically by interior-point algorithms for semidefinite programming. The proposed result is further applied to two special cases: a cellular neural network model with time delay and recurrent neural networks with constant delays. Theoretical analysis and computer simulations show that the presented results provide several new sufficient conditions for the asymptotic stability of the investigated delayed neural network model.
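To illustrate the kind of numerical verification described above, the following is a minimal sketch (not the paper's actual LMI condition) of checking a Lyapunov-type stability condition for a delay-free linear system x' = Ax: one solves the Lyapunov equation A^T P + P A = -I and tests whether the resulting P is positive definite. The matrix A below is a hypothetical example; SciPy's Lyapunov solver stands in for a full interior-point LMI solver.

```python
# Sketch: numerically certifying asymptotic stability of x' = A x
# via the Lyapunov equation A^T P + P A = -I, with P > 0 as the test.
# A is a hypothetical Hurwitz matrix chosen for illustration only.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])  # eigenvalues -2, -3: Hurwitz

# solve_continuous_lyapunov(M, Q) solves M X + X M^T = Q,
# so passing A.T and -I yields A^T P + P A = -I.
P = solve_continuous_lyapunov(A.T, -np.eye(2))

# Symmetrize to remove round-off asymmetry, then check P > 0.
P = (P + P.T) / 2
is_stable = bool(np.all(np.linalg.eigvalsh(P) > 0))
print(is_stable)
```

In the delayed setting treated in the paper, the scalar positivity test is replaced by feasibility of a set of coupled LMIs in several unknown matrices, which dedicated semidefinite-programming solvers handle in the same spirit.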