Textual entailment recognition aims to automatically determine whether an entailment relationship holds between a given premise and hypothesis (usually two sentences). It is a fundamental and challenging task in natural language processing. Current dominant models, which are based on deep learning, usually encode the semantic representations of the two sentences separately rather than treating them as a whole. Moreover, most of them do not exploit both sentence-level global information and n-gram-level local information when capturing the semantic relationship. The recently proposed S-LSTM can learn semantic representations of a sentence and its n-grams simultaneously, achieving promising performance on tasks such as text classification. Motivated by these observations, a model based on an extended S-LSTM is proposed for textual entailment recognition. On the one hand, S-LSTM is extended to learn semantic representations of the premise and hypothesis simultaneously, treating them as a whole. On the other hand, to obtain better semantic representations, both sentence-level and n-gram-level information are used to capture the semantic relationship. Experimental results on the English SNLI dataset and the Chinese CNLI dataset show that the proposed model outperforms the baselines.
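To make the core idea concrete, the following is a minimal NumPy sketch of the kind of joint update described above, not the paper's actual model: the premise and hypothesis tokens are concatenated and encoded as a whole, each word state is refreshed from its local window (n-gram-level information) and a shared sentence node (sentence-level information), and the sentence node then re-aggregates all word states. The hidden size, window radius, number of message-passing steps, and the random projection matrices standing in for learned parameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8        # toy hidden size (assumption, not from the paper)
STEPS = 3    # rounds of message passing
WIN = 1      # n-gram window radius: uses +/-1 neighbours

# Random projections standing in for learned parameters (hypothetical).
Wl, Wg, Wx = (rng.normal(scale=0.1, size=(D, D)) for _ in range(3))

def slstm_encode(embeddings):
    """S-LSTM-style joint pass over one token sequence.

    Each word state is updated from its local window (n-gram-level
    context) and the global sentence node (sentence-level context);
    the sentence node then aggregates all word states.
    """
    h = embeddings.copy()        # word states, shape (n, D)
    g = h.mean(axis=0)           # global sentence state
    n = len(h)
    for _ in range(STEPS):
        new_h = np.empty_like(h)
        for i in range(n):
            lo, hi = max(0, i - WIN), min(n, i + WIN + 1)
            local = h[lo:hi].mean(axis=0)   # n-gram-level context
            new_h[i] = np.tanh(local @ Wl + g @ Wg + embeddings[i] @ Wx)
        h = new_h
        g = h.mean(axis=0)                  # sentence-level update
    return h, g

# Premise and hypothesis concatenated so they are encoded "as a whole".
premise = rng.normal(size=(5, D))
hypothesis = rng.normal(size=(4, D))
h, g = slstm_encode(np.vstack([premise, hypothesis]))
print(h.shape, g.shape)  # (9, 8) (8,)
```

In this sketch the final global state `g`, or a combination of `g` with pooled word states, would feed a classifier that predicts entailment, contradiction, or neutral; the real model additionally uses gated (LSTM-style) updates rather than a plain `tanh` mix.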