In many real tasks, unlabeled data are abundant while labeled data are scarce, and semi-supervised learning has therefore attracted significant attention in the past few years. Disagreement-based approaches form a state-of-the-art paradigm of semi-supervised learning in which multiple classifiers are generated and label unlabeled instances for each other. Co-training is the seminal work in this category. However, during the labeling process, most current co-training style approaches consider only the confidence of the predictor, not whether the labeled instances are actually helpful to the learner. In this paper, inspired by real-world teaching-learning systems, we propose a teaching-learning model named "TaLe" for co-training, in which the predictor is regarded as a teacher who teaches and the other classifier as a student who learns. Based on this model, a new co-training variant named CoSnT is presented that considers both the teacher's confidence and the student's need. Intuitively, this can improve the convergence efficiency of co-training. Experiments on both multi-view and single-view data sets show that CoSnT converges more efficiently than, and often outperforms, both the standard co-training algorithm CoTrain, which considers only the teacher's confidence, and the CoS algorithm, which considers only the student's need.
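The core selection idea can be illustrated with a minimal sketch. This is an assumed formulation, not the paper's exact algorithm: the function names, the probability-list interface, and the product scoring rule (teacher confidence times student uncertainty) are all hypothetical choices made for illustration.

```python
# Illustrative sketch of selecting an unlabeled instance by combining
# teacher confidence with student need. The scoring rule here is an
# assumption, not the paper's exact CoSnT formulation.

def teacher_confidence(probs):
    # Teacher's confidence: its highest class probability for this instance.
    return max(probs)

def student_need(probs):
    # Student's need: how uncertain the student is (1 minus its top probability).
    return 1.0 - max(probs)

def select_instance(teacher_probs, student_probs):
    """Pick the unlabeled instance that the teacher can label confidently
    AND that the student is most uncertain about."""
    scores = [teacher_confidence(t) * student_need(s)
              for t, s in zip(teacher_probs, student_probs)]
    return max(range(len(scores)), key=scores.__getitem__)

# Three unlabeled instances, binary classification.
teacher = [[0.95, 0.05], [0.60, 0.40], [0.90, 0.10]]
student = [[0.50, 0.50], [0.55, 0.45], [0.92, 0.08]]
print(select_instance(teacher, student))  # prints 0: teacher is sure, student is not
```

Instance 0 wins because the teacher labels it confidently while the student is still uncertain about it; instance 2, although the teacher is equally confident, adds little because the student already classifies it confidently.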