Relation extraction (RE) is a fundamental task in natural language processing that supports many downstream applications, such as dialogue generation and machine reading comprehension. In practice, new relation labels emerge continuously, so the speed and cost of human annotation cannot keep pace with the quantity of labeled data that training traditional supervised RE models demands. Facing this practical challenge, the neural snowball model proposes a bootstrapping method that transfers RE knowledge from a small set of labeled instances to iteratively annotate unlabeled data, increasing the amount of labeled data and thereby improving the model's classification performance. However, its fixed selection threshold and its equal treatment of all selected unlabeled data make the neural snowball model vulnerable to noisy data. To address these two defects, an adaptive self-training relation extraction (Ada-SRE) model is proposed. Specifically, for the fixed-threshold issue, Ada-SRE introduces an adaptive-threshold module that meta-learns thresholds, providing an appropriate threshold for each relation category. For the equal-treatment issue, Ada-SRE designs a gradient-feedback strategy that weights each selected example, avoiding interference from noisy data. Experimental results show that, compared with the neural snowball model, Ada-SRE achieves better relation extraction performance.
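The two mechanisms described above, per-relation thresholds and per-example weights for self-training, can be illustrated with a minimal sketch. The function name, the NumPy interface, and the margin-based weighting below are illustrative assumptions: in the actual model the thresholds come from meta-learning and the weights from the gradient-feedback strategy, neither of which is shown here.

```python
import numpy as np

def select_and_weight(confidences, relation_ids, thresholds):
    """Select unlabeled examples whose classifier confidence exceeds the
    per-relation threshold, and weight each selected example by how far
    its confidence lies above that threshold.

    confidences  : (N,) predicted confidence for each candidate example
    relation_ids : (N,) index of the predicted relation for each example
    thresholds   : (R,) one adaptive threshold per relation category
    """
    # Look up the threshold that applies to each example's predicted relation.
    per_example_threshold = thresholds[relation_ids]
    selected = confidences > per_example_threshold
    # Normalized margin above the threshold serves as the example weight;
    # unselected examples receive weight 0 and do not affect training.
    weights = np.clip(
        (confidences - per_example_threshold)
        / (1.0 - per_example_threshold + 1e-8),
        0.0, 1.0,
    )
    weights = np.where(selected, weights, 0.0)
    return selected, weights
```

A borderline example (confidence just above its relation's threshold) thus contributes only a small weighted loss, while a confident example contributes nearly full weight, which is the intuition behind down-weighting likely-noisy selections.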