Abstract:
In recent years, many research efforts have sought to combine data-driven machine learning with knowledge-driven logical reasoning in order to improve the performance of machine learning systems. A number of works use abductive reasoning to integrate machine learning and logical reasoning into a unified framework: a machine learning model first generates pseudo-labels, abductive reasoning then revises the pseudo-labels that are inconsistent with the knowledge base, and the revised labels are used to update the model, with this routine repeating. However, the abduced labels may themselves contain errors, which are difficult to detect and can harm the training of the machine learning model. We propose abductive learning with rejection reasoning, a method that accounts for both the model uncertainty and the reasoning uncertainty of the abduced labels. This strategy evaluates the reliability of abductive results at both the data level and the knowledge level, and avoids the negative impact of unreliable abduced labels on model training by rejecting some of the abductive reasoning results. Empirical studies show that the proposed method reduces the proportion of incorrect abduced labels, which in turn speeds up the training process of abductive learning and improves its overall performance.
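The revise-then-reject routine described above can be sketched in a toy form. The code below is an illustrative sketch, not the paper's algorithm: labels are binary, the knowledge base is a single consistency predicate, and the thresholds `model_thresh` and `margin_thresh` (proxies for model and reasoning uncertainty, respectively) are hypothetical names introduced here.

```python
import itertools

def abduce(pseudo, scores, consistent):
    """Enumerate label sequences consistent with the knowledge base
    (the `consistent` predicate), ordered so that revisions the model
    itself scores highest come first."""
    n = len(pseudo)
    candidates = [seq for seq in itertools.product((0, 1), repeat=n)
                  if consistent(seq)]
    total = lambda seq: sum(scores[i][l] for i, l in enumerate(seq))
    candidates.sort(key=total, reverse=True)
    return candidates

def abduce_with_rejection(pseudo, scores, consistent,
                          model_thresh=0.6, margin_thresh=0.1):
    """Return abduced labels, or None when either uncertainty test fails:
    - reasoning uncertainty: the two best consistent revisions are nearly
      tied, so the abduction is ambiguous;
    - model uncertainty: a revised position has low model confidence."""
    cands = abduce(pseudo, scores, consistent)
    if not cands:
        return None
    total = lambda seq: sum(scores[i][l] for i, l in enumerate(seq))
    # reasoning uncertainty: runner-up almost as good as the best revision
    if len(cands) > 1 and total(cands[0]) - total(cands[1]) < margin_thresh:
        return None
    # model uncertainty: every revised label must still be plausible
    best = cands[0]
    for i, label in enumerate(best):
        if label != pseudo[i] and scores[i][label] < model_thresh:
            return None
    return list(best)
```

For example, with a parity constraint (an even number of 1-labels) as the knowledge base, an inconsistent pseudo-labeling `[1, 0, 0]` is revised to `[1, 0, 1]` when the model assigns the flipped position high probability, and rejected when that probability is low.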