The problem of algorithmic fairness has a long history and has been continually reshaped by social change. With the acceleration of digital transformation, the root cause of algorithmic unfairness has gradually shifted from social bias to data bias and model bias, while algorithmic exploitation has become more hidden and far-reaching. Although many fields of social science have long studied fairness, most of their work remains at the level of qualitative description. As an intersection of computer science and social science, algorithmic fairness under digital transformation should not only inherit the basic theories of the social sciences but also provide methods and capabilities for fairness computing. We therefore start from the definition of algorithmic fairness and survey existing fairness computing methods along three dimensions: social bias, data bias, and model bias. Finally, we compare fairness indicators and methods experimentally, and analyze the challenges of algorithmic fairness computing. Our experiments show a trade-off between the fairness and accuracy of the original models, but a consistent relationship between the fairness and accuracy of fairness methods. Regarding fairness indicators, the correlations between different indicators vary significantly, underscoring the importance of using diverse indicators. Regarding fairness methods, a single method has limited effect, underscoring the importance of exploring combinations of fairness methods.
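To make the notion of a "fairness indicator" concrete, the sketch below computes two metrics that are standard in the fairness literature: statistical parity difference and equal opportunity difference. The abstract does not specify which indicators the paper's experiments use, so the choice of these two metrics, and all function names and toy data here, are illustrative assumptions rather than the paper's actual setup.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    # Illustrative metric: P(y_hat=1 | g=1) - P(y_hat=1 | g=0).
    # A value near 0 means both groups receive positive predictions
    # at similar rates.
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    # Illustrative metric: difference in true positive rates,
    # P(y_hat=1 | y=1, g=1) - P(y_hat=1 | y=1, g=0).
    pos = y_true == 1
    return (y_pred[pos & (group == 1)].mean()
            - y_pred[pos & (group == 0)].mean())

# Toy data (hypothetical): binary predictions and a binary
# protected attribute `group`.
y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(statistical_parity_difference(y_pred, group))        # 0.0
print(equal_opportunity_difference(y_true, y_pred, group))  # ~0.333
```

Even on this toy example the two indicators disagree (parity looks perfect while opportunity does not), which illustrates why correlations between fairness indicators can differ and why the paper argues for evaluating with a diverse set of them.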