Overview of the Frontier Progress of Causal Machine Learning
-
Abstract: Machine learning is one of the key technical means of realizing artificial intelligence, with important applications in computer vision, natural language processing, search engines, and recommendation systems. Existing machine learning methods tend to focus on correlations in the data while ignoring the underlying causal relationships, and as application requirements grow, the drawbacks of this practice have become increasingly apparent, giving rise to a series of pressing problems in interpretability, transferability, robustness, and fairness. To address these problems, researchers have begun to re-examine the necessity of modeling causal relationships, and related methods have become one of the recent research hotspots. We organize and summarize recent work that applies causal techniques and ideas to practical problems in machine learning, and trace the development of this emerging research direction. First, we briefly introduce the causal theory most closely related to machine learning. Then, we categorize existing work according to the different problem requirements in machine learning and explain their differences and connections from the perspective of solution ideas and technical means. Finally, we summarize the current state of causal machine learning and offer predictions and prospects for future development trends.
-
Table 1 Application of Causal Methods on Interpretability Problems
Table 2 Application of Causal Methods on Transferability Problems
Table 3 Application of Causal Methods on Robustness Problems
Category | Subcategory | Typical Ideas and Methods
Counterfactual data augmentation | Counterfactuals on spuriously correlated features | Construct additional training data by editing inputs while keeping the prediction unchanged [88-93]
Counterfactual data augmentation | Counterfactuals on causal features | Construct additional training data by changing key causal features and revising the prediction accordingly [92-95]
Causal effect calibration | Backdoor adjustment | Identify confounders from knowledge of the problem, estimate them, and remove their influence [96-99]
Causal effect calibration | Mediation analysis | Identify mediator variables from knowledge of the problem, estimate them, and remove their influence [97,100-102]
Invariance learning | Stable learning | Treat each feature as a treatment variable and eliminate confounding by sample reweighting to identify causal features [103-107]
Invariance learning | Invariant causal prediction | Use hypothesis testing over multi-environment training data to determine the set of causal features [108-110]
Invariance learning | Invariant risk minimization | Add cross-environment invariance constraints to the training objective over multi-environment data to learn causal features [111-113]
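To make two rows of Table 3 concrete: the backdoor-adjustment entry amounts to estimating the interventional distribution P(Y | do(X = x)) = Σ_z P(Y | X = x, Z = z) P(Z = z) over an identified confounder set Z satisfying the backdoor criterion, and the invariant-risk-minimization entry augments the per-environment risk with a penalty that vanishes only when a single classifier is simultaneously optimal in every training environment. The sketch below shows a minimal IRMv1-style objective in the spirit of [111]; it assumes PyTorch, and the function names, the toy binary-classification setup, and the fixed penalty weight are illustrative choices rather than details taken from the surveyed papers.

```python
# Minimal IRMv1-style objective sketch (invariance learning row of Table 3).
# Assumes PyTorch; names such as irm_penalty / irm_objective / envs are illustrative.
import torch
import torch.nn.functional as F


def irm_penalty(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Squared gradient of the environment risk w.r.t. a dummy classifier scale of 1.0.
    # It vanishes only if the shared classifier is already optimal for this environment.
    scale = torch.tensor(1.0, requires_grad=True)
    risk = F.binary_cross_entropy_with_logits(logits * scale, labels)
    grad = torch.autograd.grad(risk, [scale], create_graph=True)[0]
    return grad.pow(2).sum()


def irm_objective(model: torch.nn.Module, envs, lam: float = 1.0) -> torch.Tensor:
    # envs: list of (inputs, labels) pairs, one per training environment;
    # labels are float tensors of 0/1 for the binary cross-entropy loss.
    risk, penalty = 0.0, 0.0
    for x, y in envs:
        logits = model(x).squeeze(-1)
        risk = risk + F.binary_cross_entropy_with_logits(logits, y)
        penalty = penalty + irm_penalty(logits, y)
    # Average environment risk plus the cross-environment invariance penalty.
    return (risk + lam * penalty) / len(envs)
```

In published implementations the penalty weight is typically kept small during a warm-up phase and increased afterwards; the constant lam above is only a placeholder for such a schedule.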
Table 4 Application of Causal Methods on Fairness Problems
-
[1] LeCun Y, Bengio Y, Hinton G. Deep learning[J]. Nature, 2015, 521(7553): 436−444
[2] He Kaiming, Zhang Xiangyu, Ren Shaoqing, et al. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification[C] //Proc of the IEEE Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2015: 1026−1034
[3] Brock A, Donahue J, Simonyan K. Large scale GAN training for high fidelity natural image synthesis[C/OL] //Proc of the 7th Int Conf on Learning Representations. 2019 [2021-11-03]. https://openreview.net/pdf?id=B1xsqj09Fm
[4] Brown T B, Mann B, Ryder N, et al. Language models are few-shot learners[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 1877−1901
[5] Silver D, Huang A, Maddison C J, et al. Mastering the game of Go with deep neural networks and tree search[J]. Nature, 2016, 529(7587): 484−489
[6] Senior A W, Evans R, Jumper J, et al. Improved protein structure prediction using potentials from deep learning[J]. Nature, 2020, 577(7792): 706−710
[7] Gunning D, Aha D. DARPA’s explainable artificial intelligence (XAI) program[J]. AI Magazine, 2019, 40(2): 44−58
[8] Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks[C/OL] //Proc of the 2nd Int Conf on Learning Representations. 2014 [2021-11-03]. https://arxiv.org/abs/1312.6199
[9] Barocas S, Hardt M, Narayanan A. Fairness in machine learning [EB/OL]. 2017 [2021-11-13]. https://fairmlbook.org/pdf/fairmlbook.pdf
[10] Pearl J, Mackenzie D. The Book of Why: The New Science of Cause and Effect[M]. New York: Basic Books, 2018
[11] Pearl J. Theoretical impediments to machine learning with seven sparks from the causal revolution[J]. arXiv preprint, arXiv: 1801.04016, 2018
[12] Miao Wang, Liu Chunchen, Geng Zhi. Statistical approaches for causal inference[J]. SCIENTIA SINICA Mathematica, 2018, 48(12): 1753−1778 (in Chinese)
[13] Guo Ruocheng, Cheng Lu, Li Jundong, et al. A survey of learning causality with data: Problems and methods[J]. ACM Computing Surveys, 2020, 53(4): 1−37
[14] Kuang Kun, Li Lian, Geng Zhi, et al. Causal inference [J]. Engineering, 2020, 6(3): 253−263
[15] Schölkopf B. Causality for machine learning [J]. arXiv preprint, arXiv: 1911.10500, 2019
[16] Schölkopf B, Locatello F, Bauer S, et al. Toward causal representation learning[J]. Proceedings of the IEEE, 2021, 109(5): 612−634
[17] Splawa-Neyman J, Dabrowska D M, Speed T P. On the application of probability theory to agricultural experiments. Essay on principles. Section 9[J]. Statistical Science, 1990, 5(4): 465−472
[18] Rubin D B. Estimating causal effects of treatments in randomized and nonrandomized studies[J]. Journal of Educational Psychology, 1974, 66(5): 688−701
[19] Pearl J. Causality[M]. Cambridge, UK: Cambridge University Press, 2009
[20] Granger C W J. Investigating causal relations by econometric models and cross-spectral methods[J]. Econometrica, 1969, 37(3): 424−438
[21] Rubin D B. Randomization analysis of experimental data: The Fisher randomization test comment[J]. Journal of the American Statistical Association, 1980, 75(371): 591−593
[22] Rosenbaum P R, Rubin D B. The central role of the propensity score in observational studies for causal effects[J]. Biometrika, 1983, 70(1): 41−55
[23] Hirano K, Imbens G W, Ridder G. Efficient estimation of average treatment effects using the estimated propensity score[J]. Econometrica, 2003, 71(4): 1161−1189
[24] Robins J M, Rotnitzky A, Zhao Lueping. Estimation of regression coefficients when some regressors are not always observed[J]. Journal of the American Statistical Association, 1994, 89(427): 846−866
[25] Dudík M, Langford J, Li Lihong. Doubly robust policy evaluation and learning[C] //Proc of the 28th Int Conf on Machine Learning. Madison, WI: Omnipress, 2011: 1097−1104
[26] Kuang Kun, Cui Peng, Li Bo, et al. Estimating treatment effect in the wild via differentiated confounder balancing[C] //Proc of the 23rd ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining. New York: ACM, 2017: 265−274
[27] Imbens G W, Rubin D B. Causal Inference in Statistics, Social, and Biomedical Sciences[M]. Cambridge, UK: Cambridge University Press, 2015
[28] Yao Liuyi, Chu Zhixuan, Li Sheng, et al. A survey on causal inference [J]. arXiv preprint, arXiv: 2002.02770, 2020
[29] Pearl J. Causal diagrams for empirical research[J]. Biometrika, 1995, 82(4): 669−688
[30] Spirtes P, Glymour C. An algorithm for fast recovery of sparse causal graphs[J]. Social Science Computer Review, 1991, 9(1): 62−72
[31] Verma T, Pearl J. Equivalence and synthesis of causal models[C] //Proc of the 6th Annual Conf on Uncertainty in Artificial Intelligence. Amsterdam: Elsevier, 1990: 255−270
[32] Spirtes P, Glymour C N, Scheines R, et al. Causation, Prediction, and Search[M]. Cambridge, MA: MIT Press, 2000
[33] Schwarz G. Estimating the dimension of a model[J]. The Annals of Statistics, 1978, 6(2): 461−464
[34] Chickering D M. Optimal structure identification with greedy search[J]. Journal of Machine Learning Research, 2002, 3(Nov): 507−554
[35] Shimizu S, Hoyer P O, Hyvärinen A, et al. A linear non-Gaussian acyclic model for causal discovery[J]. Journal of Machine Learning Research, 2006, 7(10): 2003−2030
[36] Zhang Kun, Hyvärinen A. On the identifiability of the post-nonlinear causal model[C] //Proc of the 25th Conf on Uncertainty in Artificial Intelligence. Arlington, VA: AUAI Press, 2009: 647−655
[37] Pearl J. Direct and indirect effects[C] //Proc of the 17th Conf on Uncertainty in Artificial Intelligence. San Francisco, CA: Morgan Kaufmann Publishers Inc, 2001: 411−420
[38] VanderWeele T. Explanation in Causal Inference: Methods for Mediation and Interaction[M]. Oxford, UK: Oxford University Press, 2015
[39] Chen Kerui, Meng Xiaofeng. Interpretation and understanding in machine learning[J]. Journal of Computer Research and Development, 2020, 57(9): 1971−1986 (in Chinese) doi: 10.7544/issn1000-1239.2020.20190456
[40] Ribeiro M T, Singh S, Guestrin C. “Why should I trust you?” Explaining the predictions of any classifier[C] //Proc of the 22nd ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining. New York: ACM, 2016: 1135−1144
[41] Selvaraju R R, Cogswell M, Das A, et al. Grad-CAM: Visual explanations from deep networks via gradient-based localization[C] //Proc of the IEEE Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2017: 618−626
[42] Sundararajan M, Taly A, Yan Qiqi. Axiomatic attribution for deep networks[C] //Proc of the 34th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2017: 3319−3328
[43] Lundberg S M, Lee S I. A unified approach to interpreting model predictions[C] //Proc of the 31st Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2017: 4765−4774
[44] Alvarez-Melis D, Jaakkola T. A causal framework for explaining the predictions of black-box sequence-to-sequence models[C] //Proc of the 2017 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2017: 412−421
[45] Schwab P, Karlen W. CXPlain: Causal explanations for model interpretation under uncertainty[C] //Proc of the 33rd Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2019: 10220−10230
[46] Chattopadhyay A, Manupriya P, Sarkar A, et al. Neural network attributions: A causal perspective[C] //Proc of the 36th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2019: 981−990
[47] Frye C, Rowat C, Feige I. Asymmetric Shapley values: Incorporating causal knowledge into model-agnostic explainability[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 1229−1239
[48] Heskes T, Sijben E, Bucur I G, et al. Causal Shapley values: Exploiting causal knowledge to explain individual predictions of complex models [C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 4778−4789
[49] Goyal Y, Wu Ziyan, Ernst J, et al. Counterfactual visual explanations[C] //Proc of the 36th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2019: 2376−2384
[50] Wang Pei, Vasconcelos N. SCOUT: Self-aware discriminant counterfactual explanations[C] //Proc of the 33rd IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 8981−8990
[51] Hendricks L A, Hu Ronghang, Darrell T, et al. Generating counterfactual explanations with natural language[J]. arXiv preprint, arXiv: 1806.09809, 2018
[52] Chang Chunhao, Creager E, Goldenberg A, et al. Explaining image classifiers by counterfactual generation[C/OL] //Proc of the 7th Int Conf on Learning Representations, 2019 [2021-11-03]. https://openreview.net/pdf?id=B1MXz20cYQ
[53] Kanehira A, Takemoto K, Inayoshi S, et al. Multimodal explanations by predicting counterfactuality in videos[C] //Proc of the 32nd IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 8594−8602
[54] Akula A R, Wang Shuai, Zhu Songchun. CoCoX: Generating conceptual and counterfactual explanations via fault-lines[C] //Proc of the 34th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020: 2594−2601
[55] Madumal P, Miller T, Sonenberg L, et al. Explainable reinforcement learning through a causal lens[C] //Proc of the 34th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020: 2493−2500
[56] Mothilal R K, Sharma A, Tan C. Explaining machine learning classifiers through diverse counterfactual explanations[C] //Proc of the 2020 Conf on Fairness, Accountability, and Transparency. New York: ACM, 2020: 607−617
[57] Albini E, Rago A, Baroni P, et al. Relation-based counterfactual explanations for Bayesian network classifiers[C] //Proc of the 29th Int Joint Conf on Artificial Intelligence. Red Hook, NY: Curran Associates Inc, 2020: 451−457
[58] Kenny E M, Keane M T. On generating plausible counterfactual and semi-factual explanations for deep learning[C] //Proc of the 35th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2021: 11575−11585
[59] Abrate C, Bonchi F. Counterfactual graphs for explainable classification of brain networks[J]. arXiv preprint, arXiv: 2106.08640, 2021
[60] Yang Fan, Alva S S, Chen Jiahao, et al. Model-based counterfactual synthesizer for interpretation[J]. arXiv preprint, arXiv: 2106.08971, 2021
[61] Parmentier A, Vidal T. Optimal counterfactual explanations in tree ensembles[J]. arXiv preprint, arXiv: 2106.06631, 2021
[62] Besserve M, Mehrjou A, Sun R, et al. Counterfactuals uncover the modular structure of deep generative models[C/OL] //Proc of the 8th Int Conf on Learning Representations. 2020 [2021-11-03]. https://openreview.net/pdf?id=SJxDDpEKvH
[63] Kanamori K, Takagi T, Kobayashi K, et al. DACE: Distribution-aware counterfactual explanation by mixed-integer linear optimization[C] //Proc of the 29th Int Joint Conf on Artificial Intelligence. Red Hook, NY: Curran Associates Inc, 2020: 2855−2862
[64] Kanamori K, Takagi T, Kobayashi K, et al. Ordered counterfactual explanation by mixed-integer linear optimization[C] //Proc of the 35th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2021: 11564−11574
[65] Tsirtsis S, Gomez-Rodriguez M. Decisions, counterfactual explanations and strategic behavior[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 16749−16760
[66] Karimi A H, von Kügelgen J, Schölkopf B, et al. Algorithmic recourse under imperfect causal knowledge: A probabilistic approach[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 265−277
[67] Rojas-Carulla M, Schölkopf B, Turner R, et al. Invariant models for causal transfer learning[J]. The Journal of Machine Learning Research, 2018, 19(1): 1309−1342
[68] Guo Jiaxian, Gong Mingming, Liu Tongliang, et al. LTF: A label transformation framework for correcting target shift[C] //Proc of the 37th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2020: 3843−3853
[69] Cai Ruichu, Li Zijian, Wei Pengfei, et al. Learning disentangled semantic representation for domain adaptation[C] //Proc of the 28th Int Joint Conf on Artificial Intelligence. Red Hook, NY: Curran Associates Inc, 2019: 2060−2066
[70] Zhang Kun, Schölkopf B, Muandet K, et al. Domain adaptation under target and conditional shift[C] //Proc of the 30th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2013: 819−827
[71] Gong Mingming, Zhang Kun, Liu Tongliang, et al. Domain adaptation with conditional transferable components[C] //Proc of the 33rd Int Conf on Machine Learning. Cambridge, MA: JMLR, 2016: 2839−2848
[72] Teshima T, Sato I, Sugiyama M. Few-shot domain adaptation by causal mechanism transfer[C] //Proc of the 37th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2020: 9458−9469
[73] Edmonds M, Ma Xiaojian, Qi Siyuan, et al. Theory-based causal transfer: Integrating instance-level induction and abstract-level structure learning[C] //Proc of the 34th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020: 1283−1291
[74] Etesami J, Geiger P. Causal transfer for imitation learning and decision making under sensor-shift[C] //Proc of the 34th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020: 10118−10125
[75] Yue Zhongqi, Zhang Hanwang, Sun Qianru, et al. Interventional few-shot learning[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 2734−2746
[76] Zhang Kun, Gong Mingming, Stojanov P, et al. Domain adaptation as a problem of inference on graphical models[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 4965−4976
[77] Schölkopf B, Janzing D, Peters J, et al. On causal and anticausal learning[C] //Proc of the 29th Int Conf on Machine Learning. Madison, WI: Omnipress, 2012: 459−466
[78] Zhang Kun, Gong Mingming, Schölkopf B. Multi-source domain adaptation: A causal view[C] //Proc of the 29th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2015: 3150−3157
[79] Bagnell J A. Robust supervised learning[C] //Proc of the 20th National Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2005: 714−719
[80] Hu Weihua, Niu Gang, Sato I, et al. Does distributionally robust supervised learning give robust classifiers?[C] //Proc of the 35th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2018: 2029−2037
[81] Rahimian H, Mehrotra S. Distributionally robust optimization: A review[J]. arXiv preprint, arXiv: 1908.05659, 2019
[82] Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples[C/OL] //Proc of the 3rd Int Conf on Learning Representations. 2015 [2021-11-14]. https://arxiv.org/abs/1412.6572
[83] Xu Han, Ma Yao, Liu Haochen, et al. Adversarial attacks and defenses in images, graphs and text: A review[J]. International Journal of Automation and Computing, 2020, 17(2): 151−178
[84] Gururangan S, Swayamdipta S, Levy O, et al. Annotation artifacts in natural language inference data[C] //Proc of the 16th Conf of the North American Chapter of the ACL: Human Language Technologies, Vol 2. Stroudsburg, PA: ACL, 2018: 107−112
[85] Zhang Guanhua, Bai Bing, Liang Jian, et al. Selection bias explorations and debias methods for natural language sentence matching datasets[C] //Proc of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2019: 4418−4429
[86] Clark C, Yatskar M, Zettlemoyer L. Don’t take the easy way out: Ensemble based methods for avoiding known dataset biases[C] //Proc of the 2019 Conf on Empirical Methods in Natural Language Processing and the 9th Int Joint Conf on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg, PA: ACL, 2019: 4060−4073
[87] Cadene R, Dancette C, Cord M, et al. Rubi: Reducing unimodal biases for visual question answering[C] //Proc of the 33rd Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2019: 841−852
[88] Lu Kaiji, Mardziel P, Wu Fangjing, et al. Gender bias in neural natural language processing[G] //LNCS 12300: Logic, Language, and Security: Essays Dedicated to Andre Scedrov on the Occasion of His 65th Birthday. Berlin: Springer, 2020: 189−202
[89] Maudslay R H, Gonen H, Cotterell R, et al. It’s all in the name: Mitigating gender bias with name-based counterfactual data substitution[C] //Proc of the 2019 Conf on Empirical Methods in Natural Language Processing and the 9th Int Joint Conf on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg, PA: ACL, 2019: 5270−5278
[90] Zmigrod R, Mielke S J, Wallach H, et al. Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology[C] //Proc of the 57th Annual Meeting of the ACL. Stroudsburg, PA: ACL, 2019: 1651−1661
[91] Kaushik D, Hovy E, Lipton Z. Learning the difference that makes a difference with counterfactually-augmented data[C/OL] //Proc of the 8th Int Conf on Learning Representations. 2020 [2021-11-14]. https://openreview.net/pdf?id=Sklgs0NFvr
[92] Agarwal V, Shetty R, Fritz M. Towards causal VQA: Revealing and reducing spurious correlations by invariant and covariant semantic editing[C] //Proc of the 33rd IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 9690−9698
[93] Chang Chunhao, Adam G A, Goldenberg A. Towards robust classification model by counterfactual and invariant data generation[C] //Proc of the 34th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 15212−15221
[94] Wang Zhao, Culotta A. Robustness to spurious correlations in text classification via automatically generated counterfactuals[C] //Proc of the 35th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2021: 14024−14031
[95] Chen Long, Yan Xin, Xiao Jun, et al. Counterfactual samples synthesizing for robust visual question answering[C] //Proc of the 33rd IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 10800−10809
[96] Wu Yiquan, Kuang Kun, Zhang Yating, et al. De-biased court’s view generation with causality[C] //Proc of the 2020 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2020: 763−780
[97] Qi Jia, Niu Yulei, Huang Jianqiang, et al. Two causal principles for improving visual dialog[C] //Proc of the 33rd IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 10860−10869
[98] Wang Tan, Huang Jianqiang, Zhang Hanwang, et al. Visual commonsense R-CNN[C] //Proc of the 33rd IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 10760−10770
[99] Zhang Dong, Zhang Hanwang, Tang Jinhui, et al. Causal intervention for weakly-supervised semantic segmentation[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 655−666
[100] Tang Kaihua, Niu Yulei, Huang Jianqiang, et al. Unbiased scene graph generation from biased training[C] //Proc of the 33rd IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 3716−3725
[101] Tang Kaihua, Huang Jianqiang, Zhang Hanwang. Long-tailed classification by keeping the good and removing the bad momentum causal effect[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 1513−1524
[102] Niu Yulei, Tang Kaihua, Zhang Hanwang, et al. Counterfactual VQA: A cause-effect look at language bias[C] //Proc of the 34th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 12700−12710
[103] Kuang Kun, Cui Peng, Athey S, et al. Stable prediction across unknown environments[C] //Proc of the 24th ACM SIGKDD Int Conf on Knowledge Discovery & Data Mining. New York: ACM, 2018: 1617−1626
[104] Shen Zheyan, Cui Peng, Kuang Kun, et al. Causally regularized learning with agnostic data selection bias[C] //Proc of the 26th ACM Int Conf on Multimedia. New York: ACM, 2018: 411−419
[105] Kuang Kun, Xiong Ruoxuan, Cui Peng, et al. Stable prediction with model misspecification and agnostic distribution shift[C] //Proc of the 34th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020: 4485−4492
[106] Shen Zheyan, Cui Peng, Zhang Tong, et al. Stable learning via sample reweighting[C] //Proc of the 34th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020: 5692−5699
[107] Zhang Xingxuan, Cui Peng, Xu Renzhe, et al. Deep stable learning for out-of-distribution generalization[J]. arXiv preprint, arXiv: 2104.07876, 2021
[108] Peters J, Bühlmann P, Meinshausen N. Causal inference by using invariant prediction: Identification and confidence intervals[J]. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2016, 78(5): 947−1012
[109] Heinze-Deml C, Meinshausen N, Peters J. Invariant causal prediction for nonlinear models[J/OL]. Journal of Causal Inference, 2018, 6(2): 20170016 [2021-11-15]. https://www.degruyter.com/document/doi/10.1515/jci-2017-0016/pdf
[110] Pfister N, Bühlmann P, Peters J. Invariant causal prediction for sequential data[J]. Journal of the American Statistical Association, 2019, 114(527): 1264−1276
[111] Arjovsky M, Bottou L, Gulrajani I, et al. Invariant risk minimization[J]. arXiv preprint, arXiv: 1907.02893, 2019
[112] Zhang A, Lyle C, Sodhani S, et al. Invariant causal prediction for block MDPs[C] //Proc of the 37th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2020: 11214−11224
[113] Creager E, Jacobsen J H, Zemel R. Environment inference for invariant learning[C] //Proc of the 38th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2021: 2189−2200
[114] Kaushik D, Setlur A, Hovy E H, et al. Explaining the efficacy of counterfactually augmented data[C/OL] //Proc of the 9th Int Conf on Learning Representations. 2021 [2021-11-14]. https://openreview.net/pdf?id=HHiiQKWsOcV
[115] Abbasnejad E, Teney D, Parvaneh A, et al. Counterfactual vision and language learning[C] //Proc of the 33rd IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 10044−10054
[116] Liang Zujie, Jiang Weitao, Hu Haifeng, et al. Learning to contrast the counterfactual samples for robust visual question answering[C] //Proc of the 2020 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2020: 3285−3292
[117] Teney D, Abbasnejad E, van den Hengel A. Learning what makes a difference from counterfactual examples and gradient supervision[C] //Proc of the 16th European Conf on Computer Vision. Berlin: Springer, 2020: 580−599
[118] Fu T J, Wang X E, Peterson M F, et al. Counterfactual vision-and-language navigation via adversarial path sampler[C] //Proc of the 16th European Conf on Computer Vision. Berlin: Springer, 2020: 71−86
[119] Parvaneh A, Abbasnejad E, Teney D, et al. Counterfactual vision-and-language navigation: Unravelling the unseen[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 5296−5307
[120] Sauer A, Geiger A. Counterfactual generative networks[C/OL] //Proc of the 9th Int Conf on Learning Representations. 2021 [2021-11-14]. https://openreview.net/pdf?id=BXewfAYMmJw
[121] Mao Chengzhi, Cha A, Gupta A, et al. Generative interventions for causal learning[C] //Proc of the 34th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 3947−3956
[122] Zeng Xiangji, Li Yunliang, Zhai Yuchen, et al. Counterfactual generator: A weakly-supervised method for named entity recognition[C] //Proc of the 2020 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2020: 7270−7280
[123] Fu T J, Wang Xin, Grafton S, et al. Iterative language-based image editing via self-supervised counterfactual reasoning[C] //Proc of the 2020 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2020: 4413−4422
[124] Pitis S, Creager E, Garg A. Counterfactual data augmentation using locally factored dynamics[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 3976−3990
[125] Zhang Junzhe, Kumor D, Bareinboim E. Causal imitation learning with unobserved confounders[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 12263−12274
[126] Coston A, Kennedy E, Chouldechova A. Counterfactual predictions under runtime confounding[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 4150−4162
[127] Atzmon Y, Kreuk F, Shalit U, et al. A causal view of compositional zero-shot recognition[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 1462−1473
[128] Yang Zekun, Feng Juan. A causal inference method for reducing gender bias in word embedding relations[C] //Proc of the 34th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020: 9434−9441
[129] Schölkopf B, Hogg D W, Wang Dun, et al. Modeling confounding by half-sibling regression[J]. Proceedings of the National Academy of Sciences, 2016, 113(27): 7391−7398 doi: 10.1073/pnas.1511656113
[130] Shin S, Song K, Jang J H, et al. Neutralizing gender bias in word embedding with latent disentanglement and counterfactual generation[C] //Proc of the 2020 Conf on Empirical Methods in Natural Language Processing: Findings. Stroudsburg, PA: ACL, 2020: 3126−3140
[131] Yang Zekun, Liu Tianlin. Causally denoise word embeddings using half-sibling regression[C] //Proc of the 34th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020: 9426−9433
[132] Yang Xu, Zhang Hanwang, Qi Guojin, et al. Causal attention for vision-language tasks[C] //Proc of the 34th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 9847−9857
[133] Tople S, Sharma A, Nori A. Alleviating privacy attacks via causal learning[C] //Proc of the 37th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2020: 9537−9547
[134] Zhang Cheng, Zhang Kun, Li Yingzhen. A causal view on robustness of neural networks[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 289−301
[135] Sun Xinwei, Wu Botong, Liu Chang, et al. Latent causal invariant model[J]. arXiv preprint, arXiv: 2011.02203, 2020
[136] Mitrovic J, McWilliams B, Walker J C, et al. Representation learning via invariant causal mechanisms[C/OL] //Proc of the 9th Int Conf on Learning Representations. 2021 [2021-11-14]. https://openreview.net/pdf?id=9p2ekP904Rs
[137] Mahajan D, Tople S, Sharma A. Domain generalization using causal matching[J]. arXiv preprint, arXiv: 2006.07500, 2020
[138] Zhang Weijia, Liu Lin, Li Jiuyong. Robust multi-instance learning with stable instances[C] //Proc of the 24th European Conf on Artificial Intelligence. Ohmsha: IOS, 2020: 1682−1689
[139] Kleinberg J, Mullainathan S, Raghavan M. Inherent trade-offs in the fair determination of risk scores[J]. arXiv preprint, arXiv: 1609.05807, 2016
[140] Grgic-Hlaca N, Zafar M B, Gummadi K P, et al. The case for process fairness in learning: Feature selection for fair decision making[C/OL] //Proc of Symp on Machine Learning and the Law at the 30th Conf on Neural Information Processing Systems. 2016 [2021-11-17]. http://www.mlandthelaw.org/papers/grgic.pdf
[141] Dwork C, Hardt M, Pitassi T, et al. Fairness through awareness[C] //Proc of the 3rd Innovations in Theoretical Computer Science Conf. New York: ACM, 2012: 214−226
[142] Calders T, Kamiran F, Pechenizkiy M. Building classifiers with independency constraints[C] //Proc of the 9th IEEE Int Conf on Data Mining Workshops. Piscataway, NJ: IEEE, 2009: 13−18
[143] Hardt M, Price E, Srebro N. Equality of opportunity in supervised learning[C] //Proc of the 30th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2016: 3315−3323
[144] Xu Renzhe, Cui Peng, Kuang Kun, et al. Algorithmic decision making with conditional fairness[C] //Proc of the 26th ACM SIGKDD Int Conf on Knowledge Discovery & Data Mining. New York: ACM, 2020: 2125−2135
[145] Kusner M J, Loftus J, Russell C, et al. Counterfactual fairness[C] //Proc of the 31st Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2017: 4066−4076
[146] Kilbertus N, Rojas-Carulla M, Parascandolo G, et al. Avoiding discrimination through causal reasoning[C] //Proc of the 31st Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2017: 656−666
[147] Nabi R, Shpitser I. Fair inference on outcomes[C] //Proc of the 32nd AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2018: 1931−1940
[148] Chiappa S. Path-specific counterfactual fairness[C] //Proc of the 33rd AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2019: 7801−7808
[149] Wu Yongkai, Zhang Lu, Wu Xintao, et al. PC-fairness: A unified framework for measuring causality-based fairness[C] //Proc of the 33rd Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2019: 3404−3414
[150] Wu Yongkai, Zhang Lu, Wu Xintao. Counterfactual fairness: Unidentification, bound and algorithm[C] //Proc of the 28th Int Joint Conf on Artificial Intelligence. Red Hook, NY: Curran Associates Inc, 2019: 1438−1444
[151] Huang P S, Zhang Huan, Jiang R, et al. Reducing sentiment bias in language models via counterfactual evaluation[C] //Proc of the 2020 Conf on Empirical Methods in Natural Language Processing: Findings. Stroudsburg, PA: ACL, 2020: 65−83
[152] Garg S, Perot V, Limtiaco N, et al. Counterfactual fairness in text classification through robustness[C] //Proc of the 2nd AAAI/ACM Conf on AI, Ethics, and Society. Menlo Park, CA: AAAI, 2019: 219−226
[153] Hu Yaowei, Wu Yongkai, Zhang Lu, et al. Fair multiple decision making through soft interventions[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 17965−17975
[154] Goel N, Amayuelas A, Deshpande A, et al. The importance of modeling data missingness in algorithmic fairness: A causal perspective[C] //Proc of the 35th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2021: 7564−7573
[155] Xu Depeng, Wu Yongkai, Yuan Shuhan, et al. Achieving causal fairness through generative adversarial networks[C] //Proc of the 28th Int Joint Conf on Artificial Intelligence. Red Hook, NY: Curran Associates Inc, 2019: 1452−1458
[156] Khademi A, Lee S, Foley D, et al. Fairness in algorithmic decision making: An excursion through the lens of causality[C] //Proc of the 28th World Wide Web Conf. New York: ACM, 2019: 2907−2914
[157] Zhang Junzhe, Bareinboim E. Fairness in decision-making—The causal explanation formula[C] //Proc of the 32nd AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2018: 2037−2045
[158] Zhang Junzhe, Bareinboim E. Equality of opportunity in classification: A causal approach[C] //Proc of the 32nd Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2018: 3671−3681
[159] Wang Hao, Ustun B, Calmon F. Repairing without retraining: Avoiding disparate impact with counterfactual distributions[C] //Proc of the 36th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2019: 6618−6627
[160] Creager E, Madras D, Pitassi T, et al. Causal modeling for fairness in dynamical systems[C] //Proc of the 37th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2020: 2185−2195
[161] Swaminathan A, Joachims T. Batch learning from logged bandit feedback through counterfactual risk minimization[J]. The Journal of Machine Learning Research, 2015, 16(1): 1731−1755
[162] Swaminathan A, Joachims T. The self-normalized estimator for counterfactual learning[C] //Proc of the 29th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2015: 3231−3239
[163] Wu Hang, Wang May. Variance regularized counterfactual risk minimization via variational divergence minimization[C] //Proc of the 35th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2018: 5353−5362
[164] London B, Sandler T. Bayesian counterfactual risk minimization[C] //Proc of the 36th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2019: 4125−4133
[165] Faury L, Tanielian U, Dohmatob E, et al. Distributionally robust counterfactual risk minimization[C] //Proc of the 34th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020: 3850−3857
[166] Schnabel T, Swaminathan A, Singh A, et al. Recommendations as treatments: Debiasing learning and evaluation[C] //Proc of the 33rd Int Conf on Machine Learning. Cambridge, MA: JMLR, 2016: 1670−1679
[167] Yang Longqi, Cui Yin, Xuan Yuan, et al. Unbiased offline recommender evaluation for missing-not-at-random implicit feedback[C] //Proc of the 12th ACM Conf on Recommender Systems. New York: ACM, 2018: 279−287
[168] Bonner S, Vasile F. Causal embeddings for recommendation[C] //Proc of the 12th ACM Conf on Recommender Systems. New York: ACM, 2018: 104−112
[169] Narita Y, Yasui S, Yata K. Efficient counterfactual learning from bandit feedback[C] //Proc of the 33rd AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2019: 4634−4641
[170] Zou Hao, Cui Peng, Li Bo, et al. Counterfactual prediction for bundle treatment[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 19705−19715
[171] Xu Da, Ruan Chuanwei, Korpeoglu E, et al. Adversarial counterfactual learning and evaluation for recommender system[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 13515−13526
[172] Lopez R, Li Chenchen, Yan Xiang, et al. Cost-effective incentive allocation via structured counterfactual inference[C] //Proc of the 34th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020: 4997−5004
[173] Joachims T, Swaminathan A, Schnabel T. Unbiased learning-to-rank with biased feedback[C] //Proc of the 10th ACM Int Conf on Web Search and Data Mining. New York: ACM, 2017: 781−789
[174] Wang Xuanhui, Golbandi N, Bendersky M, et al. Position bias estimation for unbiased learning to rank in personal search[C] //Proc of the 11th ACM Int Conf on Web Search and Data Mining. New York: ACM, 2018: 610−618
[175] Ai Qingyao, Bi Keping, Luo Cheng, et al. Unbiased learning to rank with unbiased propensity estimation[C] //Proc of the 41st Int ACM SIGIR Conf on Research and Development in Information Retrieval. New York: ACM, 2018: 385−394
[176] Agarwal A, Takatsu K, Zaitsev I, et al. A general framework for counterfactual learning-to-rank[C] //Proc of the 42nd Int ACM SIGIR Conf on Research and Development in Information Retrieval. New York: ACM, 2019: 5−14
[177] Jagerman R, de Rijke M. Accelerated convergence for counterfactual learning to rank[C] //Proc of the 43rd Int ACM SIGIR Conf on Research and Development in Information Retrieval. New York: ACM, 2020: 469−478
[178] Vardasbi A, de Rijke M, Markov I. Cascade model-based propensity estimation for counterfactual learning to rank[C] //Proc of the 43rd Int ACM SIGIR Conf on Research and Development in Information Retrieval. New York: ACM, 2020: 2089−2092
[179] Jagerman R, Oosterhuis H, de Rijke M. To model or to intervene: A comparison of counterfactual and online learning to rank from user interactions[C] //Proc of the 42nd Int ACM SIGIR Conf on Research and Development in Information Retrieval. New York: ACM, 2019: 15−24
[180] Bottou L, Peters J, Quiñonero-Candela J, et al. Counterfactual reasoning and learning systems: The example of computational advertising[J]. The Journal of Machine Learning Research, 2013, 14(1): 3207−3260
[181] Lawrence C, Riezler S. Improving a neural semantic parser by counterfactual learning from human bandit feedback[C] //Proc of the 56th Annual Meeting of the ACL, Vol 1. Stroudsburg, PA: ACL, 2018: 1820−1830
[182] Bareinboim E, Forney A, Pearl J. Bandits with unobserved confounders: A causal approach[C] //Proc of the 29th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2015: 1342−1350
[183] Lee S, Bareinboim E. Structural causal bandits: Where to intervene?[C] //Proc of the 32nd Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2018: 2568−2578
[184] Lee S, Bareinboim E. Structural causal bandits with non-manipulable variables[C] //Proc of the 33rd AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2019: 4164−4172
[185] de Haan P, Jayaraman D, Levine S. Causal confusion in imitation learning[C] //Proc of the 33rd Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2019: 11698−11709
[186] Kyono T, Zhang Yao, van der Schaar M. CASTLE: Regularization via auxiliary causal graph discovery[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 1501−1512
[187] Yang Mengyue, Liu Furui, Chen Zhitang, et al. CausalVAE: Disentangled representation learning via neural structural causal models[C] //Proc of the 34th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 9593−9602
[188] Zinkevich M, Johanson M, Bowling M, et al. Regret minimization in games with incomplete information[C] //Proc of the 20th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2007: 1729−1736
[189] Brown N, Lerer A, Gross S, et al. Deep counterfactual regret minimization[C] //Proc of the 36th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2019: 793−802
[190] Farina G, Kroer C, Brown N, et al. Stable-predictive optimistic counterfactual regret minimization[C] //Proc of the 36th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2019: 1853−1862
[191] Brown N, Sandholm T. Solving imperfect-information games via discounted regret minimization[C] //Proc of the 33rd AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2019: 1829−1836
[192] Li Hui, Hu Kailiang, Zhang Shaohua, et al. Double neural counterfactual regret minimization[C/OL] //Proc of the 8th Int Conf on Learning Representations. 2020 [2021-11-14]. https://openreview.net/pdf?id=ByedzkrKvH
[193] Oberst M, Sontag D. Counterfactual off-policy evaluation with Gumbel-max structural causal models[C] //Proc of the 36th Int Conf on Machine Learning. Cambridge, MA: JMLR, 2019: 4881−4890
[194] Buesing L, Weber T, Zwols Y, et al. Woulda, coulda, shoulda: Counterfactually-guided policy search[C/OL] //Proc of the 7th Int Conf on Learning Representations. 2019 [2021-11-14]. https://openreview.net/pdf?id=BJG0voC9YQ
[195] Chen Long, Zhang Hanwang, Xiao Jun, et al. Counterfactual critic multi-agent training for scene graph generation[C] //Proc of the 2019 IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2019: 4613−4623
[196] Zhu Qingfu, Zhang Weinan, Liu Ting, et al. Counterfactual off-policy training for neural dialogue generation[C] //Proc of the 2020 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2020: 3438−3448
[197] Choi S, Park H, Yeo J, et al. Less is more: Attention supervision with counterfactuals for text classification[C] //Proc of the 2020 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2020: 6695−6704
[198] Zhang Zhu, Zhao Zhou, Lin Zhejie, et al. Counterfactual contrastive learning for weakly-supervised vision-language grounding[C] //Proc of the 34th Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2020: 655−666
[199] Kocaoglu M, Snyder C, Dimakis A G, et al. CausalGAN: Learning causal implicit generative models with adversarial training[C] //Proc of the 6th Int Conf on Learning Representations, 2018 [2021-11-03]. https://openreview.net/pdf?id=BJE-4xW0W
[200] Kim H, Shin S, Jang J H, et al. Counterfactual fairness with disentangled causal effect variational autoencoder[C] //Proc of the 35th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2021: 8128−8136
[201] Qin Lianhui, Bosselut A, Holtzman A, et al. Counterfactual story reasoning and generation[C] //Proc of the 2019 Conf on Empirical Methods in Natural Language Processing and the 9th Int Joint Conf on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg, PA: ACL, 2019: 5046−5056
[202] Hao Changying, Pang Liang, Lan Yanyan, et al. Sketch and customize: A counterfactual story generator[C] //Proc of the 35th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2021: 12955−12962
[203] Madaan N, Padhi I, Panwar N, et al. Generate your counterfactuals: Towards controlled counterfactual generation for text[C] //Proc of the 35th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2021: 13516−13524
[204] Peysakhovich A, Kroer C, Lerer A. Robust multi-agent counterfactual prediction[C] //Proc of the 33rd Int Conf on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc, 2019: 3083−3093
[205] Baradel F, Neverova N, Mille J, et al. CoPhy: Counterfactual learning of physical dynamics[C/OL] //Proc of the 8th Int Conf on Learning Representations. 2020 [2021-11-14]. https://openreview.net/pdf?id=SkeyppEFvS