• China Top-Quality Sci-Tech Journal
  • CCF Recommended Class-A Chinese Journal
  • T1-Class High-Quality Sci-Tech Journal in Computing
Lü Yougang, Hao Jitai, Wang Zihan, Gao Shen, Ren Pengjie, Chen Zhumin, Ma Jun, Ren Zhaochun. Legal Judgment Prediction Based on Chain of Judgment[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202330868

Legal Judgment Prediction Based on Chain of Judgment

Funds: This work was supported by the National Key Research and Development Program of China (2022YFC3303004), the Natural Science Foundation of China (62272274, 62372275, 62202271, T2293773, 62102234, 62072279), and the Natural Science Foundation of Shandong Province (ZR2021QF129).
More Information
  • Author Bio:

    Lü Yougang: born in 1998. PhD candidate. His main research interests include natural language processing. (youganglyu@mail.sdu.edu.cn)

    Hao Jitai: born in 2001. Master candidate. His main research interests include natural language processing. (202215112@mail.sdu.edu.cn)

    Wang Zihan: born in 1995. PhD candidate. His main research interests include natural language processing. (zihanwang.sdu@gmail.com)

    Gao Shen: born in 1994. PhD, assistant professor, master supervisor. Member of CCF. His main research interests include natural language processing and information retrieval. (shengao@sdu.edu.cn)

    Ren Pengjie: born in 1990. PhD, professor, PhD supervisor. Member of CCF. His main research interests include natural language processing and information retrieval. (renpengjie@sdu.edu.cn)

    Chen Zhumin: born in 1977. PhD, professor, PhD supervisor. Senior member of CCF. His main research interests include natural language processing, big data analytics and recommender systems. (chenzhumin@sdu.edu.cn)

    Ma Jun: born in 1956. PhD, professor, PhD supervisor. Senior member of CCF. His main research interests include information retrieval, data mining and natural language processing. (majun@sdu.edu.cn)

    Ren Zhaochun: born in 1987. PhD, associate professor, PhD supervisor. Member of CCF. His main research interests include natural language processing and information retrieval. (z.ren@liacs.leidenuniv.nl)

  • Received Date: October 30, 2023
  • Revised Date: January 12, 2025
  • Accepted Date: January 25, 2025
  • Available Online: January 25, 2025
Abstract: Legal intelligence aims to automatically analyze texts in the legal domain using natural language processing (NLP) technologies, and has garnered significant attention from the NLP community. One of its most critical tasks is legal judgment prediction (LJP), which seeks to forecast judgment outcomes, such as applicable law articles, charges, and penalties, from the fact descriptions of legal cases, making it a promising application of artificial intelligence (AI) techniques. However, current LJP methods primarily address cases with a single defendant and neglect the complexities of cases involving multiple defendants. Real-world criminal cases often involve multiple defendants whose intricate interactions single-defendant LJP technologies cannot handle accurately: existing methods struggle to distinguish the judgment outcomes of different defendants in such scenarios. To advance research on multi-defendant LJP, this paper presents a large-scale multi-defendant LJP dataset with three key characteristics: 1) it is the largest manually annotated dataset for multi-defendant LJP; 2) it requires distinguishing the legal judgment prediction for each defendant; 3) it includes comprehensive judgment chains covering criminal relationships, sentencing contexts, law articles, charges, and penalties. The paper further conducts an extensive and detailed analysis of the dataset, examining the distributions of law articles, charges, penalties, criminal relationships, sentencing contexts, text length, and number of defendants, and provides statistical insights into multi-defendant judgment results and outcomes along the judgment chain.
Additionally, this paper introduces a novel chain-of-judgment based method, featuring a strategy for generating judgment chains related to the crime facts and a comparison strategy for differentiating correct judgment chains from easily confused ones. Experimental results reveal that the multi-defendant LJP dataset poses a significant challenge to existing LJP methods and pre-trained models, while the chain-of-judgment based method significantly surpasses baseline methods, highlighting the crucial role of judgment chains in improving LJP.
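The chain-of-judgment idea described in the abstract — predicting, for each defendant, the criminal relationship, sentencing context, law article, charge, and penalty in sequence — can be illustrated with a minimal prompting-style sketch. This is not the authors' implementation; the step names, prompt format, and function are assumptions chosen for illustration only.

```python
# Illustrative sketch of a per-defendant "chain of judgment" prompt,
# in the order given in the abstract: criminal relationship ->
# sentencing context -> law article -> charge -> penalty.
# A generative model would fill in the last, empty field at each step.

CHAIN_STEPS = [
    "criminal relationship",
    "sentencing context",
    "law article",
    "charge",
    "penalty",
]

def build_chain_prompt(fact: str, defendant: str, completed: dict) -> str:
    """Compose a prompt asking for the next undecided step of the chain.

    `completed` maps already-predicted step names to their values;
    the prompt ends with the first step that still needs a prediction.
    """
    lines = [f"Fact description: {fact}", f"Defendant: {defendant}"]
    for step in CHAIN_STEPS:
        if step in completed:
            lines.append(f"{step.capitalize()}: {completed[step]}")
        else:
            lines.append(f"{step.capitalize()}:")  # model fills this next
            break
    return "\n".join(lines)
```

In this sketch, each defendant gets an independent chain over the shared fact description, which is what lets predictions differ across defendants; the paper's comparison strategy would then score complete candidate chains against easily confused alternatives.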

  • [1]
    Zhong Haoxi, Guo Zhipeng, Tu Cunchao, et al. Legal judgment prediction via topological learning [C] //Proc of the 2018 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2018: 3540−3549
    [2]
    Lauderdale B, Clark T. The Supreme Court's many median justices[J]. American Political Science Review, 2012, 106(4): 847−866 doi: 10.1017/S0003055412000469
    [3]
    Pan Sicheng, Lu Tun, Gu Ning, et al. Charge prediction for multidefendant cases with multi-scale attention [C] //Proc of the 14th ChineseCSCW. Berlin: Springer, 2019: 766−777
    [4]
    Xiao Chaojun, Zhong Haoxi, Guo Zhipeng, et al. CAIL2018: A large-scale legal dataset for judgment prediction [J]. arXiv preprint, arXiv: 1807.02478, 2018
    [5]
    Talmor A, Tafjord O, Clark P, et al. Leap-of-thought: Teaching pre-trained models to systematically reason over implicit knowledge [C/OL] //Proc of the 33rd Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2019[2025-01-04]. https://proceedings.neurips.cc/paper_files/paper/2020/hash/e992111e4ab9985366e806733383bd8c-Abstract.html
    [6]
    Yao Huihan, Chen Ying, Ye Qinyuan, et al. Refining language models with compositional explanations [C] //Proc of the 34th Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2020: 8954−8967
    [7]
    Bai Jinze, Bai Shuai, Chu Yunfei, et al. Qwen technical report[J]. arXiv preprint, arXiv: 2309.16609, 2023
    [8]
    Izacard G, Grave E. Leveraging passage retrieval with generative models for open domain question answering [C] //Proc of the 16th Conf of the European Chapter of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2021: 874−880
    [9]
    Kort F. Predicting supreme court decisions mathematically: A quantitative analysis of the “right to counsel” cases[J]. The American Political Science Review, 1957, 51(1): 1−12 doi: 10.2307/1951767
    [10]
    Nagel S. Applying correlation analysis to case prediction[J]. Texas Law Review, 1963, 42: 1006−1018
    [11]
    Segal J. Predicting supreme court cases probabilistically: The search and seizure cases, 1962-1981[J]. American Political Science Review, 1984, 78(4): 891−900 doi: 10.2307/1955796
    [12]
    Aletras N, Tsarapatsanis D, Preotiuc-Pietro D, et al. Predicting judicial decisions of the european court of human rights: A natural language processing perspective[J]. PeerJ computer science, 2016, 2: 93 doi: 10.7717/peerj-cs.93
    [13]
    Sulea O, Zampieri M, Malmasi S, et al. Exploring the use of text classification in the legal domain [C/OL] //Proc of the 2nd Workshop on Automated Semantic Analysis of Information in Legal Texts Co-located with the 16th Int Conf on Artificial Intelligence and Law. 2017[2025-01-01]. https://ceur-ws.org/Vol-2143/paper5.pdf
    [14]
    Katz D, Bommarito M, Blackman J, et al. A general approach for predicting the behavior of the supreme court of the united states[J]. Plos One, 2017, 12(4): 0174698
    [15]
    Dong Qian, Niu Shuzi. Legal judgment prediction via relational learning [C] //Proc of the 44th Int ACM SIGIR Conf on Research and Development in Information Retrieval. New York: ACM, 2021: 983−992
    [16]
    Jiang Xin, Ye Hai, Luo Zhunchen, et al. Interpretable rationale augmented charge prediction system [C] //Proc of the 27th Int Conf on Computational Linguistics. Stroudsburg, PA: ACL, 2018: 146−151
    [17]
    Zhong Haoxi, Wang Yuzhong, Tu Cunchao, et al. Iteratively questioning and answering for interpretable legal judgment prediction [C] //Proc of the 34th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020: 1250−1257
    [18]
    Yue Linan, Liu Qi, Jin Binbin, et al. Neurjudge: A circumstance-aware neural framework for legal judgment prediction [C] // Proc of the 44th Int ACM SIGIR Conf on Research and Development in Information Retrieval. New York: ACM, 2021: 973−982
    [19]
    Hu Zikun, Li Xiang, Tu Cunchao, et al. Few-shot charge prediction with discriminative legal attributes [C] //Proc of the 27th Int Conf on Computational Linguistics. Stroudsburg, PA: ACL, 2018: 487−498
    [20]
    Lv Yougang, Wang Zihan, Ren Zhaochun, et al. Improving legal judgment prediction through reinforced criminal element extraction[J]. Information Processing & Management, 2022, 59(1): 102780
    [21]
    Feng Yi, Li Chuanyi, Vincent N. Legal judgment prediction via event extraction with constraints [C] // Proc of the 60th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2022: 648−664
    [22]
    Luo Bingfeng, Feng Yansong, Xu Jianbo, et al. Learning to predict charges for Criminal cases with legal basis [C] //Proc of the 2017 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2017: 2727−2736
    [23]
    Wang Pengfei, Fan Yu, Niu Shuzi, et al. Hierarchical matching network for crime classification [C] //Proc of the 42nd int ACM SIGIR Conf Research and Development in Information Retrieval. New York: ACM, 2019: 325−334
    [24]
    Xu Nuo, Wang Pinghui, Chen Long, et al. Distinguish contusing law articles for legal judgment prediction [C] //Proc of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2020: 3086−3095
    [25]
    Le Yuquan, Zhao Yuming, Chen Meng, et al. Legal charge prediction via bilinear attention network [C] //Proc of the 31st ACM Int Conf on Information & Knowledge Management. New York: ACM, 2022: 1024−1033
    [26]
    Liu Dugang, Du Weihao, Li Lei, et al. Augmenting legal judgment prediction with contrastive case relations [C] //Proc of the 29th Int Conf on Computational Linguistics. Stroudsburg, PA: ACL, 2022: 2658−2667
    [27]
    Zhang Han, Dou Zhicheng, Zhu Yutao, et al. Contrastive learning for legal judgment prediction[J]. ACM Transactions on Information Systems, 2023, 41(4): 1−25
    [28]
    Chalkidis T, Fergadiotis M, Malakasiotis P, et al. LEGAL-BERT: Preparing the muppets for court [C] //Proc of the 2020 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2020: 2898−2904
    [29]
    Chalkidis N, Fergadiotis M, Tsarapatsanis D, et al. Paragraph-level rationale extraction through regularization: A case study on european court of human rights cases [C] //Proc of the 2021 Conf of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA: ACL, 2021: 226−241
    [30]
    Xiao Chaojun, Hu Xueyu, Liu Zhiyuan, et al. Lawformer: A pre-trained language model for chinese legal long documents[J]. Al Open, 2021, 2: 79−84 doi: 10.1016/j.aiopen.2021.06.003
    [31]
    Omar Z, Jason E, Christine D. Using "annotator rationales" to improve machine learning for text categorization [C] //Proc of the 2007 Conf of the North American Chapter of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2007: 260−267
    [32]
    Ling W, Yogatama D, Dyer C, et al. Program induction by rationale generation: Learning to solve and explain algebraic word problems [C] //Proc of the 55th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2017: 158−167
    [33]
    Oana-Maria C, Tim R, Thomas L, et al. e-snli: Natural language inference with natural language explanations [C] //Proc of the 31st Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2018: 9560−9572
    [34]
    Rajani N, MCCann B, Xiong Caiming, et al. Explain yourself'! leveraging language models for commonsense reasoning [C] //Proc of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2019: 4932−4942
    [35]
    Dan H, Saurav K, Akul A, et al. Measuring mathematical problem solving with the MATH dataset [C/OL] //Proc of the 1st Neural Information Processing Systems Track on Datasets and Benchmarks. 2021[2025-01-04]. https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html
    [36]
    Nye M, Andreassen A, Ari G, et al. Show your work: Scratchpads for intermediate computation with language models [J]. arXiv preprint, arXiv: 2112.00114, 2021
    [37]
    Wei J, Wang X, Schuurmans D, et al. Chain of thought prompting elicits reasoning in large Ianguage models [J]. arXiv preprint, arXiv: 2201.11903, 2022
    [38]
    Huang J, Chang K. Towards reasoning in large language models: A survey [J]. arXiv preprint, arXiv: 2212.10403, 2022
    [39]
    Raffel C, Shazeer N, Roberts A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. Journal of Machine Learning Research, 2020, 21(140): 1−67
    [40]
    Pitoura E, Tsaparas P, Flouris G, et al. On measuring bias in online information[J]. ACM SIGMOD Record, 2017, 46(4): 16−21
    [41]
    Wang Xuezhi, Wei J, Schuurmans D, et al. Self-consistency improves chain of thought reasoning in language models [C/OL] //Proc of the 11th Int Conf on Learning Representations. 2023[2025-01-04]. https://openreview.net/forum?id=1PL1NIMMrw
    [42]
    Cui Yiming, Che Wanxiang, Liu Ting, et al. Pre-training with whole word masking for chinese BERT[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021, 29: 3504−3514 doi: 10.1109/TASLP.2021.3124365
    [43]
    Loshchilov I, Hutter F. Decoupled weight decay regularization [C/OL] // Proc of the 7th Int Conf on Learning Representations. 2019[2025-01-04]. https://openreview.net/forum?id=Bkg6RiCqY7
    [44]
    Hu E, Shen Y, Wallis P, et al. LoRA: Low-rank adaptation of large language models [C/OL] //Proc of the 10th Intl Conf on Learning Representations. 2022[2025-01-04]. https://openreview.net/forum?id=nZeVKeeFYf9
    [45]
    舒文韬,李睿潇,孙天祥,等. 大型语言模型:原理、实现与发展[J]. 计算机研究与发展,2024,61(2):351−361 doi: 10.7544/issn1000-1239.202330303

    Shu Wentao, Li Ruixiao, Sun Tianxiang, et al. Large-scale language modeling: Principles, implementation and development[J]. Computer Research and Development, 2024, 61(2): 351−361 (in Chinese) doi: 10.7544/issn1000-1239.202330303
