ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development, 2021, Vol. 58, Issue 8: 1761-1772. doi: 10.7544/issn1000-1239.2021.20210298

Special Issue: 2021 Advances at the Frontier of Artificial Intelligence


Pretraining Financial Language Model with Multi-Task Learning for Financial Text Mining

Liu Zhuang1, Liu Chang2, Wayne Lin3, Zhao Jun4   

  1 (School of Applied Finance and Behavioral Science, Dongbei University of Finance and Economics, Dalian, Liaoning 116025); 2 (China Petroleum Materials Procurement Center, Shenyang 110031); 3 (School of Computer Science, University of Southern California, Los Angeles, CA, USA 90007); 4 (IBM Research, Beijing 100101)
  • Online:2021-08-01
  • Supported by: 
    This work was supported by the Basic Scientific Research Project (General Program) of the Department of Education of Liaoning Province and the University-Industry Collaborative Education Program of the Ministry of Education of China (202002037015).

Abstract: Financial text mining is becoming increasingly important as the volume of financial documents grows rapidly. With progress in machine learning, extracting valuable information from financial literature has gained attention among researchers, and deep learning has boosted the development of effective financial text mining models. However, because deep learning models require large amounts of labeled training data, applying deep learning to financial text mining often fails due to the scarcity of labeled data in the financial domain. Recent research on training contextualized language representation models on text corpora sheds light on the possibility of leveraging large unlabeled financial text corpora. We introduce F-BERT (BERT for financial text mining), a domain-specific language representation model pre-trained on large-scale financial corpora. Based on the BERT architecture, F-BERT effectively transfers knowledge from large amounts of financial text to financial text mining models with minimal task-specific architecture modifications. Experimental results show that F-BERT outperforms most current state-of-the-art models, demonstrating the effectiveness and robustness of the proposed approach.
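The core idea of the multi-task pretraining described above is that several task heads read from one shared encoder, and the summed per-task losses drive the shared parameters. The following is a minimal numpy sketch of that loss structure, not the paper's implementation: the tiny dimensions, the random "encoder" projection, and the two stand-in tasks (a masked-token objective and a sentence-level objective) are all illustrative assumptions; in F-BERT the shared component would be the full BERT transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (hypothetical; BERT-base uses d_model=768, vocab ~30k).
d_model, n_vocab, n_classes, batch = 16, 50, 3, 4

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true labels.
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

# Shared "encoder": stands in for the BERT transformer whose
# parameters every pretraining task updates.
x = rng.normal(size=(batch, d_model))                      # input features
W_shared = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)
h = np.tanh(x @ W_shared)                                  # shared hidden state

# Per-task heads on top of the shared representation.
W_mlm = rng.normal(size=(d_model, n_vocab)) / np.sqrt(d_model)
W_cls = rng.normal(size=(d_model, n_classes)) / np.sqrt(d_model)

y_mlm = rng.integers(0, n_vocab, size=batch)               # masked-token targets
y_cls = rng.integers(0, n_classes, size=batch)             # sentence-level targets

loss_mlm = cross_entropy(softmax(h @ W_mlm), y_mlm)
loss_cls = cross_entropy(softmax(h @ W_cls), y_cls)

# Multi-task pretraining minimizes the sum, so gradients from every
# task flow back into the shared encoder weights.
total_loss = loss_mlm + loss_cls
```

Because both heads branch off `h`, optimizing `total_loss` forces the shared encoder to learn representations useful for all tasks at once, which is what lets the pretrained model transfer to downstream financial tasks with only a small task-specific head added.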

Key words: BERT, financial text mining, multi-task learning, pre-training, transfer learning, fintech
