ISSN 1000-1239 CN 11-1777/TP


### A New Semi-Supervised Inductive Transfer Learning Framework: Co-Transfer

1. (School of Computer and Information Security, Guilin University of Electronic Technology, Guilin, Guangxi 541004) (Guangxi Key Laboratory of Image and Graphic Intelligent Processing (Guilin University of Electronic Technology), Guilin, Guangxi 541004) (School of Computer Engineering and Science, Shanghai University, Shanghai 200444) (ymwen@guet.edu.cn)
• Published: 2022-08-24

### A New Semi-Supervised Inductive Transfer Learning Framework: Co-Transfer

Wen Yimin, Yuan Zhe, Yu Hang

1. (School of Computer and Information Security, Guilin University of Electronic Technology, Guilin, Guangxi 541004) (Guangxi Key Laboratory of Image and Graphic Intelligent Processing (Guilin University of Electronic Technology), Guilin, Guangxi 541004) (School of Computer Engineering and Science, Shanghai University, Shanghai 200444)
• Online: 2022-08-24

Abstract: In many practical data mining scenarios, such as network intrusion detection, Twitter spam detection, and computer-aided diagnosis, a source domain that is different from but related to a target domain is very common. Typically, a large amount of unlabeled data is available in both the source and target domains, but labeling all of it is difficult, expensive, time-consuming, and sometimes unnecessary. It is therefore important and worthwhile to fully exploit the labeled and unlabeled data in both domains to handle classification tasks in the target domain. To leverage both transfer learning and semi-supervised learning, this paper proposes a new inductive transfer learning framework named Co-Transfer. By bootstrapping samples from the original labeled data, Co-Transfer first generates three TrAdaBoost classifiers for transfer from the original source domain to the original target domain, and another three TrAdaBoost classifiers for transfer in the reverse direction. In each round of Co-Transfer, each group of TrAdaBoost classifiers is refined using carefully labeled data drawn from three parts: the original labeled samples, samples labeled by the group itself, and samples labeled by the other group. Finally, the group of TrAdaBoost classifiers trained to transfer from the original source domain to the original target domain produces the final hypothesis. Experimental results on UCI datasets and text classification tasks show that Co-Transfer significantly improves generalization performance by exploiting labeled and unlabeled data across different tasks. Code is available at https://gitee.com/ymw12345/co-transfer.git.
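The co-training-style loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: a plain `DecisionTreeClassifier` stands in for the TrAdaBoost base learners, the synthetic shifted-Gaussian domains, the "all members agree" confidence rule, and the round count are all illustrative assumptions.

```python
# Simplified sketch of the Co-Transfer loop: two bootstrapped groups of
# classifiers cross-label an unlabeled pool and retrain each round.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_domain(shift, n=200):
    """Synthetic domain: labels from a linear rule, features shifted by `shift`."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X + shift, y

Xs, ys = make_domain(0.0)            # source domain: fully labeled
Xt, yt = make_domain(0.3)            # related target domain
Xt_lab, yt_lab = Xt[:20], yt[:20]    # few labeled target samples
Xt_unl = Xt[20:150]                  # unlabeled target pool
Xt_test, yt_test = Xt[150:], yt[150:]

def fit_group(X, y, n=3):
    """Train a group of n base learners on bootstrap samples
    (TrAdaBoost classifiers in the actual paper)."""
    group = []
    for _ in range(n):
        idx = rng.integers(0, len(X), len(X))
        group.append(DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx]))
    return group

def group_predict(group, X):
    """Per-member predictions and the group's majority vote."""
    preds = np.stack([c.predict(X) for c in group])
    return preds, (preds.mean(axis=0) >= 0.5).astype(int)

# Group A transfers source->target, group B target->source; in this
# sketch both start from the pooled original labeled data.
X_lab = np.vstack([Xs, Xt_lab])
y_lab = np.concatenate([ys, yt_lab])
A = fit_group(X_lab, y_lab)
B = fit_group(X_lab, y_lab)

for _ in range(3):  # rounds of Co-Transfer (count is an assumption)
    preds_A, maj_A = group_predict(A, Xt_unl)
    preds_B, maj_B = group_predict(B, Xt_unl)
    # "Carefully labeled": keep pseudo-labels only where every member agrees.
    conf_A = (preds_A == maj_A).all(axis=0)
    conf_B = (preds_B == maj_B).all(axis=0)
    keep = conf_A | conf_B
    # Each group retrains on original labels + self-labeled + cross-labeled data.
    X_aug = np.vstack([X_lab, Xt_unl[keep]])
    A = fit_group(X_aug, np.concatenate([y_lab, np.where(conf_A, maj_A, maj_B)[keep]]))
    B = fit_group(X_aug, np.concatenate([y_lab, np.where(conf_B, maj_B, maj_A)[keep]]))

# Final hypothesis: majority vote of the source->target group A.
_, y_hat = group_predict(A, Xt_test)
acc = (y_hat == yt_test).mean()
print(f"target test accuracy: {acc:.2f}")
```

Because the two domains are related, the pseudo-labels each group supplies to the other let both exploit the unlabeled target pool, which is the core intuition behind Co-Transfer.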