CHENG Yuhu, CAO Ge, WANG Xuesong, et al., “Weighted Multi-source TrAdaBoost,” Chinese Journal of Electronics, vol. 22, no. 3, pp. 505-510, 2013.

Weighted Multi-source TrAdaBoost

Funds:  This work is supported by the National Natural Science Foundation of China (No.60974050, No.61072094), Program for New Century Excellent Talents in University (No.NCET-08-0836, No.NCET-10-0765), Specialized Research Fund for the Doctoral Program of Higher Education of China (No.20110095110016).
  • Received Date: 2012-04-01
  • Rev Recd Date: 2012-06-01
  • Publish Date: 2013-06-15
  • Abstract: To take full advantage of the valuable information in all source domains while avoiding negative transfer caused by irrelevant information, a weighted multi-source TrAdaBoost algorithm is proposed. First, a weak classifier is trained on each training set formed by combining one source domain with the target domain. Second, each weak classifier is assigned a weight according to its error on the target training set. Third, a candidate classifier is obtained as the weighted sum of all weak classifiers. Fourth, the sample weights of the source and target domains are updated according to the candidate classifier's error on the corresponding domains. Finally, all weak classifiers are retrained on the training samples with the updated weights. These steps are repeated until the maximum number of iterations is reached. Experimental results on benchmark datasets show that, compared with TrAdaBoost and multi-source TrAdaBoost, the proposed algorithm achieves higher classification accuracy.
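The five steps in the abstract can be sketched in code. The following is a minimal, illustrative implementation, not the authors' reference code: it assumes decision stumps as the weak learners, an AdaBoost-style classifier weight derived from target-set error, a TrAdaBoost-style multiplicative down-weighting of misclassified source samples, and an AdaBoost-style up-weighting of misclassified target samples. All function names and the specific weight-update constants are this sketch's assumptions.

```python
import numpy as np

def train_stump(X, y, w):
    """Weak learner: weighted decision stump minimising weighted 0/1 error.
    Labels are assumed to be in {-1, +1}."""
    best_err, best = np.inf, None
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
                err = np.dot(w, pred != y) / w.sum()
                if err < best_err:
                    best_err, best = err, (f, thr, pol)
    f, thr, pol = best
    return lambda Z: np.where(pol * (Z[:, f] - thr) >= 0, 1, -1)

def weighted_multisource_tradaboost(sources, Xt, yt, T=10):
    """sources: list of (Xs, ys) pairs; (Xt, yt): small labelled target set."""
    ws = [np.ones(len(ys)) for _, ys in sources]  # source sample weights
    wt = np.ones(len(yt))                         # target sample weights
    n_src = sum(len(ys) for _, ys in sources)
    # TrAdaBoost-style constant factor for down-weighting source samples
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_src) / T))
    ensemble = []
    for _ in range(T):
        # Step 1: one weak classifier per (source domain + target domain) set
        weak = []
        for (Xs, ys), w in zip(sources, ws):
            X = np.vstack([Xs, Xt])
            y = np.concatenate([ys, yt])
            wgt = np.concatenate([w, wt])
            weak.append(train_stump(X, y, wgt / wgt.sum()))
        # Step 2: weight each weak classifier by its error on the target set
        p = wt / wt.sum()
        errs = np.clip([np.dot(p, h(Xt) != yt) for h in weak], 1e-6, 0.499)
        alphas = 0.5 * np.log((1 - errs) / errs)
        alphas = alphas / alphas.sum()
        # Step 3: candidate classifier = weighted sum of the weak classifiers
        cand = lambda Z, hs=weak, a=alphas: np.sign(
            sum(ai * h(Z) for ai, h in zip(a, hs)) + 1e-12)
        # Step 4: update sample weights from the candidate's per-domain errors
        eps_t = float(np.clip(np.dot(p, cand(Xt) != yt), 1e-6, 0.499))
        beta_t = eps_t / (1 - eps_t)
        wt = wt * np.where(cand(Xt) != yt, 1.0 / beta_t, 1.0)  # up-weight hard target samples
        for i, (Xs, ys) in enumerate(sources):
            ws[i] = ws[i] * np.where(cand(Xs) != ys, beta_src, 1.0)  # down-weight misleading source samples
        ensemble.append((np.log(1.0 / beta_t), cand))
        # Step 5: the loop re-trains all weak classifiers with the new weights
    return lambda Z: np.sign(sum(a * h(Z) for a, h in ensemble) + 1e-12)

# Toy usage: two helpful source domains and one misleading (label-flipped) one.
rng = np.random.default_rng(0)
def make(n, shift=0.0, flip=False):
    X = rng.normal(size=(n, 2))
    y = np.where(X[:, 0] + X[:, 1] + shift > 0, 1, -1)
    return X, (-y if flip else y)

sources = [make(100), make(100, shift=0.3), make(100, flip=True)]
Xt, yt = make(20)              # small labelled target training set
Xtest, ytest = make(200)
clf = weighted_multisource_tradaboost(sources, Xt, yt, T=5)
acc = float(np.mean(clf(Xtest) == ytest))
```

The misleading (label-flipped) source produces weak classifiers with near-0.5 target error, so they receive a near-zero weight in Step 2; this is the mechanism by which the weighting is intended to suppress negative transfer.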
  • References:
    [1] H. Wang, Y. Gao, X.G. Chen, “Transfer of reinforcement learning: the state of the art”, Acta Electronica Sinica, Vol.36, No.12, pp.39-43, 2008. (in Chinese)
    [2] Y. Feng, H.B. Wang, D.Q. Zheng, G.L. Fei, “Research on transfer learning approach for text categorization”, Proceedings of the International Conference on Artificial Intelligence and Computational Intelligence, Nanjing, China, pp.418-422, 2010.
    [3] D. Zhang, L. Si, “Multiple instance transfer learning”, Proceedings of the IEEE International Conference on Data Mining Workshops, Miami, USA, pp.406-411, 2009.
    [4] X.S. Wang, J. Pan, Y.H. Cheng, “Ant-Q algorithm based on knowledge transfer”, Acta Electronica Sinica, Vol.39, No.10, pp.2359-2365, 2011. (in Chinese)
    [5] Q. Yang, V.W. Zheng, B. Li, H.H. Zhuo, “Transfer learning by reusing structured knowledge”, AI Magazine, Vol.32, No.2, pp.95-106, 2011.
    [6] S.J. Pan, Q. Yang, “A survey on transfer learning”, IEEE Transactions on Knowledge and Data Engineering, Vol.22, No.10, pp.1345-1359, 2010.
    [7] W. Dai, Q. Yang, G. Xue, Y. Yu, “Boosting for transfer learning”, Proceedings of the 24th International Conference on Machine Learning, Corvallis, USA, pp.193-200, 2007.
    [8] R. Samdani, W. Yih, “Domain adaptation with ensemble of feature groups”, Proceedings of the 22nd International Joint Conference on Artificial Intelligence, Barcelona, Spain, pp.1458-1464, 2011.
    [9] B. Cao, S.J. Pan, Y. Zhang, D.Y. Yeung, Q. Yang, “Adaptive transfer learning”, Proceedings of the 24th AAAI Conference on Artificial Intelligence, Atlanta, USA, pp.407-412, 2010.
    [10] Y. Yao, G. Doretto, “Boosting for transfer learning with multiple sources”, Proceedings of Computer Vision and Pattern Recognition, San Francisco, USA, pp.1855-1862, 2010.
    [11] Y. Freund, R.E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting”, Journal of Computer and System Sciences, Vol.55, No.1, pp.119-139, 1997.




    Article Metrics

    Article views: 934; PDF downloads: 2753