Volume 32 Issue 1
Jan.  2023
DAI Leichao, FENG Lin, SHANG Xinglin, SU Han. Cross Modal Adaptive Few-Shot Learning Based on Task Dependence[J]. Chinese Journal of Electronics, 2023, 32(1): 85-96. doi: 10.23919/cje.2021.00.093

Cross Modal Adaptive Few-Shot Learning Based on Task Dependence

doi: 10.23919/cje.2021.00.093
Funds:  This work was supported by the National Natural Science Foundation of China (61876158) and Fundamental Research Funds for the Central Universities (2682021ZTPY030)
  • Author Bio:

    Leichao DAI was born in 1994. He received the B.S. degree in engineering from Sichuan Normal University, China, in 2018 and is currently pursuing the M.S. degree in software engineering at Sichuan Normal University. His main research interests include computer vision and pattern recognition. (Email: daileichao@gmail.com)

    Lin FENG (corresponding author) received the Ph.D. degree from Southwest Jiaotong University, China. He is a Professor of School of Computer Science, Sichuan Normal University. His research interests include machine learning and data mining. (Email: fenglin@sicnu.edu.cn)

    Xinglin SHANG was born in 1995. She received the B.S. degree in engineering from Sichuan Normal University, China, in 2019 and is currently pursuing the M.S. degree in software engineering at Sichuan Normal University. Her main research interest includes machine learning. (Email: 1250919363@qq.com)

    Han SU received the Ph.D. degree from Harbin Engineering University. She is a Professor of School of Computer Science, Sichuan Normal University. Her research interests include pattern recognition and image processing. (Email: jkxy_sh@sicnu.edu.cn)

  • Received Date: 2021-03-16
  • Accepted Date: 2022-02-28
  • Available Online: 2022-04-19
  • Publish Date: 2023-01-05
  • Abstract: Few-shot learning (FSL) is a machine learning paradigm that applies prior knowledge from tasks in different but related domains. Existing metric-based FSL models have drawbacks: the extracted features may fail to reflect the true data distribution, and generalization ability is weak. To address these problems, we developed a model named cross-modal adaptive few-shot learning based on task dependence (COOPERATE for short). We propose a feature-extraction and task-representation method based on a task condition network and auxiliary co-training. A semantic representation is added to each task by combining visual and textual features, and the measurement scale is adjusted to change how the algorithm updates its parameters. Experimental results show that COOPERATE outperforms both monomodal and modal-alignment FSL approaches.
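The abstract's core idea — blending visual prototypes with textual (semantic) embeddings per task, then classifying with an adjustable metric scale — can be illustrated with a minimal sketch. This is an assumption-laden toy (the sigmoid gate, the names `gate_w`/`gate_b`, and the squared-Euclidean metric are illustrative choices, not the paper's exact formulation):

```python
import math

def mix_prototype(visual, semantic, gate_w, gate_b):
    """Blend one class's visual prototype with its word embedding.

    A mixing coefficient lam in (0, 1) is predicted from the semantic
    vector itself, so each task/class adapts how strongly it relies on
    the text modality. (`gate_w`, `gate_b` are hypothetical gate
    parameters for this sketch.)
    """
    logit = sum(w * s for w, s in zip(gate_w, semantic)) + gate_b
    lam = 1.0 / (1.0 + math.exp(-logit))  # sigmoid gate
    return [lam * v + (1.0 - lam) * s for v, s in zip(visual, semantic)]

def classify(query, prototypes, scale=1.0):
    """Nearest-prototype classification with an adjustable metric scale.

    Scaling the (negative) squared distances sharpens or softens the
    resulting softmax, which in training changes the gradient magnitude
    of parameter updates.
    """
    logits = [-scale * sum((p - q) ** 2 for p, q in zip(proto, query))
              for proto in prototypes]
    return max(range(len(logits)), key=lambda i: logits[i])
```

With a zero gate the sketch reduces to an even 50/50 blend of the two modalities; a trained gate would shift that balance per class and per task.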
  • [1]
    F. F. Li, R. Fergus, and P. Perona, “One-shot learning of object categories,” IEEE Transaction on Pattern Analysis And Machine Intelligence, vol.28, no.4, pp.594–611, 2006. doi: 10.1109/TPAMI.2006.79
    [2]
    C. Xing, N. Rostamzadeh, B. Oreshkin, et al., “Adaptive cross-modal few-shot learning,” in Proceedings of the 33rd International Conference on Neural Information Processing Systems (NIPS’19), Vancouver City, Canada, pp.4847–4857, 2019.
    [3]
    C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in Proceedings of the 34th International Conference on Machine Learning (ICML), Sydney, Australia, pp.1126–1135, 2017.
    [4]
    M. Nikhil, R. Mostafa, X. Chen, et al., “A simple neural attentive meta-learner,” arXiv preprint, arXiv: 1707.03141, 2018.
    [5]
    S. Ravi and H. Larochelle, “Optimization as a model for few-shot learning,” The 5th International Conference on Learning Representations (ICLR 2017 Oral), Toulon, France, https://openreview. net/forum?id=rJY0-Kcll, 2017.
    [6]
    Y. Liu, J. Lee, M. Park, et al., “Learning to propagate labels: Transductive propagation network for few-shot learning,” arXiv preprint, arXiv: 1805.10002, 2019.
    [7]
    M. Ren, R. Liao, E. Fetaya, et al., “Incremental few-shot learning with attention attractor networks,” in Proceedings of 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver City, Canada, pp.5276–5286, 2019.
    [8]
    W. Y. Sung, D. Y. Kim, S. Jun, et al., “XtarNet: Learning to extract task-adaptive representation for incremental few-shot learning,” arXiv preprint , arXiv: 2003.08561 , 2020.
    [9]
    V. OriolL, B. Charles, L. Timothy, et al., “Matching networks for one shot learning,” in Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS’16), Barcelona, Spain, pp.3630–3638, 2016.
    [10]
    J. Snell, K. Swersky, and R. S. Zemel, “Prototypical networks for few-shot learning,” in Proceedings of 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, USA, pp.4077–4087, 2017.
    [11]
    X. Jiang, M. Havaei, F. Varno, et al., “Learning to learn with conditional class dependencies,” The 7th International Conference on Learning Representations (ICLR (Poster) 2019), New Orleans, Louisiana, USA, https://openreview.net/forum?id=BJfOXnActQ, 2019.
    [12]
    M. Ren, E. Triantafillou, S. Ravi, et al., “Meta-learning for semi-supervised few-shot classification,” arXiv preprint, arXiv: 1803.00676, 2018.
    [13]
    F. Sung, Y. Yang, L. Zhang, et al., “Learning to compare: Relation network for few-shot learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, USA, pp.1199–1208, 2018.
    [14]
    Y. Yu, L. Feng, G. G. Wang, et al., “A few-shot learning model based on semi-supervised with pseudo label,” Acta Electronica Sinica, vol.47, no.11, pp.2284–2291, 2019.
    [15]
    A. Frome, G. S. Corrado, J. Shlens, et al., “DeViSE: A deep visual-semantic embedding model,” in Proceedings of the 27th Conference and Workshop on Neural Information Processing Systems (NeurIPS), Lake Tahoe, USA, pp.2121–2129, 2013.
    [16]
    T. Y. H. Hubert, L. K. Huang, and R. Salakhutdinov, “Learning robust visual-semantic embeddings,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, pp.3571–3580, 2017.
    [17]
    E. Schonfeld, S. Ebrahimi, S. Sinha, et al., “Generalized zero-and few-shot learning via aligned variational autoencoders,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), California, USA, pp.8247–8255, 2019.
    [18]
    B. Oreshkin, L. P. Rodriguez, and A. Lacoste, “TADAM: Task dependent adaptive metric for improved few-shot learning,” in Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS), Montreal, Canada, pp.719–729, 2018.
    [19]
    E. Perez, F. Strub, V. H. De, et al., “FiLM: Visual reasoning with a general conditioning layer”, arXiv preprint, arXiv: 1709.07871, 2018.
    [20]
    F. Lyu, L. Y. Li, S. S. Victor, et al., “Multi-label image classification via coarse-to-fine attention,” Chinese Journal of Electronics, vol.28, no.6, pp.1118–1126, 2019. doi: 10.1049/cje.2019.07.015
    [21]
    O. Vinyals, C. Blundell, T. Lillicrap, et al., “Matching networks for one shot learning,” in Proceedings of the 30th Neural Information Processing Systems (NeurIPS), Barcelona, Spain, pp.3630–3638, 2016.
    [22]
    X. S. Wang, Y. R. Li, and Y. H. Cheng, “Hyperspectral image classification based on unsupervised heterogeneous domain adaptation cycleGan,” Chinese Journal of Electronics, vol.29, no.4, pp.608–614, 2020. doi: 10.1049/cje.2020.05.003
    [23]
    J. Pennington, R. Socher, and C. D. Manning, “Glove: Global vectors for word representation,” in Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, pp.1532–1543, 2014.
  • 加载中

    Figures(3)  / Tables(6)
