Citation: DAI Leichao, FENG Lin, SHANG Xinglin, et al., “Cross Modal Adaptive Few-Shot Learning Based on Task Dependence,” Chinese Journal of Electronics, vol.32, no.1, pp.85–96, 2023. doi: 10.23919/cje.2021.00.093
[1] F. F. Li, R. Fergus, and P. Perona, “One-shot learning of object categories,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.28, no.4, pp.594–611, 2006. doi: 10.1109/TPAMI.2006.79
[2] C. Xing, N. Rostamzadeh, B. Oreshkin, et al., “Adaptive cross-modal few-shot learning,” in Proceedings of the 33rd International Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, pp.4847–4857, 2019.
[3] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in Proceedings of the 34th International Conference on Machine Learning (ICML), Sydney, Australia, pp.1126–1135, 2017.
[4] N. Mishra, M. Rohaninejad, X. Chen, et al., “A simple neural attentive meta-learner,” arXiv preprint, arXiv:1707.03141, 2018.
[5] S. Ravi and H. Larochelle, “Optimization as a model for few-shot learning,” in The 5th International Conference on Learning Representations (ICLR 2017 Oral), Toulon, France, https://openreview.net/forum?id=rJY0-Kcll, 2017.
[6] Y. Liu, J. Lee, M. Park, et al., “Learning to propagate labels: Transductive propagation network for few-shot learning,” arXiv preprint, arXiv:1805.10002, 2019.
[7] M. Ren, R. Liao, E. Fetaya, et al., “Incremental few-shot learning with attention attractor networks,” in Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, pp.5276–5286, 2019.
[8] S. W. Yoon, D. Y. Kim, J. Seo, et al., “XtarNet: Learning to extract task-adaptive representation for incremental few-shot learning,” arXiv preprint, arXiv:2003.08561, 2020.
[9] O. Vinyals, C. Blundell, T. Lillicrap, et al., “Matching networks for one shot learning,” in Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS’16), Barcelona, Spain, pp.3630–3638, 2016.
[10] J. Snell, K. Swersky, and R. S. Zemel, “Prototypical networks for few-shot learning,” in Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, USA, pp.4077–4087, 2017.
[11] X. Jiang, M. Havaei, F. Varno, et al., “Learning to learn with conditional class dependencies,” in The 7th International Conference on Learning Representations (ICLR 2019 Poster), New Orleans, USA, https://openreview.net/forum?id=BJfOXnActQ, 2019.
[12] M. Ren, E. Triantafillou, S. Ravi, et al., “Meta-learning for semi-supervised few-shot classification,” arXiv preprint, arXiv:1803.00676, 2018.
[13] F. Sung, Y. Yang, L. Zhang, et al., “Learning to compare: Relation network for few-shot learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, USA, pp.1199–1208, 2018.
[14] Y. Yu, L. Feng, G. G. Wang, et al., “A few-shot learning model based on semi-supervised with pseudo label,” Acta Electronica Sinica, vol.47, no.11, pp.2284–2291, 2019.
[15] A. Frome, G. S. Corrado, J. Shlens, et al., “DeViSE: A deep visual-semantic embedding model,” in Proceedings of the 27th Conference on Neural Information Processing Systems (NIPS 2013), Lake Tahoe, USA, pp.2121–2129, 2013.
[16] Y. H. H. Tsai, L. K. Huang, and R. Salakhutdinov, “Learning robust visual-semantic embeddings,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, pp.3571–3580, 2017.
[17] E. Schonfeld, S. Ebrahimi, S. Sinha, et al., “Generalized zero- and few-shot learning via aligned variational autoencoders,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, pp.8247–8255, 2019.
[18] B. Oreshkin, P. Rodriguez Lopez, and A. Lacoste, “TADAM: Task dependent adaptive metric for improved few-shot learning,” in Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS), Montreal, Canada, pp.719–729, 2018.
[19] E. Perez, F. Strub, H. de Vries, et al., “FiLM: Visual reasoning with a general conditioning layer,” arXiv preprint, arXiv:1709.07871, 2018.
[20] F. Lyu, L. Y. Li, S. S. Victor, et al., “Multi-label image classification via coarse-to-fine attention,” Chinese Journal of Electronics, vol.28, no.6, pp.1118–1126, 2019. doi: 10.1049/cje.2019.07.015
[21] O. Vinyals, C. Blundell, T. Lillicrap, et al., “Matching networks for one shot learning,” in Proceedings of the 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, pp.3630–3638, 2016.
[22] X. S. Wang, Y. R. Li, and Y. H. Cheng, “Hyperspectral image classification based on unsupervised heterogeneous domain adaptation CycleGAN,” Chinese Journal of Electronics, vol.29, no.4, pp.608–614, 2020. doi: 10.1049/cje.2020.05.003
[23] J. Pennington, R. Socher, and C. D. Manning, “GloVe: Global vectors for word representation,” in Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, pp.1532–1543, 2014.