Open-World Class Incremental Learning with Adaptive Threshold
DOI: https://doi.org/10.54097/e3fv8w77

Keywords: Class incremental learning, Open world, Out-of-distribution detection

Abstract
Existing class incremental learning (CIL) settings are mainly based on the closed-world assumption, where both training and testing sets contain only in-distribution (ID) samples. However, CIL methods are typically deployed in open-world scenarios where out-of-distribution (OOD) samples are widespread, leading to unpredictable behavior of intelligent agents. The OOD categories may also change as the incremental tasks progress. In this paper, we focus on a realistic and challenging new setting called Open-World Class Incremental Learning (OWCIL). Specifically, OWCIL accumulates OOD categories during testing to simulate the open-world continual learning scenario, and it provides no OOD samples for model training. Existing methods integrate techniques for CIL and OOD detection to address scenarios similar to OWCIL; however, they either fail to handle evolving OOD categories or require OOD samples during training, limiting their performance under OWCIL. We propose an adaptive threshold (AT) method to handle the accumulated OOD categories. Additionally, a discriminative optimization method is introduced to determine the threshold without OOD samples. The effectiveness of our proposed method has been validated through extensive experiments on multiple benchmark datasets under OWCIL. Detailed analysis shows that both AT and discriminative optimization clearly boost performance.
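As background for the setting, threshold-based OOD rejection can be sketched with the maximum-softmax-probability baseline of Hendrycks and Gimpel [14]: a test sample whose top softmax confidence falls below a threshold is flagged as unknown. This is a minimal illustration only; the fixed `threshold` value and function names here are assumptions, whereas the paper's contribution is precisely to adapt that threshold across incremental tasks without any OOD training samples.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_open_world(logits, threshold):
    """Classify each sample, rejecting low-confidence ones as OOD.

    Returns the argmax class index per sample, or -1 ("unknown")
    when the maximum softmax probability is below `threshold`.
    """
    probs = softmax(np.asarray(logits, dtype=float))
    conf = probs.max(axis=-1)           # top-class confidence
    preds = probs.argmax(axis=-1)       # predicted ID class
    preds[conf < threshold] = -1        # reject as OOD
    return preds

# A confident ID sample vs. a near-uniform (likely OOD) sample.
logits = [[6.0, 0.5, 0.2],   # peaked logits -> accepted as class 0
          [1.0, 0.9, 1.1]]   # flat logits  -> rejected as unknown
print(predict_open_world(logits, threshold=0.7))  # -> [ 0 -1]
```

Under OWCIL, a single fixed threshold like the 0.7 above becomes unreliable as new ID classes arrive and the OOD pool grows, which motivates adapting it per task.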
References
[1] J. Read, A. Bifet, B. Pfahringer, and G. Holmes. Batch-incremental versus instance-incremental learning in dynamic and evolving data. In Advances in Intelligent Data Analysis XI: 11th International Symposium, IDA 2012, pages 313–323. Springer, 2012.
[2] G. M. Van de Ven, T. Tuytelaars, and A. S. Tolias. Three types of incremental learning. Nature Machine Intelligence, 4(12):1185–1197, 2022.
[3] L. Wang, X. Zhang, H. Su, and J. Zhu. A comprehensive survey of continual learning: Theory, method and application. arXiv preprint arXiv:2302.00487, 2023.
[4] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017.
[5] Z. Li and D. Hoiem. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12):2935–2947, 2017.
[6] S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert. iCaRL: Incremental classifier and representation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2001–2010, 2017.
[7] F.-Y. Wang, D.-W. Zhou, H.-J. Ye, and D.-C. Zhan. FOSTER: Feature boosting and compression for class-incremental learning. In European Conference on Computer Vision, pages 398–414. Springer, 2022.
[8] S. Yan, J. Xie, and X. He. DER: Dynamically expandable representation for class incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3014–3023, 2021.
[9] B. Zhao, X. Xiao, G. Gan, B. Zhang, and S.-T. Xia. Maintaining discrimination and fairness in class incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13208–13217, 2020.
[10] G. Petit, A. Popescu, H. Schindler, D. Picard, and B. Delezoide. FeTrIL: Feature translation for exemplar-free class-incremental learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3911–3920, 2023.
[11] F. Zhu, Z. Cheng, X.-Y. Zhang, and C.-L. Liu. Class-incremental learning via dual augmentation. Advances in Neural Information Processing Systems, 34:14306–14318, 2021.
[12] F. Zhu, X.-Y. Zhang, C. Wang, F. Yin, and C.-L. Liu. Prototype augmentation and self-supervision for incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5871–5880, 2021.
[13] K. Zhu, W. Zhai, Y. Cao, J. Luo, and Z.-J. Zha. Self-sustaining representation expansion for non-exemplar class-incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9296–9305, 2022.
[14] D. Hendrycks and K. Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136, 2016.
[15] W. Liu, X. Wang, J. Owens, and Y. Li. Energy-based out-of-distribution detection. Advances in Neural Information Processing Systems, 33:21464–21475, 2020.
[16] M. Sensoy, L. Kaplan, and M. Kandemir. Evidential deep learning to quantify classification uncertainty. Advances in Neural Information Processing Systems, 31, 2018.
[17] E. Aguilar, B. Raducanu, P. Radeva, and J. Van de Weijer. Continual evidential deep learning for out-of-distribution detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3444–3454, 2023.
[18] R. Aljundi, D. O. Reino, N. Chumerin, and R. E. Turner. Continual novelty detection. In Conference on Lifelong Learning Agents, pages 1004–1025. PMLR, 2022.
[19] G. Kim, B. Liu, and Z. Ke. A multi-head model for continual learning via out-of-distribution replay. In Conference on Lifelong Learning Agents, pages 548–563. PMLR, 2022.
[20] T. Ahmad, A. R. Dhamija, M. Jafarzadeh, S. Cruz, R. Rabinowitz, C. Li, and T. E. Boult. Variable few shot class incremental and open world learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3688–3699, 2022.
[21] A. Bendale and T. Boult. Towards open world recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1893–1902, 2015.
[22] E. M. Rudd, L. P. Jain, W. J. Scheirer, and T. E. Boult. The extreme value machine. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(3):762–768, 2017.
[23] D.-W. Zhou, Y. Yang, and D.-C. Zhan. Learning to classify with incremental new class. IEEE Transactions on Neural Networks and Learning Systems, 33(6):2429–2443, 2021.
[24] M. Masana, X. Liu, B. Twardowski, M. Menta, A. D. Bagdanov, and J. Van De Weijer. Class-incremental learning: Survey and performance evaluation on image classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(5):5513–5533, 2022.
[25] D.-W. Zhou, Q.-W. Wang, Z.-H. Qi, H.-J. Ye, D.-C. Zhan, and Z. Liu. Deep class-incremental learning: A survey. arXiv preprint arXiv:2302.03648, 2023.
[26] J. L. McClelland, B. L. McNaughton, and R. C. O'Reilly. Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102(3):419, 1995.
[27] M. McCloskey and N. J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of Learning and Motivation, volume 24, pages 109–165. Elsevier, 1989.
[28] J. Bang, H. Kim, Y. Yoo, J.-W. Ha, and J. Choi. Rainbow memory: Continual learning with a memory of diverse samples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8218–8227, 2021.
[29] D. Rolnick, A. Ahuja, J. Schwarz, T. Lillicrap, and G. Wayne. Experience replay for continual learning. Advances in Neural Information Processing Systems, 32, 2019.
[30] Y. Yang, D.-W. Zhou, D.-C. Zhan, H. Xiong, and Y. Jiang. Adaptive deep models for incremental learning: Considering capacity scalability and sustainability. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 74–82, 2019.
[31] F. Zenke, B. Poole, and S. Ganguli. Continual learning through synaptic intelligence. In International Conference on Machine Learning, pages 3987–3995. PMLR, 2017.
[32] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
[33] Z.-H. Zhou and Y. Jiang. NeC4.5: Neural ensemble based C4.5. IEEE Transactions on Knowledge and Data Engineering, 16(6):770–773, 2004.
[34] D.-W. Zhou, Q.-W. Wang, H.-J. Ye, and D.-C. Zhan. A model or 603 exemplars: Towards memory-efficient class-incremental learning. arXiv preprint arXiv:2205.13218, 2022.
[35] J. Yang, K. Zhou, Y. Li, and Z. Liu. Generalized out-of-distribution detection: A survey. arXiv preprint arXiv:2110.11334, 2021.
[36] D. Hendrycks, M. Mazeika, and T. Dietterich. Deep anomaly detection with outlier exposure. arXiv preprint arXiv:1812.04606, 2018.
[37] J. He and F. Zhu. Out-of-distribution detection in unsupervised continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3850–3855, 2022.
[38] G. Kim, C. Xiao, T. Konishi, and B. Liu. Learnability and algorithm for continual learning. In International Conference on Machine Learning, pages 16877–16896. PMLR, 2023.
[39] A. Rios, N. Ahuja, I. Ndiour, U. Genc, L. Itti, and O. Tickoo. incDFM: Incremental deep feature modeling for continual novelty detection. In European Conference on Computer Vision, pages 588–604. Springer, 2022.
[40] A. Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, University of Toronto, 2009.
[41] Y. Le and X. Yang. Tiny ImageNet visual recognition challenge. CS 231N, 7(7):3, 2015.
[42] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
[43] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115:211–252, 2015.
[44] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6):1452–1464, 2017.
[45] D.-W. Zhou, F.-Y. Wang, H.-J. Ye, and D.-C. Zhan. PyCIL: A Python toolbox for class-incremental learning. arXiv preprint arXiv:2112.12533, 2021.
License
Copyright (c) 2025 Computer Life

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.