DTI-GNet: Dynamic Topology Inferenced Graph Convolution Network for Dance Action Recognition
Graphical Abstract
(figure not included)
Abstract
As an extension of human action recognition, dance action recognition has become a significant research area with potential applications in dance education, entertainment, artistic preservation and cultural heritage protection. However, current human action recognition methods struggle to capture the rich geometric and physical characteristics of dance actions owing to their diversity, high complexity and individual variation in execution. In this paper, a dynamic topology inferenced graph convolution network (DTI-GNet) is proposed for dance action recognition. First, a bone-joint feature embedding encoding module (BJ-EM) is devised to infer the geometric and physical characteristics hidden in spatial structures and temporal dynamics during action performance, aiming to capture action-specific bone-joint dependencies. Second, a spatial-temporal dynamic topological encoding module (STD-EM) is specifically designed to exploit joint-bone geometric and physical properties, relaxing the restrictions of a fixed topology and alleviating the over-smoothing problem encountered by stacked graph convolution layers with a rigid topology. Finally, a dynamic topology inferenced spatial-temporal graph convolution layer (DTI-GConv) is developed as the fundamental unit of DTI-GNet, exploring the co-occurrence features and inter-dependencies between joints. Experimental results on two dance action datasets, MSDanceAction and InDanceAction, demonstrate the superiority of the proposed method for dance action recognition.
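To make the notion of "dynamic topology inference" concrete, the following is a minimal sketch of the general idea behind such layers: instead of convolving over a fixed skeleton adjacency matrix alone, a data-dependent adjacency is inferred from pairwise feature affinities and combined with the skeletal one. This is an illustrative assumption about the mechanism (the function names, embedding dimensions, and the additive combination of adjacencies are hypothetical), not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax used to normalize inferred affinities.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_graph_conv(X, A_skel, W_theta, W_phi, W_out):
    """One sketched dynamic-topology graph convolution step.

    X       : (N, C) per-joint features
    A_skel  : (N, N) fixed skeleton adjacency
    W_theta, W_phi : (C, d) embedding weights for affinity inference
    W_out   : (C, C_out) output projection
    """
    theta = X @ W_theta                       # (N, d) "query" embedding
    phi = X @ W_phi                           # (N, d) "key" embedding
    A_dyn = softmax(theta @ phi.T, axis=-1)   # inferred, sample-specific topology
    A = A_skel + A_dyn                        # relax the fixed skeleton topology
    return A @ X @ W_out                      # aggregate neighbors, then project

# Toy usage with random weights (5 joints, 8-dim features).
rng = np.random.default_rng(0)
N, C, d, C_out = 5, 8, 4, 8
X = rng.standard_normal((N, C))
A_skel = np.eye(N)                            # self-loops only, for illustration
out = dynamic_graph_conv(X, A_skel,
                         rng.standard_normal((C, d)),
                         rng.standard_normal((C, d)),
                         rng.standard_normal((C, C_out)))
print(out.shape)  # (5, 8)
```

Because the inferred adjacency varies with the input features, stacked layers of this form are not repeatedly multiplying by one rigid matrix, which is the intuition behind the over-smoothing relief the abstract mentions.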