Gesture recognition by combining spatio-temporal mask and spatial 2D position encoding
Abstract
Objective During feature extraction from dynamic gesture sequences, ignoring the correlations between the fingers of different dynamic gestures is an important cause of low gesture recognition rates. To address this problem, a gesture recognition method based on spatio-temporal position encoding and masks is proposed; to the best of the authors' knowledge, this is the first work to apply spatial two-dimensional position encoding to hand joints. Method First, a spatio-temporal graph is constructed from the hand joint sequence. A spatial two-dimensional encoding is generated from the planar coordinates of the joints and fused with a one-dimensional encoding along the time axis to form the spatio-temporal position encoding of each joint, which effectively handles abnormal poses in space while avoiding disordered frames in time. Then, the spatio-temporal graph is partitioned into blocks according to the biological structure of the human hand, and spatial self-attention with a spatial mask captures the latent information between fingers. A time-dimension expansion strategy is adopted, and temporal self-attention with a temporal mask captures the dynamic evolution of long finger sequences. Result On the DHG-14/28 (dynamic hand gesture 14/28) dataset, the average recognition rate of the proposed algorithm is 4.47% higher than that of the HPEV (hand posture evolution volume) algorithm and 2.71% higher than that of the MS-ISTGCN (multi-stream improved spatio-temporal graph convolutional network) algorithm; on the SHREC'17 track dataset, it is 0.47% higher than HPEV on average. Ablation experiments demonstrate the soundness of the proposed strategies. Conclusion Extensive experimental evaluation verifies that the model built on the block partitioning strategy and spatio-temporal position encoding solves the above problems well and improves the gesture recognition rate.
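As a rough illustration of the encoding step described above, the sketch below builds a sinusoidal two-dimensional spatial encoding from projected joint coordinates and a one-dimensional temporal encoding per frame. The channel split between axes, the grid resolution `n_cells`, the frequency schedule, additive fusion of the two codes, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def spatial_2d_encoding(xy, d_model=64, n_cells=16):
    """Sinusoidal 2D spatial encoding of projected joint coordinates (illustrative sketch).

    xy: (N, 2) projected (x, y) coordinates of N hand joints, assumed normalized to [0, 1].
    Returns an (N, d_model) encoding; the first half of the channels encodes x, the second half y.
    """
    half = d_model // 2
    # Transformer-style frequency schedule; n_cells scales the coordinates so that
    # joints falling into different grid cells receive distinguishable phases.
    freq = torch.exp(-torch.arange(0, half, 2).float() / half * torch.log(torch.tensor(10000.0)))
    pe = torch.zeros(xy.shape[0], d_model)
    for axis, offset in ((0, 0), (1, half)):
        pos = xy[:, axis:axis + 1] * n_cells                        # (N, 1) cell-scaled coordinate
        pe[:, offset:offset + half:2] = torch.sin(pos * freq)       # even channels of this half
        pe[:, offset + 1:offset + half:2] = torch.cos(pos * freq)   # odd channels of this half
    return pe

def temporal_1d_encoding(num_frames, d_model=64):
    """Standard 1D sinusoidal encoding over the frame index (fused with the spatial code, e.g. by addition)."""
    t = torch.arange(num_frames).float().unsqueeze(1)
    freq = torch.exp(-torch.arange(0, d_model, 2).float() / d_model * torch.log(torch.tensor(10000.0)))
    pe = torch.zeros(num_frames, d_model)
    pe[:, 0::2] = torch.sin(t * freq)
    pe[:, 1::2] = torch.cos(t * freq)
    return pe
```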
Keywords
gesture recognition; self-attention; spatial two-dimensional position encoding; spatio-temporal mask; hand segmentation
Gesture recognition by combining spatio-temporal mask and spatial 2D position encoding
Deng Gansen 1, Ding Wenwen 1, Yang Chao 1, Ding Chongyang 2
(1. School of Mathematical Sciences, Huaibei Normal University, Huaibei 235000, China; 2. School of Computer Science and Technology, Xidian University, Xi'an 710071, China)
Abstract
Objective Gesture recognition methods often neglect the correlations between fingers and pay excessive attention to individual joint features, which is an important cause of low gesture recognition rates. For example, the index finger and the thumb are not physically connected, but their interaction is essential for recognizing the "pinch" action. The low recognition rate therefore also stems from the inability to encode the spatial position of hand joints properly. To capture the correlations between fingers, dividing the hand joints into blocks is proposed; to address the position problem, the two-dimensional position of each joint is encoded from its projection coordinates. To the best of the authors' knowledge, this is the first study to encode the two-dimensional spatial position of hand joints. Method A spatio-temporal graph is generated from the gesture sequence; it contains the physical connections of the joints and their temporal information, and the spatial and temporal characteristics are learned with mask operations. From the three-dimensional coordinates of each joint, two-dimensional projection coordinates are obtained and fed into a two-dimensional spatial position encoder composed of sine and cosine functions with different frequencies. The plane containing the projection coordinates is divided into grid cells, the sine-cosine encoding is computed in each cell, and the encodings of all cells are combined to produce the final spatial two-dimensional position code. Embedding this code into the joint features not only strengthens the spatial structure among the joints but also avoids joint disorder during movement. A graph convolutional network then aggregates the spatially encoded features of each joint and its neighbors, and the resulting spatio-temporal graph features are fed into the spatial self-attention module to extract inter-finger correlations. Taking each finger as the research object, the joints of the spatio-temporal graph are divided into blocks according to the biological structure of the human hand. Each finger undergoes a learnable linear transformation to generate its query (Q), key (K), and value (V) feature vectors. The self-attention mechanism then computes the correlations between fingers within each frame of the spatio-temporal graph, the correlation weights between fingers are obtained by combining a spatial mask matrix, and the finger features are updated (a minimal code sketch of this block-wise attention follows the abstract). While the finger features are updated, the spatial mask matrix disconnects the temporal relations between fingers in the spatio-temporal graph, preventing the time dimension from affecting the spatial correlation weight matrix. A temporal self-attention module is used analogously to learn the temporal features of the fingers. First, each frame is embedded with a one-dimensional temporal position code so that the model receives the temporal order of the frames. A time-dimension expansion strategy fuses the features of every two adjacent frames to capture long-range inter-frame correlations. A learnable linear transformation then generates the query (Q), key (K), and value (V) feature vectors for each frame.
Finally, the self-attention mechanism computes the correlations between frames of the spatio-temporal graph; the inter-frame correlation weight matrix is obtained by combining a temporal mask matrix, and the features of each frame are updated. The temporal mask matrix likewise prevents the spatial dimension from affecting the temporal correlation weight matrix. A fully connected network, a ReLU activation function, and layer normalization are appended to each attention module to improve training efficiency, and the model finally outputs the learned feature vector for gesture recognition. Result The model is tested on two challenging datasets, DHG-14/28 and SHREC'17 track. The experimental results show that the model achieves the best recognition rate on DHG-14/28, which is on average 4.47% and 2.71% higher than the HPEV and MS-ISTGCN algorithms, respectively. On the SHREC'17 track dataset, the algorithm is on average 0.47% higher than the HPEV algorithm. The ablation experiments prove the necessity of the spatial two-dimensional position encoding, and further tests show that the model achieves its best recognition rate when the joint features have 64 dimensions and the number of self-attention heads is 8. Conclusion Extensive experimental evaluation verifies that the network model built with the block partitioning strategy and the spatial two-dimensional position encoding not only strengthens the spatial structure of the joints but also improves the gesture recognition rate by using self-attention to learn the correlations between fingers that are not physically connected.
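The block-wise spatial self-attention with a spatial mask referred to above could look roughly like the following sketch. The mean pooling of joints into finger tokens, the single attention head, the additive mask convention, and all names (FingerSpatialAttention, finger_index, spatial_mask) are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FingerSpatialAttention(nn.Module):
    """Per-frame self-attention over finger blocks (illustrative sketch, not the paper's code).

    Joints are grouped into finger blocks according to hand structure, each block is pooled
    to one token, and attention is computed inside every frame; an additive spatial mask keeps
    the weights within a frame so the temporal dimension cannot leak into them.
    """

    def __init__(self, d_model=64):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x, finger_index, spatial_mask=None):
        # x: (T, J, C) per-frame joint features; finger_index: (J,) block id of every joint
        n_blocks = int(finger_index.max()) + 1
        # mean-pool the joints of each finger block -> (T, n_blocks, C)
        blocks = torch.stack(
            [x[:, finger_index == b].mean(dim=1) for b in range(n_blocks)], dim=1)
        q, k, v = self.q(blocks), self.k(blocks), self.v(blocks)
        attn = q @ k.transpose(-2, -1) * self.scale   # (T, n_blocks, n_blocks) finger-to-finger weights
        if spatial_mask is not None:                  # additive mask, e.g. -inf where attention is forbidden
            attn = attn + spatial_mask
        return F.softmax(attn, dim=-1) @ v            # updated finger features
```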
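Similarly, the temporal side (adjacent-frame fusion, masked temporal self-attention, and the fully connected/ReLU/layer-normalization tail) might be sketched as follows. The pairwise fusion by concatenation, the residual connections, and every name here are again assumptions rather than the published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalBlock(nn.Module):
    """Masked temporal self-attention followed by an FFN with ReLU and layer normalization (sketch)."""

    def __init__(self, d_model=64):
        super().__init__()
        self.fuse = nn.Linear(2 * d_model, d_model)  # "time dimension expansion": fuse two adjacent frames
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.scale = d_model ** -0.5

    def forward(self, x, temporal_mask=None):
        # x: (T, C) frame-level features; T is assumed even so frames can be fused pairwise
        x = self.fuse(x.reshape(x.shape[0] // 2, -1))    # (T/2, C) fused frame tokens
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = q @ k.transpose(-2, -1) * self.scale      # (T/2, T/2) frame-to-frame weights
        if temporal_mask is not None:                    # additive mask, e.g. -inf for blocked pairs
            attn = attn + temporal_mask
        x = self.norm1(x + F.softmax(attn, dim=-1) @ v)  # attention update, then layer norm
        return self.norm2(x + self.ffn(x))               # FFN + ReLU tail, then layer norm
```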
Keywords
gesture recognition; self-attention; spatial two-dimensional position encoding; spatio-temporal mask; hand segmentation