Journal Article Details
IEEE Access
Group Activity Recognition by Using Effective Multiple Modality Relation Representation With Temporal-Spatial Attention
Dong Wang1  Dezhong Xu1  Lifang Wu1  Meng Jian1  Xu Liu1  Heng Fu1 
[1] Faculty of Information Technology, Beijing University of Technology, Beijing, China;
Keywords: group activity recognition; relation representation; motion representation; attention
DOI  :  10.1109/ACCESS.2020.2979742
Source: DOAJ
【 Abstract 】

Group activity recognition has received a great deal of interest because of its broad applications in sports analysis, autonomous vehicles, CCTV surveillance systems and video summarization systems. Most existing methods use only appearance features and seldom consider the underlying interaction information. In this work, a novel group activity recognition method is proposed based on multi-modal relation representation with temporal-spatial attention. First, we introduce an object relation module, which processes all objects in a scene simultaneously through interactions between their appearance features and geometry, thus allowing their relations to be modeled. Second, to extract effective motion features, an optical flow network is fine-tuned by using the action loss as the supervised signal. Then, we propose two types of inference models, opt-GRU and relation-GRU, which encode the object relationships and motion representations effectively and form discriminative frame-level feature representations. Finally, an attention-based temporal aggregation layer is proposed to integrate frame-level features with different weights and form effective video-level representations. We have performed extensive experiments on two popular datasets, the Volleyball dataset and the Collective Activity dataset, and achieved state-of-the-art performance on both.
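The attention-based temporal aggregation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the dot-product scoring vector `w` and the softmax weighting are assumed parameterizations standing in for the learned attention layer.

```python
import numpy as np

def attention_temporal_aggregation(frame_features, w):
    """Aggregate frame-level features into one video-level vector.

    frame_features: (T, D) array, one D-dim feature per frame.
    w: (D,) scoring vector (hypothetical stand-in for learned attention params).
    """
    scores = frame_features @ w                # (T,) unnormalized attention scores
    alpha = np.exp(scores - scores.max())      # numerically stable softmax
    alpha /= alpha.sum()                       # attention weights sum to 1 over time
    return alpha @ frame_features              # (D,) weighted video-level representation

# Toy usage: 5 frames with 8-dim frame-level features.
rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 8))
w = rng.standard_normal(8)
video_repr = attention_temporal_aggregation(feats, w)
```

The key property is that frames deemed more informative by the scoring function contribute more to the final video-level representation than uniform average pooling would allow.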

【 License 】

Unknown   
