Sensors
Social Network Extraction and Analysis Based on Multimodal Dyadic Interaction
Sergio Escalera 1, Xavier Baró 1, Jordi Vitrià 1, Petia Radeva 1
[1] Centre de Visió per Computador, Campus UAB, Edifici O, Bellaterra, 08193 Barcelona, Spain
Keywords: social interaction; audio/visual data fusion; influence model; social network analysis
DOI: 10.3390/s120201702
Source: MDPI
【Abstract】
Social interactions are a very important component in people’s lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to the New York Times’ Blogging Heads opinion blog. The social network is represented as an oriented graph, whose directed links are determined by the Influence Model. The links’ weights are a measure of the “influence” one person has over the other. The states of the Influence Model encode audio/visual features automatically extracted from our videos using state-of-the-art algorithms. Our results are reported in terms of the accuracy of audio/visual data fusion for speaker segmentation, and of the centrality measures used to characterize the extracted social network.
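The abstract describes the social network as a directed graph whose edge weights encode pairwise "influence", later characterized via centrality measures. A minimal sketch of that representation is shown below; the speaker labels, edge weights, and the choice of weighted in-/out-degree centrality are illustrative assumptions, not the paper's actual data or full set of measures.

```python
# Minimal sketch (not the authors' implementation): a directed "influence"
# graph stored as an edge -> weight dict, with weighted degree centrality.
# Speaker names and weights are hypothetical placeholders.

def degree_centrality(edges, direction="out"):
    """Sum the weights of each node's outgoing (or incoming) edges."""
    scores = {}
    for (src, dst), w in edges.items():
        node = src if direction == "out" else dst
        scores[node] = scores.get(node, 0.0) + w
    return scores

# Hypothetical influence weights for one dyadic interaction A <-> B.
edges = {
    ("A", "B"): 0.7,  # A exerts strong influence over B
    ("B", "A"): 0.3,  # B exerts weak influence over A
}

print(degree_centrality(edges, "out"))  # {'A': 0.7, 'B': 0.3}
```

Out-degree centrality here reads as "how much influence a speaker exerts", while in-degree reads as "how much influence a speaker receives"; richer measures (e.g. eigenvector or betweenness centrality) follow the same weighted-directed-graph formulation.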
【License】
CC BY
© 2012 by the authors; licensee MDPI, Basel, Switzerland
【Preview】
| File | Size | Format |
|---|---|---|
| RO202003190046171ZK.pdf | 1526 KB | PDF |