| NEUROCOMPUTING | Volume: 329 |
| Stream loss: ConvNet learning for face verification using unlabeled videos in the wild | |
| Article | |
| Rashedi, Elaheh1  Barati, Elaheh1  Nokleby, Matthew2  Chen, Xue-wen1  | |
| [1] Wayne State Univ, Dept Comp Sci, Detroit, MI 48202 USA | |
| [2] Wayne State Univ, Dept Elect & Comp Engn, Detroit, MI 48202 USA | |
| Keywords: Convolutional neural network; Face verification; Loss learning; Video stream; | |
| DOI : 10.1016/j.neucom.2018.10.041 | |
| Source: Elsevier | |
【 Abstract 】
Face recognition tasks have seen significantly improved performance due to ConvNets. However, less attention has been given to face verification from videos. This paper makes two contributions along these lines. First, we propose a method, called stream loss, for learning ConvNets using unlabeled videos in the wild. Second, we present an approach for generating a face verification dataset from videos in which labeled streams are created automatically, without human annotation. Using this approach, we assembled a widely scalable dataset, FaceSequence, which includes 1.5M streams capturing approximately 500K individuals. Using this dataset, we trained our network to minimize the stream loss. The network achieves accuracy comparable to the state-of-the-art on the LFW and YTF datasets with much smaller model complexity. We also fine-tuned the network using the IJB-A dataset. The validation results show competitive accuracy compared with the best previous video face verification results. (C) 2018 Elsevier B.V. All rights reserved.
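The abstract does not give the exact formulation of the stream loss, but the idea it describes (frames from the same automatically labeled video stream should embed close together, frames from different streams far apart) can be illustrated with a minimal, hypothetical sketch. The function name, margin value, and triplet-style form below are assumptions for illustration only, not the paper's definition.

```python
# Hypothetical sketch of a stream-style verification loss (not the paper's exact
# formulation). Embeddings of frames from the same stream are pulled together,
# while embeddings from different streams are pushed apart beyond a margin.
import torch
import torch.nn.functional as F

def stream_loss_sketch(anchor, positive, negative, margin=0.5):
    """Triplet-style loss over L2-normalized frame embeddings.

    anchor, positive: frames drawn from the same automatically labeled stream.
    negative: a frame drawn from a different stream.
    All tensors have shape (batch, embedding_dim).
    """
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    n = F.normalize(negative, dim=1)
    d_pos = (a - p).pow(2).sum(dim=1)   # same-stream squared distance
    d_neg = (a - n).pow(2).sum(dim=1)   # cross-stream squared distance
    return F.relu(d_pos - d_neg + margin).mean()

# Usage with a hypothetical embedding ConvNet `net`:
# loss = stream_loss_sketch(net(frames_a), net(frames_p), net(frames_n))
```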
【 License 】
Free
【 Preview 】
| Files | Size | Format | View |
|---|---|---|---|
| 10_1016_j_neucom_2018_10_041.pdf | 3143KB | PDF | |