IEEE Access
Mobile Peer-to-Peer Assisted Coded Streaming
Peter Ekler [1], Patrik J. Braun [1], Adam Budai [1], Marton Sipos [1], Janos Levendovszky [2], Frank H. P. Fitzek [3]
[1] Department of Automation and Applied Informatics, Budapest University of Technology and Economics, Budapest, Hungary
[2] Department of Networked Systems and Services, Budapest University of Technology and Economics, Budapest, Hungary
[3] Deutsche Telekom Chair of Communication Networks, Technische Universität
Keywords: caching; analysis; network coding; peer-to-peer; system implementation; video streaming
DOI: 10.1109/ACCESS.2019.2950800
Source: DOAJ
Abstract
Current video streaming services use a conventional client-server network topology that puts a heavy load on content servers. Previous work has shown that Peer-to-Peer (P2P) assisted streaming solutions can potentially reduce this load. However, implementing P2P-assisted streaming poses several challenges in modern networks. Users tend to stream videos on the go, using their mobile devices. This mobility makes the network difficult to orchestrate. Furthermore, peers have to contribute their storage to the network, which is challenging, since mobile devices have limited resources compared to desktop machines. In this paper, we introduce an analytical framework for mobile P2P-assisted streaming to estimate the server load, which we define as the minimum required server upload rate. Using our framework, we evaluate four caching strategies: infinite cache as a baseline, first-in first-out (FIFO), random, and Random Linear Network Coded (RLNC) cache. We verify our analytical results with empirical data obtained through extensive measurements on our working P2P system. Our results show that with the FIFO, random, and RLNC caching strategies, the server load converges to that of the infinite cache as the cache size increases. With a limit of 5 P2P connections per peer, we show that with random caching, peers can store 40% fewer packets and still achieve the same benefit as with FIFO caching. With RLNC caching, storing 50% fewer packets is enough to achieve the same benefit.
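The core idea behind the RLNC cache mentioned above is that peers store random linear combinations of a generation's packets rather than the packets themselves, so almost any set of enough coded packets collected from peers suffices to reconstruct the generation. The sketch below is illustrative only (the function names are not from the paper) and uses GF(2) coefficients, i.e. bitwise XOR, for brevity; practical RLNC systems typically operate over a larger field such as GF(2^8).

```python
import random

def rlnc_encode(packets, seed=None):
    """Produce one coded packet: a random GF(2) linear combination
    (bitwise XOR) of the generation's packets, with its coefficient
    vector attached so receivers can decode."""
    rng = random.Random(seed)
    coeffs = [rng.randint(0, 1) for _ in packets]
    while not any(coeffs):           # re-draw the all-zero vector
        coeffs = [rng.randint(0, 1) for _ in packets]
    payload = bytearray(len(packets[0]))
    for c, pkt in zip(coeffs, packets):
        if c:
            for i, b in enumerate(pkt):
                payload[i] ^= b
    return coeffs, bytes(payload)

def rlnc_decode(coded, n):
    """Gauss-Jordan elimination over GF(2): recover the n original
    packets from >= n coded packets whose coefficient vectors are
    full rank (raises StopIteration if they are not)."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(n):
        pivot = next(r for r in range(col, len(rows)) if rows[r][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                for j in range(n):
                    rows[r][0][j] ^= rows[col][0][j]
                for i in range(len(rows[r][1])):
                    rows[r][1][i] ^= rows[col][1][i]
    return [bytes(rows[i][1]) for i in range(n)]
```

A caching peer can thus keep any fixed number of coded packets and still be useful to downloaders, since each coded packet is innovative with high probability; this is why, in the paper's measurements, an RLNC cache matches the FIFO cache's benefit while storing substantially fewer packets.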
License
Unknown