Large-scale machine learning has recently risen to prominence in both industry and academia, driven by newly accessible data-collecting sensors and high-volume data storage devices. In industry, however, these capabilities have raised questions about the privacy implications of massively data-driven, subscription-based services that corporations offer to individuals. Recent lines of research have developed algorithms that scale in distributed machine learning environments while making certain privacy guarantees to subscribers, without hindering the quality of service the corporations are able to provide. In this work, we fully implement one such distributed optimization framework and rigorously test its parameterized convergence properties. We also develop a suite of disruptive and nondisruptive attacks designed to aggressively intrude upon subscribers' privacy and to glean subscribers' private data from information readily available within the framework's network. These attacks integrate seamlessly into the aforementioned distributed optimization framework and are shown to pose a genuine risk to subscribers' privacy.
A framework for privacy-preserving, distributed machine learning using gradient obfuscation
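As a rough illustration of the gradient-obfuscation idea named in the title, the sketch below shows workers clipping and noising their local gradients before a central server averages them. This is a minimal, generic example: the function names, the use of additive Gaussian noise, and parameters such as `clip_norm` and `noise_std` are assumptions for illustration only, not details taken from the framework described above.

```python
import numpy as np

def obfuscate_gradient(grad, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a local gradient and add Gaussian noise before it is shared.

    Generic noise-based obfuscation step; parameter names and values are
    illustrative assumptions, not taken from the paper.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    if norm > clip_norm:                      # bound each worker's contribution
        grad = grad * (clip_norm / norm)
    return grad + rng.normal(0.0, noise_std, size=grad.shape)

def aggregate(worker_grads):
    """Server-side averaging of the obfuscated gradients."""
    return np.mean(worker_grads, axis=0)

# Toy round: three workers share obfuscated gradients of a quadratic loss.
rng = np.random.default_rng(0)
w = np.zeros(5)
targets = [rng.normal(size=5) for _ in range(3)]   # each worker's private optimum
for step in range(100):
    grads = [obfuscate_gradient(w - t, rng=rng) for t in targets]
    w -= 0.1 * aggregate(grads)                    # gradient-descent update
```

In this toy setting the server only ever sees clipped, noised gradients, which is the kind of "information readily available within the framework's network" that the attacks described in the abstract attempt to exploit.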