The Internet is shaping our daily lives. On the one hand, social networks like Facebook and Twitter allow people to share their precious moments and opinions with virtually anyone around the world. On the other hand, services like Google, Netflix, and Amazon allow people to look up information, watch movies, and shop online anytime, anywhere. However, with this unprecedented level of connectivity comes the danger of being monitored, and there is an increasing tension between the need to share data and the need to preserve the privacy of Internet users. The need for privacy arises in three main contexts: (1) the global privacy context, in which private companies and public institutions release personal information about individuals to the public; (2) the local privacy context, in which individuals disclose their personal information to potentially malicious service providers; and (3) the multi-party privacy context, in which different parties cooperate to interactively compute a function defined over all the parties' data.

Differential privacy has recently emerged as a strong measure of privacy in all three contexts. Under differential privacy, privacy is achieved by randomizing the data before releasing it, which leads to a fundamental tradeoff between privacy and utility. In this thesis, we take a concrete step towards understanding the fundamental structure of privacy mechanisms that achieve the best privacy-utility tradeoff. This tradeoff is formulated as a constrained optimization problem: maximize utility subject to differential privacy constraints. We show, perhaps surprisingly, that in all three privacy contexts the optimal privacy mechanisms share the same combinatorial staircase structure. This result is a direct consequence of the geometry of the constraints that differential privacy imposes on the privatization mechanisms.
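As a minimal sketch of this formulation in standard ε-differential-privacy notation (the mechanism Q, the utility functional U, and the datasets D, D' are generic placeholders, not necessarily the thesis's exact notation):

```latex
% epsilon-differential privacy: the mechanism Q is a conditional
% distribution over outputs given the underlying data.  Q, U, D, D'
% are illustrative symbols; the thesis's own notation may differ.
\[
  Q(S \mid D) \;\le\; e^{\epsilon}\, Q(S \mid D')
  \qquad \text{for all neighboring datasets } D, D' \text{ and all output sets } S.
\]
% The privacy-utility tradeoff as a constrained optimization problem:
% maximize utility subject to the differential privacy constraints.
\[
  \max_{Q}\; U(Q)
  \qquad \text{subject to} \qquad
  Q(S \mid D) \le e^{\epsilon}\, Q(S \mid D')
  \ \ \text{for all neighboring } D, D' \text{ and all } S.
\]
```

The constraints are linear in Q, so the feasible set is a polytope in the space of privatization mechanisms; the staircase structure of the optimizers is tied to the geometry of this constraint set.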