Thesis Details
Niceness assumptions for Learning Algorithms
Kushagra, Shrinu
University of Waterloo
Keywords: Probabilistic Lipschitzness; Retaining Distance; Dimensionality Reduction; PCA; Computer Science
Others: https://uwspace.uwaterloo.ca/bitstream/10012/8596/3/Kushagra_Shrinu.pdf
Canada | English
Source: UWSPACE Waterloo Institutional Repository
【 Abstract 】

Various machine learning algorithms like Neural Networks, Linear Regression, Feature Learning etc. are being employed successfully in a wide variety of applications like computer vision, speech recognition, bioinformatics etc. However, many of the underlying learning problems have been shown to be NP-Hard, and some are even NP-Hard to approximate. The intuition behind the success of these algorithms is that in practical applications the input data is not "worst-case" and has certain "nice" properties. In this thesis, we take a few small steps towards bridging the apparent gap between what is predicted by theory and what actually happens in practice. We consider two different niceness assumptions.

The notion of Metric Distortion is fairly common for dimensionality reduction techniques. The goal is to obtain reduction techniques such that the distortion is small for all pairs of points. We show via an example that Metric Distortion is not good at modeling dimensionality reduction techniques which would perform quite well in practice. We introduce Retaining Distances, a probabilistic notion for modeling dimensionality reduction techniques which preserve most of the inter-point distances. Retaining Distances can be viewed as a relaxation of Metric Distortion. We prove that common techniques like PCA can be modeled by our notion.

Another niceness assumption inherent in many machine learning algorithms is that "close points tend to have the same label". A notion of Probabilistic Lipschitzness (PL) was introduced by Urner et al. [28] to capture this intuition. In this work, we propose a new definition of PL. We show that the two definitions are orthogonal to one another, in the sense that one is not implied by (or a relaxation of) the other. We give sample complexity upper bounds for Nearest Neighbor under this new definition.

The crux of the thesis is combining the two notions to show that information (niceness) is preserved across dimensions. We prove that if we have PL in the higher dimension and a dimensionality reduction technique retains distances, then we have PL in the reduced dimension as well. That is, a distance-retaining reduction preserves PL. In other words, the niceness properties that existed in the original dimension also exist in the reduced-dimension space.

Towards the end, we validate both our notions experimentally. We show how our notion of retaining distances may be employed in practice to capture the "usefulness" of a reduction technique. We also perform experiments to show how the two notions of PL compare in practice.
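For orientation, the classical worst-case requirement on an embedding f : (X, d_X) -> (Y, d_Y) is low metric distortion: f has distortion at most c >= 1 if, for some scaling factor r > 0 and all pairs x, x',

\[
r \, d_X(x, x') \;\le\; d_Y(f(x), f(x')) \;\le\; c \, r \, d_X(x, x').
\]

The abstract does not spell out the thesis's Retaining Distances definition; one plausible probabilistic relaxation, stated here only as an illustrative sketch, asks that the distances of most random pairs be approximately preserved,

\[
\Pr_{x, x' \sim D}\Big[(1-\varepsilon)\, d_X(x, x') \;\le\; d_Y(f(x), f(x')) \;\le\; (1+\varepsilon)\, d_X(x, x')\Big] \;\ge\; 1 - \delta,
\]

where D is the data distribution and epsilon, delta are parameters of the guarantee rather than notation taken from the thesis.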
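The Probabilistic Lipschitzness assumption of Urner et al. [28] can be sketched as follows (one standard formulation; the thesis's variant and its new definition may differ in the exact quantifiers): a labeling function \ell : X -> {0,1} is \phi-Lipschitz with respect to a distribution D over X if, for every \lambda > 0,

\[
\Pr_{x \sim D}\Big[\Pr_{y \sim D}\big[\ell(x) \ne \ell(y) \;\wedge\; \|x - y\| \le \lambda\big] > 0\Big] \;\le\; \phi(\lambda).
\]

The mass of points lying \lambda-close to an oppositely labeled region is thus controlled by \phi(\lambda), and the intuition "close points tend to have the same label" corresponds to \phi(\lambda) going to 0 as \lambda goes to 0.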
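As a rough illustration of how distance retention might be measured empirically for a concrete technique such as PCA, the following Python sketch (using numpy, scipy and scikit-learn; the function name, tolerance parameter and synthetic data are illustrative and not the thesis's experimental protocol) computes the fraction of point pairs whose distance after the reduction stays within a multiplicative tolerance of the original distance:

import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA

def fraction_of_retained_distances(X, n_components, eps=0.2):
    # Project the data and compare all pairwise Euclidean distances
    # before and after the reduction.
    Z = PCA(n_components=n_components).fit_transform(X)
    d_orig = pdist(X)   # condensed vector of original pairwise distances
    d_red = pdist(Z)    # pairwise distances in the reduced space
    # A pair "retains" its distance if the reduced distance is within a
    # (1 +/- eps) multiplicative factor of the original one.
    retained = np.abs(d_red - d_orig) <= eps * d_orig
    return retained.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic data with low intrinsic dimension: a 5-dimensional signal
    # embedded in 50 dimensions plus a little noise, so a 10-component PCA
    # should retain almost all pairwise distances.
    W = rng.normal(size=(5, 50))
    X = rng.normal(size=(500, 5)) @ W + 0.01 * rng.normal(size=(500, 50))
    print(fraction_of_retained_distances(X, n_components=10, eps=0.2))

Since PCA is an orthogonal projection of centered data, pairwise distances can only shrink, so the check above effectively measures how little the distances contract under the reduction.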

【 Preview 】
Attachment List
Files | Size | Format | View
Niceness assumptions for Learning Algorithms | 402KB | PDF | download
Document Metrics
Downloads: 7    Views: 24