We introduce an asymmetric sparse approximate embedding optimized for fast kernel comparison. In contrast to other methods, which perform an explicit approximate embedding using kernel PCA followed by a distance compression step in R^d and thereby lose information at both stages, our method uses the implicit kernel representation directly. In addition, we empirically demonstrate that our method requires no training step and can operate with a dictionary of random exemplars drawn from the dataset. We evaluate our method on three datasets commonly used in large-scale image retrieval.
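To make the idea concrete, here is a minimal sketch, not the authors' implementation, of one plausible reading of a sparse kernel embedding built from random exemplars: a sample is represented by its kernel similarities to a randomly chosen dictionary, sparsified by keeping only the top-k responses, with no training step. The RBF kernel, the dictionary size, and the top-k sparsification rule are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(x, Y, gamma=1.0):
    """RBF kernel values between a single point x and each row of Y.
    (Illustrative choice of kernel; the method is kernel-agnostic.)"""
    d2 = np.sum((Y - x) ** 2, axis=1)
    return np.exp(-gamma * d2)

def sparse_embed(x, dictionary, k=4, gamma=1.0):
    """Sparse embedding: kernel responses to the dictionary, top-k kept."""
    z = rbf_kernel(x, dictionary, gamma)
    idx = np.argsort(z)[-k:]      # indices of the k largest responses
    s = np.zeros_like(z)
    s[idx] = z[idx]               # zero out everything but the top-k
    return s

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 16))
# Dictionary of random exemplars sampled from the dataset itself -- no training.
dictionary = data[rng.choice(100, size=32, replace=False)]

q = data[0]
emb = sparse_embed(q, dictionary, k=4)
print(np.count_nonzero(emb))
```

Because each embedding has at most k nonzero entries, comparing two embedded samples costs O(k) rather than O(|dictionary|), which is the source of the fast kernel comparison the abstract refers to.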