Word embeddings are a popular representation of words in vector space, and their geometry reflects lexical semantics. This thesis further explores the geometric properties of word embeddings and examines their interaction with context representations. We propose a novel method for detecting whether a given word or phrase is used literally in a specific context, and apply it to three natural language processing tasks: idiomaticity, sarcasm, and metaphor detection. Extensive experiments show that this embedding-based method performs well across multiple languages.
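As a rough illustration of the kind of embedding-context interaction the thesis studies (not the thesis's actual method), one simple baseline for literal-usage detection scores a target word by the cosine similarity between its embedding and the centroid of its context's embeddings; a low score suggests the word is semantically distant from its context and may be used non-literally. The vectors and the `literal_usage_score` helper below are hypothetical toy examples.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def literal_usage_score(word_vec, context_vecs):
    """Score a target word against its context.

    The context is represented as the mean (centroid) of its word
    vectors; the target word is scored by cosine similarity to that
    centroid. Higher scores suggest a more literal usage.
    """
    context_vec = np.mean(context_vecs, axis=0)
    return cosine(word_vec, context_vec)

# Toy 3-dimensional vectors; a real system would use pretrained embeddings.
word = np.array([1.0, 0.0, 0.0])
literal_ctx = [np.array([0.9, 0.1, 0.0]), np.array([1.0, 0.2, 0.0])]
figurative_ctx = [np.array([0.0, 1.0, 0.0]), np.array([0.1, 0.9, 0.3])]

# The word aligns better with the "literal" context than the "figurative" one.
assert literal_usage_score(word, literal_ctx) > literal_usage_score(word, figurative_ctx)
```

In practice, such a similarity score would be thresholded or fed into a classifier, and the context representation could be replaced by anything from a weighted average to a contextual encoder.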