An egocentric video is a long, unedited video that offers a first-person view. The goals of egocentric video summarization are to build a storyboard of the important content and to help viewers grasp the entire video in a short time. Because egocentric video is recorded with a wearable camera, its background changes and moves constantly. Existing methods assume a stable background, so they are not suitable for summarizing egocentric video. Recently, new summarization methods have been proposed for egocentric video, but they take a very long time to run. This thesis therefore proposes a new video summarization method that summarizes egocentric video well at a considerably lower computational cost. Specifically, to obtain important frames that appear frequently, we apply spectral clustering to color histograms and select, as candidate frames, the frames closest to each cluster mean. Next, blur, contrast, and skew scores are computed for the candidate frames, and frames with low scores are removed from the candidate set. Finally, features are extracted from the remaining candidate frames using sparse codes of SIFT descriptors, the frames are divided into clusters, and the final summary is obtained by graph matching. Experimental results show that the proposed method is faster than existing egocentric video summarization methods and selects well-suited frames for the summary. The resulting frames reflect the relationships between clusters while maintaining good quality in terms of blur, contrast, and skew.
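
To make the candidate-selection step concrete, the following is a minimal sketch in Python, assuming OpenCV and scikit-learn are available; the cluster count, histogram bin count, and affinity choice are illustrative assumptions, not the exact settings used in the thesis.

```python
# Sketch: pick candidate frames via spectral clustering of color histograms.
# Assumptions: k, bins, and the nearest-neighbors affinity are placeholders.
import cv2
import numpy as np
from sklearn.cluster import SpectralClustering

def color_histogram(frame, bins=16):
    """Per-channel color histogram, concatenated and L1-normalized."""
    hists = [cv2.calcHist([frame], [c], None, [bins], [0, 256]) for c in range(3)]
    h = np.concatenate(hists).ravel()
    return h / (h.sum() + 1e-8)

def candidate_frames(frames, k=10):
    """Cluster histograms; per cluster, keep the frame closest to the cluster mean."""
    X = np.stack([color_histogram(f) for f in frames])
    labels = SpectralClustering(n_clusters=k,
                                affinity="nearest_neighbors").fit_predict(X)
    candidates = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        mean = X[idx].mean(axis=0)
        candidates.append(int(idx[np.argmin(np.linalg.norm(X[idx] - mean, axis=1))]))
    return sorted(candidates)
```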
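
The quality filter could look like the sketch below. Variance of the Laplacian for blur and RMS contrast are common proxies assumed here rather than the thesis's exact measures; the skew score and the thresholds are likewise placeholders.

```python
# Sketch: drop low-quality candidates by blur and contrast scores.
# Assumptions: metrics and thresholds are illustrative, not the thesis's own.
import cv2
import numpy as np

def blur_score(gray):
    """Variance of the Laplacian: low values indicate a blurry frame."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def contrast_score(gray):
    """RMS contrast: standard deviation of pixel intensities."""
    return float(gray.std())

def filter_candidates(frames, blur_thresh=100.0, contrast_thresh=20.0):
    """Keep indices of frames whose blur and contrast scores clear the thresholds."""
    kept = []
    for i, f in enumerate(frames):
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        if blur_score(gray) >= blur_thresh and contrast_score(gray) >= contrast_thresh:
            kept.append(i)
    return kept
```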
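
For the feature-extraction step, a rough sketch of encoding SIFT descriptors as sparse codes, assuming a dictionary learned offline (for example with scikit-learn's DictionaryLearning); the OMP coder, the sparsity level, and max pooling over keypoints are assumptions, and the graph-matching stage is not shown.

```python
# Sketch: frame features from sparse codes of SIFT descriptors.
# Assumptions: `dictionary` is a precomputed (n_atoms, 128) array with
# unit-norm rows; OMP with 5 nonzero coefficients and max pooling are
# illustrative choices.
import cv2
import numpy as np
from sklearn.decomposition import SparseCoder

sift = cv2.SIFT_create()

def frame_feature(gray, dictionary):
    """Encode a frame's SIFT descriptors against the dictionary, then max-pool."""
    _, desc = sift.detectAndCompute(gray, None)
    if desc is None:
        return np.zeros(dictionary.shape[0])
    coder = SparseCoder(dictionary=dictionary,
                        transform_algorithm="omp",
                        transform_n_nonzero_coefs=5)
    codes = coder.transform(desc.astype(np.float64))
    return np.abs(codes).max(axis=0)  # one fixed-length vector per frame
```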