In clinical environments, sleep staging is an important diagnostic tool. Currently, sleep stagings are created manually by technicians because existing feature representations are insufficient for automated classification. Classifying and segmenting these EEG signals is a non-trivial task due to a combination of high noise levels, copious artifacts, variation across recording equipment, and significant inter- and intra-patient variability. Classical approaches have typically relied on extensive artifact elimination, a mixture of band powers from a series of fixed frequency bands, and time-domain features. To produce a more accurate, fully automated sleep staging, this work introduces a novel Dense Denoised Spectral (DDS) feature representation that exploits the time and frequency modalities to adaptively denoise single-channel EEG recordings. The joint time-frequency structure is composed of spectral and temporal bands that share similar levels of activity. Even under noisy and varied conditions, the joint modality in the data can be found through a combination of median operators, thresholding, sparse approximations, and consensus k-means. From the learned time-frequency segmentation, either low-rank representations can be constructed or the original representation can be denoised using the segments, yielding a better estimate of the features than the fixed temporal-spectral windows used in prior work. The 2D time-frequency structure underlying the DDS features is learned independently for each patient, which allows the DDS features to adapt to individual differences and significantly increases the overall accuracy of the sleep staging.
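To make the described pipeline concrete, below is a minimal Python sketch under several simplifying assumptions: SciPy's spectrogram stands in for the time-frequency transform, a single global soft threshold replaces the sparse-approximation step, and one k-means run replaces the consensus k-means; the function name dds_features and all parameter values are hypothetical, not the authors' implementation.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans


def dds_features(eeg, fs=100, n_segments=8, epoch_sec=30):
    """Rough DDS-style feature sketch for one single-channel EEG recording."""
    # 1. Time-frequency representation (log power spectrogram).
    f, t, S = spectrogram(eeg, fs=fs, nperseg=2 * fs, noverlap=fs)
    log_power = np.log1p(S)

    # 2. Median smoothing along frequency and time to suppress
    #    impulsive artifacts before any structure is estimated.
    smooth = median_filter(log_power, size=(3, 5))

    # 3. Soft threshold at the global median: a simple stand-in for the
    #    thresholding / sparse-approximation steps described above.
    thr = np.median(smooth)
    denoised = np.where(smooth > thr, smooth - thr, 0.0)

    # 4. Group time-frequency bins with similar activity.  A single
    #    k-means run is used here instead of consensus k-means.
    n_f, n_t = denoised.shape
    coords = np.column_stack([
        np.repeat(f / f.max(), n_t),   # normalised frequency of each bin
        np.tile(t / t.max(), n_f),     # normalised time of each bin
        denoised.ravel(),              # denoised activity of each bin
    ])
    km = KMeans(n_clusters=n_segments, n_init=10, random_state=0)
    labels = km.fit_predict(coords).reshape(n_f, n_t)

    # 5. Per 30 s epoch, average the denoised activity inside each
    #    learned segment -> one feature vector per epoch.
    epoch_idx = (t // epoch_sec).astype(int)
    feats = np.zeros((epoch_idx.max() + 1, n_segments))
    for e in range(feats.shape[0]):
        cols = epoch_idx == e
        for s in range(n_segments):
            mask = labels[:, cols] == s
            if mask.any():
                feats[e, s] = denoised[:, cols][mask].mean()
    return feats


# Example: ten minutes of synthetic "EEG" at 100 Hz -> 20 epoch feature vectors.
features = dds_features(np.random.randn(10 * 60 * 100), fs=100)
print(features.shape)   # (20, 8)
```

Because the segmentation is computed from each recording on its own, this sketch mirrors the per-patient adaptivity of the DDS features, even though the individual operators are simplified placeholders.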
Automatic spectral-temporal modality based EEG sleep staging