Journal Article Details
BMC Bioinformatics
Optimal combination of feature selection and classification via local hyperplane based learning strategy
Xiaoping Cheng2  Hongmin Cai2  Yue Zhang1  Bo Xu2  Weifeng Su1 
[1] BNU-HKBU United International College, Hong Kong, China
[2] School of Computer Science & Engineering, South China University of Technology, Guangdong, China
Keywords: HKNN; Local learning; Classification; Local hyperplane; Feature weighting
DOI: 10.1186/s12859-015-0629-6
Received: 2014-11-05; Accepted: 2015-05-29; Published: 2015
【 Abstract 】

Background

Classifying cancers by gene selection is among the most important and challenging procedures in biomedicine. A major challenge is to design an effective method that eliminates irrelevant, redundant, or noisy genes from the classification, while retaining all of the highly discriminative genes.

Results

We propose a gene selection method called local hyperplane-based discriminant analysis (LHDA). LHDA adopts two central ideas: first, it uses local approximation rather than a global measurement; second, it embeds a recently reported classification model, the K-Local Hyperplane Distance Nearest Neighbor (HKNN) classifier, into its discriminator. Through classification accuracy-based iterations, LHDA obtains the feature weight vector and finally extracts the optimal feature subset. The performance of the proposed method is evaluated in extensive experiments on synthetic and real microarray benchmark datasets. Eight classical feature selection methods, four classification models (k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), Support Vector Machine (SVM), and Random Forest), and two popular embedded learning schemes are employed for comparison.
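The hyperplane distance at the heart of HKNN admits a compact statement: a query point is assigned to the class whose K nearest same-class neighbors span the closest local affine hull. Below is a minimal NumPy sketch of that idea; the function names are our own, and the optional regularization term of the original HKNN formulation is omitted, so this illustrates the principle rather than the authors' exact implementation.

import numpy as np

def hyperplane_distance(x, neighbors):
    # Distance from x to the affine hull of `neighbors` (a K x d array):
    # write the hull as mean + span{n_i - mean} and project x onto it
    # by least squares.
    mean = neighbors.mean(axis=0)
    V = (neighbors - mean).T                      # d x K basis matrix
    alpha, *_ = np.linalg.lstsq(V, x - mean, rcond=None)
    return np.linalg.norm(x - mean - V @ alpha)

def hknn_predict(x, X_train, y_train, k=5):
    # Assign x to the class whose local hyperplane lies nearest.
    best_label, best_dist = None, np.inf
    for label in np.unique(y_train):
        Xc = X_train[y_train == label]
        idx = np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]
        d = hyperplane_distance(x, Xc[idx])
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

Because the local hyperplane interpolates between the K neighbors, HKNN behaves like a locally linear classifier and is less brittle than plain KNN when training samples are sparse, which is precisely the regime of microarray data.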

Conclusion

The proposed method yielded performance comparable or superior to that of seven state-of-the-art models. This strong performance demonstrates the advantage of combining feature weighting and model learning in a unified framework, accomplishing both tasks simultaneously.
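To make the unified framework concrete, the sketch below couples a RELIEF-style weight update with local hyperplane projections, in the spirit of the iterations the abstract describes: weight the features, measure how well each feature separates a sample from its nearest same-class ("hit") and different-class ("miss") hyperplanes, and repeat. The update rule, step size, and iteration count are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def project_to_hull(x, P):
    # Least-squares projection of x onto the affine hull of the rows of P.
    m = P.mean(axis=0)
    V = (P - m).T
    a, *_ = np.linalg.lstsq(V, x - m, rcond=None)
    return m + V @ a

def lhda_style_weights(X, y, k=5, n_iter=10, lr=0.1):
    n, d = X.shape
    w = np.ones(d)
    for _ in range(n_iter):
        Xw = X * w                                # apply current weights
        upd = np.zeros(d)
        for i in range(n):
            x = Xw[i]
            same = y == y[i]
            same[i] = False                       # exclude the sample itself
            diff = y != y[i]
            hit = Xw[same][np.argsort(np.linalg.norm(Xw[same] - x, axis=1))[:k]]
            miss = Xw[diff][np.argsort(np.linalg.norm(Xw[diff] - x, axis=1))[:k]]
            # grow the weights of features that separate x from the miss
            # hyperplane more than from the hit hyperplane
            upd += np.abs(x - project_to_hull(x, miss)) \
                 - np.abs(x - project_to_hull(x, hit))
        w = np.maximum(w + lr * upd / n, 0)       # weights stay non-negative
        w /= w.max() + 1e-12
    return w

Features with the largest final weights form the selected subset; in LHDA proper, the iterations are additionally guided by HKNN classification accuracy, which this sketch omits for brevity.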

【 License 】

2015 Cheng et al.

Attachments
Files  Size  Format  View
Fig. 1.  56KB  Image  download
Fig. 2.  100KB  Image  download
Fig. 3.  27KB  Image  download

Metrics
Downloads: 72    Views: 27