Journal Article Details
BMC Bioinformatics
Comparative study of classification algorithms for immunosignaturing data
Muskan Kukreja1  Stephen Albert Johnston1  Phillip Stafford1 
[1] Center for Innovations in Medicine, Biodesign Institute, Arizona State University, Tempe, AZ 85281, USA
Keywords: Naïve Bayes; Classification algorithms; Data mining; Random peptide microarray; Immunosignature
DOI: 10.1186/1471-2105-13-139
Received: 2012-01-20; Accepted: 2012-05-15; Published: 2012
【 Abstract 】

Background

High-throughput technologies such as DNA, RNA, protein, antibody and peptide microarrays are often used to examine differences across drug treatments, diseases, transgenic animals, and others. Typically one trains a classification system by gathering large amounts of probe-level data, selecting informative features, and classifying test samples using a small number of features. As new microarrays are invented, classification systems that worked well for other array types may not be ideal. Expression microarrays, arguably one of the most prevalent array types, have been used for years to help develop classification algorithms. Many biological assumptions are built into classifiers that were designed for these types of data. One of the more problematic assumptions is independence, both at the probe level and at the biological level. Probes for RNA transcripts are designed to bind single transcripts. At the biological level, many genes have dependencies across transcriptional pathways, where co-regulation of transcriptional units may make many genes appear completely dependent. Thus, algorithms that perform well for gene expression data may not be suitable for other technologies with different binding characteristics. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random-sequence peptides. It relies on many-to-many binding of antibodies to the random-sequence peptides: each peptide can bind multiple antibodies, and each antibody can bind multiple peptides. This technology has been shown to be highly reproducible and appears promising for diagnosing a variety of disease states. However, it is not clear which classification algorithm is optimal for analyzing this new type of data.
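The independence assumption discussed above is exactly what makes Naïve Bayes "naïve": the class-conditional likelihood is taken to factorise over features, so the posterior for a sample is just a prior times a product of per-feature densities. A minimal from-scratch Gaussian Naïve Bayes sketch (illustrative only; the toy data and function names are not from the paper) makes the factorisation explicit:

```python
import math
from collections import defaultdict

def fit_gaussian_nb(X, y):
    """Estimate per-class priors and per-feature Gaussian parameters.

    The 'naive' step: each feature is modelled independently within a class,
    so the class-conditional likelihood factorises over features.
    """
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    model, n = {}, len(y)
    for c, rows in by_class.items():
        prior = len(rows) / n
        params = []
        for j in range(len(rows[0])):
            vals = [r[j] for r in rows]
            mu = sum(vals) / len(vals)
            # small variance floor avoids division by zero for constant features
            var = sum((v - mu) ** 2 for v in vals) / len(vals) + 1e-9
            params.append((mu, var))
        model[c] = (prior, params)
    return model

def log_gauss(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def predict(model, x):
    """Pick the class maximising log prior + sum of per-feature log likelihoods."""
    best_c, best_lp = None, float("-inf")
    for c, (prior, params) in model.items():
        lp = math.log(prior) + sum(log_gauss(xj, mu, var)
                                   for xj, (mu, var) in zip(x, params))
        if lp > best_lp:
            best_c, best_lp = c, lp
    return best_c

# Toy example: two "peptide features" whose intensities separate two classes.
X = [[1.0, 5.0], [1.2, 4.8], [0.9, 5.1], [4.0, 1.0], [4.2, 0.9], [3.9, 1.2]]
y = ["disease", "disease", "disease", "control", "control", "control"]
model = fit_gaussian_nb(X, y)
print(predict(model, [1.1, 5.0]))  # disease
print(predict(model, [4.1, 1.1]))  # control
```

Working in log space keeps the product of thousands of per-peptide likelihoods numerically stable, which matters when each array contributes many features.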

Results

We characterized several classification algorithms to analyze immunosignaturing data. We selected several datasets that range from easy to difficult to classify, from simple monoclonal binding to complex binding patterns in asthma patients. We then classified the biological samples using 17 different classification algorithms. Using a wide variety of assessment criteria, we found ‘Naïve Bayes’ far more useful than other widely used methods due to its simplicity, robustness, speed and accuracy.
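Comparisons like the one described above hinge on held-out assessment rather than training accuracy. A minimal leave-one-out evaluation loop, here paired with a simple nearest-centroid classifier standing in for any of the compared methods (all data and names are illustrative, not the authors' pipeline), might look like:

```python
def nearest_centroid_predict(train_X, train_y, x):
    """Predict the class whose mean feature profile (centroid) is closest to x."""
    sums, counts = {}, {}
    for xi, yi in zip(train_X, train_y):
        if yi not in sums:
            sums[yi] = [0.0] * len(xi)
            counts[yi] = 0
        sums[yi] = [s + v for s, v in zip(sums[yi], xi)]
        counts[yi] += 1
    best_c, best_d = None, float("inf")
    for c in sums:
        centroid = [s / counts[c] for s in sums[c]]
        d = sum((a - b) ** 2 for a, b in zip(centroid, x))  # squared Euclidean
        if d < best_d:
            best_c, best_d = c, d
    return best_c

def loo_accuracy(X, y, predict_fn):
    """Leave-one-out: train on all samples but one, test on the held-out one."""
    correct = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        correct += predict_fn(train_X, train_y, X[i]) == y[i]
    return correct / len(X)

X = [[1.0, 5.0], [1.2, 4.8], [0.9, 5.1], [4.0, 1.0], [4.2, 0.9], [3.9, 1.2]]
y = ["case", "case", "case", "control", "control", "control"]
print(loo_accuracy(X, y, nearest_centroid_predict))  # 1.0 on this separable toy set
```

Swapping `predict_fn` lets the same harness score each candidate algorithm on identical splits, which is the kind of like-for-like comparison a multi-classifier study requires; for small sample sizes, leave-one-out keeps every sample in the test rotation.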

Conclusions

The Naïve Bayes algorithm appears to accommodate the complex patterns hidden within multilayered immunosignaturing microarray data due to its fundamental mathematical properties.

【 License 】

2012 Kukreja et al.; licensee BioMed Central Ltd.

【 Preview 】

Attachments:
- 20150117085845403.pdf — 1101 KB, PDF
- Figure 1. — 41 KB, Image

【 Figures 】

Figure 1.
