In this thesis, we study the problem of learning a linear transformation of acoustic feature vectors for speech recognition, in a framework where, apart from the acoustics, additional views are available at training time. We consider a multiview learning approach based on canonical correlation analysis (CCA) to learn linear transformations of the acoustic features that are maximally correlated with linear transformations of the second view. We propose simple approaches for combining information shared across the views with information that is private to the acoustic view. We apply these methods to a specific scenario in which articulatory data is available at training time. Results of phonetic frame classification on data drawn from the University of Wisconsin X-ray Microbeam Database indicate a small but consistent advantage for the multiview approaches that combine shared and private information, compared to the baseline acoustic features or to unsupervised dimensionality reduction using principal component analysis. We then discuss limitations of canonical correlation analysis and possible extensions.
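For reference, the standard CCA objective underlying this approach can be sketched as follows; the notation ($\mathbf{u}$, $\mathbf{v}$, $\Sigma$) is illustrative and not taken from the abstract itself. Given zero-mean views $X$ and $Y$ with covariances $\Sigma_{xx}$, $\Sigma_{yy}$ and cross-covariance $\Sigma_{xy}$, CCA finds projection directions whose projected data are maximally correlated:
\[
  (\mathbf{u}^{*}, \mathbf{v}^{*})
    = \operatorname*{arg\,max}_{\mathbf{u},\,\mathbf{v}}
      \frac{\mathbf{u}^{\top}\Sigma_{xy}\mathbf{v}}
           {\sqrt{\mathbf{u}^{\top}\Sigma_{xx}\mathbf{u}\;
                  \mathbf{v}^{\top}\Sigma_{yy}\mathbf{v}}}.
\]
Subsequent direction pairs maximize the same objective subject to being uncorrelated with the earlier ones, yielding the multidimensional transformations referred to above.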