In the realm of machine learning for text classification, TF·IDF is the most widely used representation for real-valued feature vectors. Unfortunately, it is oblivious to the training class labels and, as a result, scales some features inappropriately. We replace IDF with Bi-Normal Separation (BNS), which was previously found to be excellent at ranking words for feature selection filtering. Empirical evaluation on a benchmark of 237 binary text classification tasks shows substantially better accuracy and F-measure for a Support Vector Machine (SVM) when the BNS-scaled representation is used. A wide variety of other feature scaling methods were found inferior, including binary features. Furthermore, BNS scaling yielded better performance without feature selection, obviating the added complexity of a feature selection phase.
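As a concrete sketch of the idea, BNS weights a term by how differently it is distributed across the two classes: in Forman's feature-ranking work it is defined as |Φ⁻¹(tpr) − Φ⁻¹(fpr)|, where Φ⁻¹ is the inverse Normal CDF, tpr is the fraction of positive documents containing the term, and fpr the fraction of negatives. The function names and the clamping constant below are illustrative assumptions, not the paper's exact implementation:

```python
from statistics import NormalDist

def bns_weight(tp, fp, n_pos, n_neg, eps=0.0005):
    """Bi-Normal Separation weight for one term (illustrative sketch).

    tp / fp: number of positive / negative training docs containing the term;
    n_pos / n_neg: total positive / negative training docs.
    Rates are clamped away from 0 and 1 so the inverse CDF stays finite
    (eps is an assumed smoothing constant, not from the abstract).
    """
    tpr = min(max(tp / n_pos, eps), 1 - eps)
    fpr = min(max(fp / n_neg, eps), 1 - eps)
    inv = NormalDist().inv_cdf
    return abs(inv(tpr) - inv(fpr))

# A term common among positives but rare among negatives gets a large
# weight; a term equally frequent in both classes scores near zero.
w_informative = bns_weight(80, 10, n_pos=100, n_neg=900)
w_indifferent = bns_weight(50, 450, n_pos=100, n_neg=900)  # 50% in both classes
```

In a BNS-scaled document vector, this weight would take the place of the IDF factor, so that label-informative terms are amplified rather than merely rare ones.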