A joint endeavor by leading researchers in the fields of philosophy and electrical engineering, An Introduction to Statistical Learning Theory provides a broad and accessible introduction to the rapidly evolving fields of statistical pattern recognition and statistical learning theory. Exploring topics not often covered in introductory-level books on statistical learning theory, including PAC learning, VC dimension, and simplicity, the authors present readers at the upper-undergraduate and graduate levels with the basic theory behind contemporary machine learning and uniquely suggest that it serves as an excellent framework for philosophical thinking about inductive inference.
Preface.

1. Introduction: Classification, Learning, Features, Applications.
   1.1 Scope. 1.2 Why Machine Learning? 1.3 Some Applications. 1.4 Measurements, Features, and Feature Vectors. 1.5 The Need for Probability. 1.6 Supervised Learning. 1.7 Summary. 1.8 Appendix: Induction. 1.9 Questions. 1.10 References.

2. Probability.
   2.1 Probability of Some Basic Events. 2.2 Probabilities of Compound Events. 2.3 Conditional Probability. 2.4 Drawing Without Replacement. 2.5 A Classic Birthday Problem. 2.6 Random Variables. 2.7 Expected Value. 2.8 Variance. 2.9 Summary. 2.10 Appendix: Interpretations of Probability. 2.11 Questions. 2.12 References.

3. Probability Densities.
   3.1 An Example in Two Dimensions. 3.2 Random Numbers in [0, 1]. 3.3 Density Functions. 3.4 Probability Densities in Higher Dimensions. 3.5 Joint and Conditional Densities. 3.6 Expected Value and Variance. 3.7 Laws of Large Numbers. 3.8 Summary. 3.9 Appendix: Measurability. 3.10 Questions. 3.11 References.

4. The Pattern Recognition Problem.
   4.1 A Simple Example. 4.2 Decision Rules. 4.3 Success Criterion. 4.4 The Best Classifier: Bayes Decision Rule. 4.5 Continuous Features and Densities. 4.6 Summary. 4.7 Appendix: Uncountably Many. 4.8 Questions. 4.9 References.

5. The Optimal Bayes Decision Rule.
   5.1 Bayes Theorem. 5.2 Bayes Decision Rule. 5.3 Optimality and Some Comments. 5.4 An Example. 5.5 Bayes Theorem and Decision Rule with Densities. 5.6 Summary. 5.7 Appendix: Defining Conditional Probability. 5.8 Questions. 5.9 References.

6. Learning from Examples.
   6.1 Lack of Knowledge of Distributions. 6.2 Training Data. 6.3 Assumptions on the Training Data. 6.4 A Brute Force Approach to Learning. 6.5 Curse of Dimensionality, Inductive Bias, and No Free Lunch. 6.6 Summary. 6.7 Appendix: What Sort of Learning? 6.8 Questions. 6.9 References.

7. The Nearest Neighbor Rule.
   7.1 The Nearest Neighbor Rule. 7.2 Performance of the Nearest Neighbor Rule. 7.3 Intuition and Proof Sketch of Performance. 7.4 Using More Neighbors. 7.5 Summary. 7.6 Appendix: When People Use Nearest Neighbor Reasoning. 7.7 Questions. 7.8 References.

8. Kernel Rules.
   8.1 Motivation. 8.2 A Variation on Nearest Neighbor Rules. 8.3 Kernel Rules. 8.4 Universal Consistency of Kernel Rules. 8.5 Potential Functions. 8.6 More General Kernels. 8.7 Summary. 8.8 Appendix: Kernels, Similarity, and Features. 8.9 Questions. 8.10 References.

9. Neural Networks: Perceptrons.
   9.1 Multilayer Feed Forward Networks. 9.2 Neural Networks for Learning and Classification. 9.3 Perceptrons. 9.4 Learning Rule for Perceptrons. 9.5 Representational Capabilities of Perceptrons. 9.6 Summary. 9.7 Appendix: Models of Mind. 9.8 Questions. 9.9 References.

10. Multilayer Networks.
   10.1 Representation Capabilities of Multilayer Networks. 10.2 Learning and Sigmoidal Outputs. 10.3 Training Error and Weight Space. 10.4 Error Minimization by Gradient Descent. 10.5 Backpropagation. 10.6 Derivation of Backpropagation Equations. 10.7 Summary. 10.8 Appendix: Gradient Descent and Reasoning Toward Reflective Equilibrium. 10.9 Questions. 10.10 References.

11. PAC Learning.
   11.1 Class of Decision Rules. 11.2 Best Rule from a Class. 11.3 Probably Approximately Correct Criterion. 11.4 PAC Learning. 11.5 Summary. 11.6 Appendix: Identifying Indiscernibles. 11.7 Questions. 11.8 References.

12. VC Dimension.
   12.1 Approximation and Estimation Errors. 12.2 Shattering. 12.3 VC Dimension. 12.4 Learning Result. 12.5 Some Examples. 12.6 Application to Neural Nets. 12.7 Summary. 12.8 Appendix: VC Dimension and Popper Dimension. 12.9 Questions. 12.10 References.

13. Infinite VC Dimension.
   13.1 A Hierarchy of Classes and Modified PAC Criterion. 13.2 Misfit Versus Complexity Tradeoff. 13.3 Learning Results. 13.4 Inductive Bias and Simplicity. 13.5 Summary. 13.6 Appendix: Uniform Convergence and Universal Consistency. 13.7 Questions. 13.8 References.

14. The Function Estimation Problem.
   14.1 Estimation. 14.2 Success Criterion. 14.3 Best Estimator: Regression Function. 14.4