Optimal aggregation of classifiers in statistical learning
Abstract:
Classification can be considered as nonparametric estimation of sets, where the risk is defined by means of a specific distance between sets associated with misclassification error. It is shown that the rates of convergence of classifiers depend on two parameters: the complexity of the class of candidate sets and the margin parameter. The dependence is explicitly given, indicating that optimal fast rates approaching O(n⁻¹) can be attained, where n is the sample size, and that the proposed classifiers have the property of robustness to the margin. The main result of the paper concerns optimal aggregation of classifiers: we suggest a classifier that automatically adapts both to the complexity and to the margin, and attains the optimal fast rates, up to a logarithmic factor.
Keywords:
classification; statistical learning; aggregation of classifiers; optimal rates; empirical processes; margin; complexity of classes of sets; model selection; convergence; error sets
DOI:
10.1214/aos/1079120131
Year:
2004