On the Generalization Ability of On-Line Learning Algorithms
Abstract:
We obtain risk tail bounds for kernel perceptron algorithms in terms of the spectrum of the empirical kernel matrix. These bounds reveal that the linear hypotheses found via our approach achieve optimal tradeoffs between hinge loss and margin size over the class of all linear functions, an issue left open by previous results. A distinctive feature of our approach is that the key tools for our analysis come from the model of prediction of individual sequences, i.e., a model making no probabilistic assumptions on the source generating the data. In fact, these tools turn out to be so powerful that only very elementary statistical facts are needed to obtain the final risk bounds.
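To make the object of the analysis concrete, the following is a minimal sketch of the classical kernel perceptron that the abstract refers to: a mistake-driven on-line algorithm whose hypothesis is stored in dual (per-example) form against a kernel matrix. The Gaussian kernel, the toy XOR data, and all parameter values are illustrative choices, not taken from the paper, and the paper's specific hypothesis-extraction scheme over the on-line ensemble is not reproduced here.

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    # Gaussian (RBF) kernel; an illustrative choice of kernel
    return np.exp(-gamma * np.sum((x - z) ** 2))

def kernel_perceptron(X, y, epochs=10, gamma=1.0):
    """Kernel perceptron in dual form: alpha[t] counts the mistakes made on example t."""
    n = len(X)
    # empirical kernel matrix K (the object whose spectrum the bounds depend on)
    K = np.array([[rbf_kernel(X[i], X[j], gamma) for j in range(n)] for i in range(n)])
    alpha = np.zeros(n)
    for _ in range(epochs):
        for t in range(n):
            # margin of the current dual hypothesis on example t
            margin = np.sum(alpha * y * K[:, t])
            pred = 1.0 if margin > 0 else -1.0
            if pred != y[t]:
                alpha[t] += 1.0  # mistake-driven update
    return alpha, K

# toy XOR-like data, linearly inseparable in input space
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])
alpha, K = kernel_perceptron(X, y, epochs=20, gamma=2.0)
```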
Keywords:
on-line learning algorithms; independent identically distributed data; kernel perceptron algorithms; empirical kernel matrix; linear hypotheses; pattern recognition; generalisation (artificial intelligence); learning (artificial intelligence); perceptrons; Theoretical or Mathematical. Classification codes: C1230L Learning in AI; C1250 Pattern recognition; C5290 Neural computing techniques; C1230D Neural nets
DOI:
10.1109/TIT.2004.833339
Year:
2004