Improved Risk Tail Bounds for On-Line Algorithms
Abstract:
Tight bounds are derived on the risk of models in the ensemble generated by incremental training of an arbitrary learning algorithm. The result is based on proof techniques that are remarkably different from the standard risk analysis based on uniform convergence arguments, and improves on previous bounds published by the same authors.
Keywords:
learning (artificial intelligence); risk analysis; statistical analysis; statistical learning theory; online algorithms; incremental training; ensemble; arbitrary learning algorithm; proof techniques; risk tail bounds; uniform convergence arguments
DOI:
10.1109/TIT.2007.911292
Year:
2008