Best subset selection, persistence in high-dimensional statistical learning and optimization under l₁ constraint


Author:

Eitan Greenshtein

Abstract:

Let $(Y, X_1, \ldots, X_m)$ be a random vector. It is desired to predict $Y$ based on $(X_1, \ldots, X_m)$. Examples of prediction methods are regression, classification using logistic regression or separating hyperplanes, and so on. We consider the problem of best subset selection, and study it in the context $m = n^{\alpha}$, $\alpha > 1$, where $n$ is the number of observations. We investigate procedures that are based on empirical risk minimization. It is shown that, in common cases, we should aim to find the best subset among those of size of order $o(n/\log(n))$. It is also shown that, in some "asymptotic sense," when assuming a certain sparsity condition, there is no loss in letting $m$ be much larger than $n$, for example $m = n^{\alpha}$, $\alpha > 1$, in comparison to starting with the "best" subset of size smaller than $n$, regardless of the value of $\alpha$. We then study conditions under which empirical risk minimization subject to an $l_1$ constraint yields nearly the best subset. These results extend some recent results obtained by Greenshtein and Ritov. Finally, we present a high-dimensional simulation study of a "boosting type" classification procedure.
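As an informal illustration of the setting the abstract describes (not the paper's own procedure), the sketch below runs $l_1$-constrained empirical risk minimization in its Lagrangian (lasso) form on a problem with $m = n^{\alpha}$, $\alpha > 1$, predictors and a sparse true coefficient vector. The dimensions, noise level, and penalty value are arbitrary illustrative choices; scikit-learn's `Lasso` is used as a stand-in solver.

```python
# Minimal sketch (not the paper's procedure): empirical risk minimization
# under an l1 constraint, via the Lagrangian (lasso) form, in a regime
# where the number of predictors m = n**alpha far exceeds n.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n = 200                     # number of observations
alpha = 1.5                 # illustrative exponent, so m = n**alpha > n
m = int(n ** alpha)         # number of predictors, m >> n

# Sparse ground truth: only k predictors are active, with k small
# relative to n / log(n).
k = 10
beta = np.zeros(m)
beta[:k] = rng.normal(size=k)

X = rng.normal(size=(n, m))
y = X @ beta + rng.normal(scale=0.5, size=n)

# Lasso solves min_b (1/2n)||y - Xb||^2 + lam * ||b||_1, the Lagrangian
# form of least squares subject to an l1-norm constraint.
model = Lasso(alpha=0.1)    # lam = 0.1 is an arbitrary illustrative choice
model.fit(X, y)

selected = np.flatnonzero(model.coef_)
print(f"m = {m}, n = {n}, predictors selected: {selected.size}")
print("true support recovered:", set(range(k)) <= set(selected))
```

Under the sparsity condition in the abstract, the point of the persistence results is that such an $l_1$-constrained fit can remain competitive with the best small subset even though $m$ grows polynomially faster than $n$.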


DOI:

10.1214/009053606000000768

Citations:

93

Year:

2006
