Hit List

Title:

An overview on the shrinkage properties of partial least squares regression

Description:

Linear regression, Biased estimators, Mean squared error

Document Type:

article

Title:

The degrees of freedom of partial least squares regression

Description:

The derivation of statistical properties for Partial Least Squares regression can be a challenging task. The reason is that the construction of latent components from the predictor variables also depends on the response variable. While this typically leads to good performance and interpretable models in practice, it makes the statistical analysis more involved. In this work, we study the intrinsic complexity of Partial Least Squares Regression. Our contribution is an unbiased estimate of its Degrees of Freedom. It is defined as the trace of the first derivative of the fitted values, seen as a function of the response. We establish two equivalent representations that rely on the close connection of Partial Least Squares to matrix decompositions and Krylov subspace techniques. We show that the Degrees of Freedom depend on the collinearity of the predictor variables: The lower the collinearity is, the higher the Degrees of Freedom are. In particular, they are typically higher than the naive approach that defines the Degrees of Freedom as the number of components. Further, we illustrate that the Degrees of Freedom are useful for model selection. Our experiments indicate that the model complexity based on the Degrees of Freedom estimate is lower than the model complexity of the naive approach. In terms of prediction accuracy, both methods obtain the same accuracy as cross-validation.
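The trace-of-the-first-derivative definition lends itself to a direct numerical check. The sketch below is illustrative only (the helper names and the finite-difference scheme are not from the abstract): it fits PLS through its Krylov-subspace representation and estimates the Degrees of Freedom as the trace of the derivative of the fitted values with respect to the response. With k = p components PLS coincides with ordinary least squares, whose Degrees of Freedom equal p.

```python
import numpy as np

def pls_fit(X, y, k):
    # k-component PLS via its Krylov-subspace representation: the
    # coefficient vector is the least-squares solution restricted to
    # span{X'y, (X'X)X'y, ..., (X'X)^(k-1) X'y}.
    s = X.T @ y
    K = np.column_stack([np.linalg.matrix_power(X.T @ X, i) @ s for i in range(k)])
    Q, _ = np.linalg.qr(K)                      # orthonormal basis of the Krylov space
    gamma, *_ = np.linalg.lstsq(X @ Q, y, rcond=None)
    return Q @ gamma                            # coefficients in the original space

def dof_estimate(X, y, k, eps=1e-6):
    # Degrees of Freedom = trace of the derivative of the fitted
    # values, seen as a function of the response; estimated here by
    # perturbing each response entry in turn (finite differences).
    base = X @ pls_fit(X, y, k)
    tr = 0.0
    for i in range(len(y)):
        yp = y.copy()
        yp[i] += eps
        tr += ((X @ pls_fit(X, yp, k))[i] - base[i]) / eps
    return tr

rng = np.random.default_rng(0)
n, p = 50, 5
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)
print(dof_estimate(X, y, p))                    # close to p: full PLS equals OLS
```

The paper's unbiased estimator avoids this brute-force differentiation; the sketch only verifies the definition on a toy problem.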

Publisher:

Berlin : WIAS ; Göttingen : Niedersächsische Staats- und Universitätsbibliothek ; Hannover : Technische Informationsbibliothek u. Universitätsbibliothek

Year of Publication:

2010

Subjects:

31.00

DDC:

519 Probabilities & applied mathematics *(computed)* ; 310 Collections of general statistics *(computed)*


Title:

Comments on: Augmenting the bootstrap to analyze high dimensional genomic data

Document Type:

article


Title:

ASAP

Description:

Automatic inspection of network payloads is a prerequisite for effective analysis of network communication. Security research has largely focused on network analysis using protocol specifications, for example for intrusion detection, fuzz testing and forensic analysis. The specification of a protocol alone, however, is often not sufficient for accurate analysis of communication, as it fails to reflect individual semantics of network applications. We propose a framework for semantics-aware analysis of network payloads which automatically extracts semantic components from recorded network traffic. Our method proceeds by mapping network payloads to a vector space and identifying semantic templates corresponding to base directions in the vector space. We demonstrate the efficacy of semantics-aware analysis in different security applications: automatic discovery of patterns in honeypot data, analysis of malware communication and network intrusion detection.
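The vector-space mapping described in this abstract can be illustrated with a small sketch. This is a generic reconstruction, not the paper's implementation: payloads are embedded as byte-2-gram frequency vectors, and non-negative matrix factorization stands in for the extraction of template directions.

```python
import numpy as np

def ngram_matrix(payloads, n=2):
    # Embed each payload in a common vector space of byte n-gram counts.
    vocab = sorted({p[i:i + n] for p in payloads for i in range(len(p) - n + 1)})
    index = {g: j for j, g in enumerate(vocab)}
    M = np.zeros((len(payloads), len(vocab)))
    for r, p in enumerate(payloads):
        for i in range(len(p) - n + 1):
            M[r, index[p[i:i + n]]] += 1
    return M

def nmf(M, k, iters=200, seed=0):
    # Multiplicative-update NMF: M ~ W @ H, with the rows of H
    # playing the role of template directions in n-gram space.
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.1, 1, (M.shape[0], k))
    H = rng.uniform(0.1, 1, (k, M.shape[1]))
    for _ in range(iters):
        H *= (W.T @ M) / (W.T @ W @ H + 1e-9)
        W *= (M @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

payloads = [b"GET /index.html", b"GET /logo.png", b"POST /login", b"POST /search"]
M = ngram_matrix(payloads)
W, H = nmf(M, k=2)                      # two template directions
```

On this toy corpus the two factor rows tend to separate the GET-like from the POST-like payloads; the paper's applications (honeypot data, malware traffic) operate on the same principle at scale.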

Publisher:

Berlin : WIAS ; Göttingen : Niedersächsische Staats- und Universitätsbibliothek ; Hannover : Technische Informationsbibliothek u. Universitätsbibliothek

Year of Publication:

2010

Subjects:

31.00


Title:

Local models for ramified unitary groups

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2012-11-14

Source:

http://arxiv.org/pdf/math/0302025v1.pdf

Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Title:

Kernel Partial Least Squares is Universally Consistent

Description:

We prove the statistical consistency of kernel Partial Least Squares Regression applied to a bounded regression learning problem on a reproducing kernel Hilbert space. Partial Least Squares stands out from well-known classical approaches such as Ridge Regression or Principal Components Regression, as it is not defined as the solution of a global cost minimization procedure over a fixed model, nor is it a linear estimator. Instead, approximate solutions are constructed by projections onto a nested set of data-dependent subspaces. To prove consistency, we exploit the known fact that Partial Least Squares is equivalent to the conjugate gradient algorithm in combination with early stopping. The choice of the stopping rule (number of iterations) is a crucial point. We study two empirical stopping rules. The first one monitors the estimation error in each iteration step of Partial Least Squares, and the second one estimates the empirical complexity in terms of a condition number. Both stopping rules lead to universally consistent estimators provided the kernel is universal.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2012-11-30

Source:

http://ml.cs.tu-berlin.de/~nkraemer/papers/pls_consistency.pdf

Document Type:

text

Language:

en

DDC:

519 Probabilities & applied mathematics *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Title:

Kernel Conjugate Gradient is Universally Consistent

Description:

We study the statistical consistency of conjugate gradient applied to a bounded regression learning problem seen as an inverse problem defined in a reproducing kernel Hilbert space. This approach leads to an estimator that stands out from the well-known classical approaches, as it is not defined as the solution of a global cost minimization procedure over a fixed model, nor is it a linear estimator. Instead, approximate solutions are constructed by projections onto a nested set of data-dependent subspaces. We study two empirical stopping rules that lead to universally consistent estimators provided the kernel is universal. As conjugate gradient is equivalent to Partial Least Squares, we therefore obtain consistency results for Kernel Partial Least Squares Regression.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2014-10-08

Source:

http://ml.cs.tu-berlin.de/~nkraemer/papers/preprint_cg.pdf

Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Title:

Kernel Conjugate Gradient is Universally Consistent

Description:

We study the statistical consistency of conjugate gradient applied to a bounded regression learning problem seen as an inverse problem defined in a reproducing kernel Hilbert space. This approach leads to an estimator that stands out from the well-known classical approaches, as it is not defined as the solution of a global cost minimization procedure over a fixed model, nor is it a linear estimator. Instead, approximate solutions are constructed by projections onto a nested set of data-dependent subspaces. We study two empirical stopping rules that lead to universally consistent estimators provided the kernel is universal. As conjugate gradient is equivalent to Partial Least Squares, we therefore obtain consistency results for Kernel Partial Least Squares Regression.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2014-10-08

Source:

http://arxiv.org/pdf/0902.4380v1.pdf

Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Title:

On the Peaking Phenomenon of the Lasso in Model Selection

Description:

I briefly report on some unexpected results that I obtained when optimizing the model parameters of the Lasso. In simulations with varying observations-to-variables ratio n/p, I typically observe a strong peak in the test error curve at the transition point n/p = 1. This peaking phenomenon is well-documented in scenarios that involve the inversion of the sample covariance matrix, and as I illustrate in this note, it is also the source of the peak for the Lasso. The key problem is the parametrization of the Lasso penalty – as e.g. in the current R package lars – and I present a solution in terms of a normalized Lasso parameter.
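The claimed source of the peak, the near-singularity of the sample covariance at n/p = 1, is easy to reproduce. The following sketch (illustrative sizes and seed, not from the note) computes the median condition number of a Gaussian design matrix at ratios n/p of 0.5, 1 and 2; the blow-up at the square case is exactly the transition point of the test error curve.

```python
import numpy as np

def median_condition(n, p, trials=20, seed=0):
    # Median condition number of an n x p Gaussian design matrix;
    # large values mean the sample covariance X'X is near-singular.
    rng = np.random.default_rng(seed)
    return float(np.median([np.linalg.cond(rng.standard_normal((n, p)))
                            for _ in range(trials)]))

p = 50
conds = {r: median_condition(int(r * p), p) for r in (0.5, 1.0, 2.0)}
# The condition number spikes at n/p = 1 and is moderate on either
# side, matching the location of the peak reported in the note.
```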

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2012-04-02

Source:

http://ml.cs.tu-berlin.de/~nkraemer/papers/modelselection_peak_lasso.pdf

Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Title:

Optimal learning rates for Kernel Conjugate Gradient regression

Description:

We prove rates of convergence in the statistical sense for kernel-based least squares regression using a conjugate gradient algorithm, where regularization against overfitting is obtained by early stopping. This method is directly related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. The rates depend on two key quantities: first, on the regularity of the target regression function and second, on the effective dimensionality of the data mapped into the kernel space. Lower bounds on attainable rates depending on these two quantities were established in earlier literature, and we obtain upper bounds for the considered method that match these lower bounds (up to a log factor) if the true regression function belongs to the reproducing kernel Hilbert space. If this assumption is not fulfilled, we obtain similar convergence rates provided additional unlabeled data are available. The order of the learning rates matches state-of-the-art results that were recently obtained for least squares support vector machines and for linear regularization operators.
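The second key quantity named in this abstract, the effective dimensionality of the data in kernel space, has a standard closed form, N(λ) = tr(K (K + nλI)^(-1)). The sketch below computes it on toy data (the Gaussian kernel and the λ grid are illustrative choices): stronger regularization shrinks the effective dimension toward zero, weaker regularization lets it grow toward the sample size.

```python
import numpy as np

def effective_dimension(K, lam):
    # N(lambda) = trace( K (K + n*lambda*I)^(-1) ): the effective
    # dimensionality of the data mapped into the kernel space.
    n = K.shape[0]
    return float(np.trace(K @ np.linalg.inv(K + n * lam * np.eye(n))))

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 30)
K = np.exp(-((x[:, None] - x[None, :]) ** 2))   # Gaussian kernel matrix
dims = [effective_dimension(K, lam) for lam in (1e-3, 1e-1, 1e1)]
# dims is decreasing in lambda and bounded by the sample size n = 30.
```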

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2012-06-09

Source:

http://books.nips.cc/papers/files/nips23/NIPS2010_0601.pdf

Document Type:

text

Language:

en

DDC:

518 Numerical analysis *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Currently in BASE: 71,574,616 Documents of 3,436 Content Sources

http://www.base-search.net