
Hit List

Title:

A Minimum Distance Estimator in an Imprecise Probability Model – Computational Aspects and Applications

Description:

The present article considers estimating a parameter θ in an imprecise probability model (P_θ)_{θ∈Θ} which consists of coherent upper previsions P_θ. After the definition of a minimum distance estimator in this setup and a summary of its main properties, the focus lies on applications. It is shown that approximate minimum distances on the discretized sample space can be calculated by linear programming. After a discussion of some computational aspects, the estimator is applied in a simulation study consisting of two different models. Finally, the estimator is applied to a real data set in a linear regression model.
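
The abstract's key computational claim, that approximate minimum distances on a discretized sample space reduce to a linear program, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation: the ε-contamination model around Binomial(2, θ), the sup-norm distance, and the grid search over θ are all assumptions made for the example.

```python
# Sketch (not the paper's algorithm): approximate minimum distance
# estimation in an imprecise probability model via linear programming.
# Illustrative model: an eps-contamination neighbourhood of Binomial(2, theta)
# on the discretized sample space {0, 1, 2}.
import numpy as np
from scipy.optimize import linprog

def binom2(theta):
    """Binomial(2, theta) probabilities on {0, 1, 2}."""
    return np.array([(1 - theta)**2, 2*theta*(1 - theta), theta**2])

def sup_distance(p_hat, lower, upper):
    """Smallest sup-norm distance between the empirical measure p_hat and
    any probability vector q with lower <= q <= upper, as an LP in (q, t)."""
    k = len(p_hat)
    c = np.zeros(k + 1); c[-1] = 1.0                       # minimize t
    # |q_i - p_hat_i| <= t, written as two families of inequalities
    A_ub = np.vstack([np.hstack([np.eye(k), -np.ones((k, 1))]),
                      np.hstack([-np.eye(k), -np.ones((k, 1))])])
    b_ub = np.concatenate([p_hat, -p_hat])
    A_eq = np.hstack([np.ones((1, k)), np.zeros((1, 1))])  # sum q = 1
    b_eq = np.array([1.0])
    bounds = [(lower[i], upper[i]) for i in range(k)] + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.fun

def minimum_distance_estimate(p_hat, thetas, eps=0.05):
    """Grid-search the theta whose credal set is nearest to p_hat."""
    dists = []
    for th in thetas:
        p = binom2(th)
        dists.append(sup_distance(p_hat, (1 - eps)*p, (1 - eps)*p + eps))
    return thetas[int(np.argmin(dists))], dists
```

The LP decides how close the empirical frequencies can come to any distribution inside the credal set of a fixed θ; the estimate is then the grid point with the smallest such distance.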

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2013-08-12

Source:

http://www.sipta.org/isipta09/proceedings/papers/s005.pdf

Document Type:

text

Language:

en

Subjects:

coherent lower previsions ; minimum distance estimator ; empirical measure ; R Project for Statistical Computing

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Title:

Asymptotic Confidence Sets for General Nonparametric Regression and Classification by Regularized Kernel Methods

Description:

Regularized kernel methods such as, e.g., support vector machines and least-squares support vector regression constitute an important class of standard learning algorithms in machine learning. Theoretical investigations concerning asymptotic properties have mainly focused on rates of convergence in recent years, but there are only very few and limited (asymptotic) results on statistical inference so far. As this is a serious limitation for their use in mathematical statistics, the goal is to fill this gap. Based on asymptotic normality of many of these methods [1], a strongly consistent estimator for the unknown covariance matrix of the limiting normal distribution is derived. In this way, we obtain asymptotically correct confidence sets for ψ(f_{P,λ0}), where f_{P,λ0} denotes the minimizer of the regularized risk in the reproducing kernel Hilbert space H and ψ: H → R^m is any Hadamard-differentiable functional. Applications include (multivariate) pointwise confidence sets for values of f_{P,λ0} and confidence sets for gradients, integrals, and norms.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2013-10-06

Source:

http://www.stat.uni-muenchen.de/~mahling/Kolloquium/ss12/120425_Hable.pdf

Document Type:

text

Language:

en

Subjects:

Asymptotic confidence sets ; asymptotic normality ; least-squares support vector regression ; regularized kernel methods ; support vector machines

DDC:

310 Collections of general statistics *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Title:

Data-Based Decisions under Complex Uncertainty

Description:

Decision theory is, in particular in economics, medical expert systems and statistics, an important tool for determining optimal decisions under uncertainty. In view of applications in statistics, the present book is concerned with decision problems which are explicitly data-based. Since the arising uncertainties are often too complex to be described by classical precise probability assessments, concepts of imprecise probabilities (coherent lower previsions, F-probabilities) are applied. Due to the present state of research, some basic groundwork has to be done: Firstly, topological properties of different concepts of imprecise probabilities are investigated. In particular, the concept of coherent lower previsions appears to have advantageous properties for applications in decision theory. Secondly, several decision theoretic tools are developed for imprecise probabilities. These tools are mainly based on concepts developed by L. Le Cam and enable, for example, a definition of sufficiency in the case of imprecise probabilities for the first time. Building on that, the article [A. Buja, Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 65 (1984) 367-384] is reinvestigated in the only recently available framework of imprecise probabilities. This leads to a generalization of results within the Huber-Strassen theory concerning least favorable pairs or models.

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2010-10-13

Source:

http://edoc.ub.uni-muenchen.de/9874/1/Hable_Robert.pdf

Document Type:

text

Language:

en

DDC:

004 Data processing & computer science *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.


Title:

Factors Influencing Households' Firewood Consumption in the Western Pamirs, Tajikistan

Description:

Firewood is a major energy source, especially in many high mountainous regions in developing countries where other energy sources are limited. In the mountainous regions of Tajikistan, current energy consumption is limited owing to geographic isolation and numerous challenges—including in the energy sector—that emerged after the collapse of the Soviet Union and Tajikistan's independence. The sudden disruption of external supplies of energy forced people to rely on locally available but scarce biomass resources, such as firewood and animal dung. We conducted an empirical study to gain an understanding of current household energy consumption in the Western Pamirs of Tajikistan and the factors that influence firewood consumption. For this purpose, we interviewed members of 170 households in 8 villages. We found that, on average, households consumed 355 kg of firewood, 253 kWh of electricity, 760 kg of dung, and 6 kg of coal per month in the winter of 2011–2012. Elevation, size of a household's private garden, and total hours of heating had a positive relationship with firewood consumption, and education level and access to a reliable supply of electricity showed a negative relationship.

Publisher:

International Mountain Society

Contributors:

Faculty of Biology, Chemistry and Earth Sciences, BayCEER, University of Bayreuth, Germany ; Department of Mathematics, University of Bayreuth, Germany ; Regional Program on Sustainable Use of Natural Resources in Central Asia ; State Forest Agency of Gorno Badakhshan Autonomous Oblast, Tajikistan

Year of Publication:

2014

Source:

http://www.bioone.org

Document Type:

Article

Language:

en

Subjects:

energy ; firewood consumption ; dung ; mountainous regions ; Tajikistan ; 333.79

Rights:

The works reproduced on the BioOne site are protected by the general provisions of the French Intellectual Property Code (Code de la propriété intellectuelle).

Relations:

Mountain Research and Development 34(2):147-156, 2014. doi: http://dx.doi.org/10.1659/MRD-JOURNAL-D-13-00113.1


Title:

Asymptotic Confidence Sets for General Nonparametric Regression and Classification by Regularized Kernel Methods

Description:

Regularized kernel methods such as, e.g., support vector machines and least-squares support vector regression constitute an important class of standard learning algorithms in machine learning. Theoretical investigations concerning asymptotic properties have mainly focused on rates of convergence in recent years, but there are only very few and limited (asymptotic) results on statistical inference so far. As this is a serious limitation for their use in mathematical statistics, the goal of the article is to fill this gap. Based on asymptotic normality of many of these methods, the article derives a strongly consistent estimator for the unknown covariance matrix of the limiting normal distribution. In this way, we obtain asymptotically correct confidence sets for $\psi(f_{P,\lambda_0})$ where $f_{P,\lambda_0}$ denotes the minimizer of the regularized risk in the reproducing kernel Hilbert space $H$ and $\psi:H\rightarrow\mathds{R}^m$ is any Hadamard-differentiable functional. Applications include (multivariate) pointwise confidence sets for values of $f_{P,\lambda_0}$ and confidence sets for gradients, integrals, and norms.

Year of Publication:

2012-03-20

Document Type:

text

Subjects:

Statistics - Machine Learning ; 62G08 ; 62G15

DDC:

310 Collections of general statistics *(computed)*


Title:

Practical Tikhonov Regularized Estimators in Reproducing Kernel Hilbert Spaces for Statistical Inverse Problems

Description:

Regularized kernel methods such as support vector machines (SVM) and support vector regression (SVR) constitute a broad and flexible class of methods which are theoretically well investigated and commonly used in nonparametric classification and regression problems. As these methods are based on a Tikhonov regularization which is also common in inverse problems, this article investigates the use of regularized kernel methods for inverse problems in a unifying way. Regularized kernel methods are based on the use of reproducing kernel Hilbert spaces (RKHS), which leads to very good computational properties. It is shown that similar properties remain true in solving statistical inverse problems and that standard software implementations developed for ordinary regression problems can still be used for inverse regression problems. Consistency of these methods and a rate of convergence for the risk are shown under quite weak assumptions, and rates of convergence for the estimator are shown under somewhat stronger assumptions. The applicability of these methods is demonstrated in a simulation.

Year of Publication:

2013-05-06

Document Type:

text

Subjects:

Statistics - Methodology


Title:

Asymptotic Normality of Support Vector Machine Variants and Other Regularized Kernel Methods

Description:

In nonparametric classification and regression problems, regularized kernel methods, in particular support vector machines, attract much attention in theoretical and in applied statistics. In an abstract sense, regularized kernel methods (simply called SVMs here) can be seen as regularized M-estimators for a parameter in a (typically infinite dimensional) reproducing kernel Hilbert space. For smooth loss functions, it is shown that the difference between the estimator, i.e. the empirical SVM, and the theoretical SVM is asymptotically normal with rate $\sqrt{n}$. That is, the standardized difference converges weakly to a Gaussian process in the reproducing kernel Hilbert space. As is common in real applications, the choice of the regularization parameter may depend on the data. The proof is done by an application of the functional delta-method and by showing that the SVM-functional is suitably Hadamard-differentiable.

Year of Publication:

2010-10-04

Document Type:

text

Subjects:

Statistics - Machine Learning ; 62G08 ; 62G20 ; 62M10

DDC:

310 Collections of general statistics *(computed)*


Title:

Data-Based Decisions under Complex Uncertainty

Description:

Decision theory is, in particular in economics, medical expert systems and statistics, an important tool for determining optimal decisions under uncertainty. In view of applications in statistics, the present book is concerned with decision problems which are explicitly data-based. Since the arising uncertainties are often too complex to be described by classical precise probability assessments, concepts of imprecise probabilities (coherent lower previsions, F-probabilities) are applied. Due to the present state of research, some basic groundwork has to be done: Firstly, topological properties of different concepts of imprecise probabilities are investigated. In particular, the concept of coherent lower previsions appears to have advantageous properties for applications in decision theory. Secondly, several decision theoretic tools are developed for imprecise probabilities. These tools are mainly based on concepts developed by L. Le Cam and enable, for example, a definition of sufficiency in case of imprecise probabilities for the first time. Building on that, the article [A. Buja, Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 65 (1984) 367-384] is reinvestigated in the only recently available framework of imprecise probabilities. This leads to a generalization of results within the Huber-Strassen theory concerning least favorable pairs or models. Results obtained by these investigations can also be applied afterwards in order to justify the use of the method of natural extension, which is fundamental within the theory of imprecise probabilities, in data-based decision problems. It is shown by means of the theory of vector lattices that applying the method of natural extension in decision problems does not affect the optimality of decisions. However, it is also shown that, in general, the method of natural extension suffers from a severe instability. 
The book closes with an application in statistics in which a minimum distance estimator is developed for imprecise probabilities. After an investigation concerning its asymptotic properties, an algorithm for calculating the estimator is given which is based on linear programming. This algorithm has led to an implementation of the estimator in the programming language R which is publicly available as the R package "imprProbEst". The applicability of the estimator (even for large sample sizes) is demonstrated in a simulation study.

Publisher:

Ludwig-Maximilians-Universität München

Year of Publication:

2009-02-05

Document Type:

Dissertation ; NonPeerReviewed

Subjects:

Fakultät für Mathematik, Informatik und Statistik

Relations:

http://edoc.ub.uni-muenchen.de/9874/


Title:

Support Vector Machines for Additive Models: Consistency and Robustness

Description:

Support vector machines (SVMs) are special kernel-based methods and have been among the most successful learning methods for more than a decade. SVMs can informally be described as a kind of regularized M-estimator for functions and have demonstrated their usefulness in many complicated real-life problems. In recent years, a great part of the statistical research on SVMs has concentrated on the question of how to design SVMs such that they are universally consistent and statistically robust for nonparametric classification or nonparametric regression purposes. In many applications, some qualitative prior knowledge of the distribution P or of the unknown function f to be estimated is present, or a prediction function with good interpretability is desired, such that a semiparametric model or an additive model is of interest. In this paper we mainly address the question of how to design SVMs by choosing the reproducing kernel Hilbert space (RKHS) or its corresponding kernel to obtain consistent and statistically robust estimators in additive models. We give an explicit construction of kernels - and thus of their RKHSs - which, in combination with a Lipschitz continuous loss function, leads to consistent and statistically robust SVMs for additive models. Examples are quantile regression based on the pinball loss function, regression based on the epsilon-insensitive loss function, and classification based on the hinge loss function.
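
The kernel construction the abstract describes can be illustrated with a standard fact: the RKHS of a sum of coordinate-wise kernels contains additive functions f(x) = f_1(x_1) + ... + f_d(x_d). The sketch below is a toy example, not the paper's construction; the Gaussian base kernel, the data, and the SVR hyperparameters are all assumptions made for illustration.

```python
# Sketch: an additive kernel (sum of one-dimensional Gaussian kernels)
# plugged into a standard epsilon-insensitive SVR solver, matching the
# abstract's regression example.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVR

def additive_rbf(X, Z):
    """k(x, z) = sum_j exp(-(x_j - z_j)^2): one Gaussian kernel per
    coordinate, so the induced RKHS consists of additive functions."""
    return sum(rbf_kernel(X[:, [j]], Z[:, [j]], gamma=1.0)
               for j in range(X.shape[1]))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(2.0 * X[:, 0]) + X[:, 1] ** 2     # a genuinely additive target

svr = SVR(kernel=additive_rbf, C=10.0, epsilon=0.01).fit(X, y)
```

Passing the Gram-matrix callable as `kernel` reuses an off-the-shelf SVR solver unchanged; only the kernel encodes the additive structure.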

Year of Publication:

2010-07-23

Document Type:

text

Subjects:

Statistics - Machine Learning

DDC:

310 Collections of general statistics *(computed)*


Title:

Qualitative Robustness of Support Vector Machines

Description:

Support vector machines have attracted much attention in theoretical and in applied statistics. Main topics of recent interest are consistency, learning rates and robustness. In this article, it is shown that support vector machines are qualitatively robust. Since support vector machines can be represented by a functional on the set of all probability measures, qualitative robustness is proven by showing that this functional is continuous with respect to the topology generated by weak convergence of probability measures. Combined with the existence and uniqueness of support vector machines, our results show that support vector machines are the solutions of a well-posed mathematical problem in Hadamard's sense.

Year of Publication:

2009-12-04

Document Type:

text

Subjects:

Statistics - Machine Learning ; Mathematics - Statistics Theory ; 62G08 ; 62G35


Currently in BASE: 69,696,404 Documents of 3,363 Content Sources

http://www.base-search.net