17 hits in 72,045,933 documents in 0.26 seconds
Search: Robert Hable
Hit List
1.
Asymptotic Confidence Sets for General Nonparametric Regression and Classification by Regularized Kernel Methods
Open Access
Author:
Robert Hable
Description:
Regularized kernel methods such as, e.g., support vector machines and least-squares support vector regression constitute an important class of standard learning algorithms in machine learning. Theoretical investigations concerning asymptotic properties have mainly focused on rates of convergence in recent years, but there are only very few and limited (asymptotic) results on statistical inference so far. As this is a serious limitation for their use in mathematical statistics, the goal is to fill this gap. Based on asymptotic normality of many of these methods [1], a strongly consistent estimator for the unknown covariance matrix of the limiting normal distribution is derived. In this way, we obtain asymptotically correct confidence sets for ψ(f_{P,λ0}), where f_{P,λ0} denotes the minimizer of the regularized risk in the reproducing kernel Hilbert space H and ψ: H → R^m is any Hadamard-differentiable functional. Applications include (multivariate) pointwise confidence sets for values of f_{P,λ0} and confidence sets for gradients, integrals, and norms.
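The construction above can be made concrete for the simplest member of this class, least-squares kernel ridge regression. The sketch below is only an illustration: all data, the kernel, and the tuning parameters are invented, and the variance formula is the generic linear-smoother plug-in rather than the covariance estimator derived in the paper.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian RBF kernel matrix between 1-d sample vectors a and b."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

rng = np.random.default_rng(0)
n = 80
x = rng.uniform(0.0, 6.0, n)
y = np.sin(x) + 0.2 * rng.standard_normal(n)

lam = 1e-2                                   # regularization parameter
K = rbf_kernel(x, x)
M = K + n * lam * np.eye(n)                  # K + n*lambda*I
alpha = np.linalg.solve(M, y)                # representer coefficients

x0 = 1.5                                     # evaluation point (arbitrary)
k0 = rbf_kernel(np.array([x0]), x)[0]
f0 = k0 @ alpha                              # point estimate f_hat(x0)

# f_hat(x0) = w^T y is linear in y, so a plug-in asymptotic 95% CI uses
# Var(f_hat(x0)) ~= sigma^2 * ||w||^2 with w = (K + n*lambda*I)^{-1} k0.
w = np.linalg.solve(M, k0)
sigma2 = np.mean((y - K @ alpha) ** 2)       # residual variance estimate
se = np.sqrt(sigma2 * (w @ w))
lo, hi = f0 - 1.96 * se, f0 + 1.96 * se
print(f"f_hat({x0}) = {f0:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

Since the estimator is linear in y, a standard error follows directly from the weight vector w; the paper's contribution is a consistent covariance estimator covering general Hadamard-differentiable functionals, not just point evaluation.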
Contributors:
The Pennsylvania State University CiteSeerX Archives
Year of Publication:
2013-10-06
Source:
http://www.stat.unimuenchen.de/~mahling/Kolloquium/ss12/120425_Hable.pdf
Document Type:
text
Language:
en
Subjects:
Asymptotic confidence sets ; asymptotic normality ; least-squares support vector regression ; regularized kernel methods ; support vector machines
DDC:
310 Collections of general statistics
(computed)
Rights:
Metadata may be used without restrictions as long as the oai identifier remains attached to it.
URL:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.363.7509
http://www.stat.unimuenchen.de/~mahling/Kolloquium/ss12/120425_Hable.pdf
Content Provider:
CiteSeerX
2.
A Minimum Distance Estimator in an Imprecise Probability Model – Computational Aspects and Applications
Open Access
Author:
Robert Hable
Description:
The present article considers estimating a parameter θ in an imprecise probability model (P_θ)_{θ∈Θ} which consists of coherent upper previsions P_θ. After the definition of a minimum distance estimator in this setup and a summary of its main properties, the focus lies on applications. It is shown that approximate minimum distances on the discretized sample space can be calculated by linear programming. After a discussion of some computational aspects, the estimator is applied in a simulation study consisting of two different models. Finally, the estimator is applied to a real data set in a linear regression model.
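The minimum distance idea on a discretized sample space can be sketched for a precise toy model. The paper's imprecise model with coherent upper previsions requires the linear program described in the abstract; the sketch below instead uses a plain grid search over a Binomial(5, θ) family with invented counts.

```python
import numpy as np
from math import comb

# Observed counts for outcomes 0..5 of a discretized sample space
# (invented data, n = 100 observations).
counts = np.array([8, 26, 34, 23, 8, 1])
emp = counts / counts.sum()                  # empirical measure

def binom_pmf(theta, m=5):
    """pmf of Binomial(m, theta) on {0, ..., m}."""
    k = np.arange(m + 1)
    c = np.array([comb(m, int(i)) for i in k])
    return c * theta ** k * (1.0 - theta) ** (m - k)

# Minimum distance estimator: the theta whose model pmf is closest to
# the empirical measure in total variation distance, over a grid.
grid = np.linspace(0.01, 0.99, 99)
tv = np.array([0.5 * np.abs(emp - binom_pmf(t)).sum() for t in grid])
theta_hat = grid[np.argmin(tv)]
print(f"minimum-TV estimate: theta_hat = {theta_hat:.2f}")
```

For an imprecise model, the inner distance computation becomes a linear program over the credal set, which is what the paper solves.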
Contributors:
The Pennsylvania State University CiteSeerX Archives
Year of Publication:
2013-08-12
Source:
http://www.sipta.org/isipta09/proceedings/papers/s005.pdf
Document Type:
text
Language:
en
Subjects:
coherent lower previsions ; minimum distance estimator ; empirical measure ; R Project for Statistical Computing
Rights:
Metadata may be used without restrictions as long as the oai identifier remains attached to it.
URL:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.324.980
http://www.sipta.org/isipta09/proceedings/papers/s005.pdf
Content Provider:
CiteSeerX
3.
Data-Based Decisions under Complex Uncertainty
Open Access
Author:
Robert Hable
Description:
Decision theory is, in particular in economics, medical expert systems and statistics, an important tool for determining optimal decisions under uncertainty. In view of applications in statistics, the present book is concerned with decision problems which are explicitly data-based. Since the arising uncertainties are often too complex to be described by classical precise probability assessments, concepts of imprecise probabilities (coherent lower previsions, F-probabilities) are applied. Due to the present state of research, some basic groundwork has to be done: Firstly, topological properties of different concepts of imprecise probabilities are investigated. In particular, the concept of coherent lower previsions appears to have advantageous properties for applications in decision theory. Secondly, several decision-theoretic tools are developed for imprecise probabilities. These tools are mainly based on concepts developed by L. Le Cam and enable, for example, a definition of sufficiency in the case of imprecise probabilities for the first time. Building on that, the article [A. Buja, Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 65 (1984) 367-384] is reinvestigated in the only recently available framework of imprecise probabilities. This leads to a generalization of results within the Huber-Strassen theory concerning least favorable pairs or models.
Contributors:
The Pennsylvania State University CiteSeerX Archives
Year of Publication:
2010-10-13
Source:
http://edoc.ub.unimuenchen.de/9874/1/Hable_Robert.pdf
Document Type:
text
Language:
en
DDC:
004 Data processing & computer science
(computed)
Rights:
Metadata may be used without restrictions as long as the oai identifier remains attached to it.
URL:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.152.8601
http://edoc.ub.unimuenchen.de/9874/1/Hable_Robert.pdf
Content Provider:
CiteSeerX
4.
Factors Influencing Households' Firewood Consumption in the Western Pamirs, Tajikistan
Author:
Bunafsha Mislimshoeva ; Robert Hable ; Manuchehr Fezakov ; Cyrus Samimi ; Abdulnazar Abdulnazarov ; Thomas Koellner
Description:
Firewood is a major energy source, especially in many high mountainous regions in developing countries where other energy sources are limited. In the mountainous regions of Tajikistan, current energy consumption is limited owing to geographic isolation and numerous challenges—including in the energy sector—that emerged after the collapse of the Soviet Union and Tajikistan's independence. The sudden disruption of external supplies of energy forced people to rely on locally available but scarce biomass resources, such as firewood and animal dung. We conducted an empirical study to gain an understanding of current household energy consumption in the Western Pamirs of Tajikistan and the factors that influence firewood consumption. For this purpose, we interviewed members of 170 households in 8 villages. We found that, on average, households consumed 355 kg of firewood, 253 kWh of electricity, 760 kg of dung, and 6 kg of coal per month in the winter of 2011–2012. Elevation, size of a household's private garden, and total hours of heating had a positive relationship with firewood consumption, and education level and access to a reliable supply of electricity showed a negative relationship.
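The reported relationships are signs of coefficients in a household regression. As a hedged illustration of how such signs are estimated (the data below are synthetic stand-ins, not the study's survey, and the variable set is simplified):

```python
import numpy as np

# Synthetic stand-in for the household survey (the study interviewed
# 170 households; all variables, coefficients, and noise are invented).
rng = np.random.default_rng(42)
n = 170
elevation = rng.uniform(2000, 3500, n)   # metres above sea level
garden = rng.uniform(0.0, 0.5, n)        # private garden size, ha
heating = rng.uniform(100, 600, n)       # total heating hours
education = rng.integers(8, 17, n)       # years of schooling
# Signs mirror the abstract: positive for elevation, garden, heating;
# negative for education.
firewood = (0.05 * elevation + 200.0 * garden + 0.3 * heating
            - 8.0 * education + 20.0 * rng.standard_normal(n))

# Ordinary least squares fit of firewood use on the covariates.
X = np.column_stack([np.ones(n), elevation, garden, heating, education])
beta, *_ = np.linalg.lstsq(X, firewood, rcond=None)
for name, b in zip(["intercept", "elevation", "garden", "heating",
                    "education"], beta):
    print(f"{name:>10}: {b:+.3f}")
```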
Publisher:
International Mountain Society
Contributors:
Faculty of Biology, Chemistry and Earth Sciences, BayCEER, University of Bayreuth, Germany ; Department of Mathematics, University of Bayreuth, Germany ; Regional Program on Sustainable Use of Natural Resources in Central Asia ; State Forest Agency of Gorno Badakhshan Autonomous Oblast, Tajikistan
Year of Publication:
2014
Source:
http://www.bioone.org
Document Type:
Article
Language:
en
Subjects:
energy ; firewood consumption ; dung ; mountainous regions ; Tajikistan ; 333.79
Rights:
The works reproduced on the BioOne site are protected by the general provisions of the French Intellectual Property Code (Code de la propriété intellectuelle).
Relations:
Mountain Research and Development 34(2):147-156. 2014. doi: http://dx.doi.org/10.1659/MRD-JOURNAL-D-13-00113.1
URL:
http://www.bioone.org/doi/full/10.1659/MRDJOURNALD1300113.1
Content Provider:
Institut de la Montagne, France
5.
Asymptotic Normality of Support Vector Machine Variants and Other Regularized Kernel Methods
Open Access
Author:
Hable, Robert
Description:
In nonparametric classification and regression problems, regularized kernel methods, in particular support vector machines, attract much attention in theoretical and in applied statistics. In an abstract sense, regularized kernel methods (simply called SVMs here) can be seen as regularized M-estimators for a parameter in a (typically infinite-dimensional) reproducing kernel Hilbert space. For smooth loss functions, it is shown that the difference between the estimator, i.e. the empirical SVM, and the theoretical SVM is asymptotically normal with rate $\sqrt{n}$. That is, the standardized difference converges weakly to a Gaussian process in the reproducing kernel Hilbert space. As is common in real applications, the choice of the regularization parameter may depend on the data. The proof is done by an application of the functional delta-method and by showing that the SVM functional is suitably Hadamard-differentiable.
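The "regularized M-estimator" viewpoint can be checked numerically for the least-squares loss: by the representer theorem the empirical SVM has the form f = Σ_i α_i k(·, x_i), and the first-order optimality condition of the regularized empirical risk reduces to the linear system (K + nλI)α = y. A minimal numpy sketch with invented data follows; the asymptotic normality itself is the paper's theoretical result and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 50, 0.1
x = rng.standard_normal(n)
y = np.tanh(x) + 0.1 * rng.standard_normal(n)

# Gram matrix of a Gaussian kernel; its RKHS is the hypothesis space.
K = np.exp(-(x[:, None] - x[None, :]) ** 2)

# Empirical SVM for the least-squares loss: minimize over alpha
#   (1/n) * ||y - K alpha||^2 + lam * alpha^T K alpha,
# whose unique minimizer solves (K + n*lam*I) alpha = y.
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)

# First-order optimality (the M-estimator estimating equation):
# gradient = (2/n) K (K alpha - y) + 2 lam K alpha should vanish.
grad = (2.0 / n) * K @ (K @ alpha - y) + 2.0 * lam * K @ alpha
print("max |gradient| =", np.abs(grad).max())
```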
Year of Publication:
2010-10-04
Document Type:
text
Subjects:
Statistics  Machine Learning ; 62G08 ; 62G20 ; 62M10
DDC:
310 Collections of general statistics
(computed)
URL:
http://arxiv.org/abs/1010.0535
Content Provider:
ArXiv.org (Cornell University Library)
6.
Practical Tikhonov Regularized Estimators in Reproducing Kernel Hilbert Spaces for Statistical Inverse Problems
Open Access
Author:
Hable, Robert
Description:
Regularized kernel methods such as support vector machines (SVM) and support vector regression (SVR) constitute a broad and flexible class of methods which are theoretically well investigated and commonly used in nonparametric classification and regression problems. As these methods are based on a Tikhonov regularization which is also common in inverse problems, this article investigates the use of regularized kernel methods for inverse problems in a unifying way. Regularized kernel methods are based on the use of reproducing kernel Hilbert spaces (RKHS) which lead to very good computational properties. It is shown that similar properties remain true in solving statistical inverse problems and that standard software implementations developed for ordinary regression problems can still be used for inverse regression problems. Consistency of these methods and a rate of convergence for the risk are shown under quite weak assumptions, and rates of convergence for the estimator are shown under somewhat stronger assumptions. The applicability of these methods is demonstrated in a simulation.
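The Tikhonov connection can be illustrated on a discretized linear inverse problem. This is an invented deconvolution example, not the paper's RKHS construction: the regularized estimate solves min_f ||A f - y||^2 + lam * ||f||^2, which has the closed form f_hat = (A^T A + lam I)^{-1} A^T y.

```python
import numpy as np

rng = np.random.default_rng(7)
m = 100
t = np.linspace(0.0, 1.0, m)
f_true = np.sin(2 * np.pi * t)              # signal to recover

# Forward operator: a Gaussian blur, i.e. an ill-conditioned smoother.
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05 ** 2))
A /= A.sum(axis=1, keepdims=True)
y = A @ f_true + 0.01 * rng.standard_normal(m)   # noisy indirect data

# Tikhonov-regularized estimator: argmin ||A f - y||^2 + lam * ||f||^2,
# in closed form f_hat = (A^T A + lam I)^{-1} A^T y.
lam = 1e-3
f_hat = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)

rel_err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Without the lam * I term the system is numerically singular; the regularization trades a small bias for a stable reconstruction, which is the same mechanism the article exploits in the RKHS setting.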
Year of Publication:
2013-05-06
Document Type:
text
Subjects:
Statistics  Methodology
URL:
http://arxiv.org/abs/1305.1137
Content Provider:
ArXiv.org (Cornell University Library)
7.
Asymptotic Confidence Sets for General Nonparametric Regression and Classification by Regularized Kernel Methods
Open Access
Author:
Hable, Robert
Description:
Regularized kernel methods such as, e.g., support vector machines and least-squares support vector regression constitute an important class of standard learning algorithms in machine learning. Theoretical investigations concerning asymptotic properties have mainly focused on rates of convergence in recent years, but there are only very few and limited (asymptotic) results on statistical inference so far. As this is a serious limitation for their use in mathematical statistics, the goal of the article is to fill this gap. Based on asymptotic normality of many of these methods, the article derives a strongly consistent estimator for the unknown covariance matrix of the limiting normal distribution. In this way, we obtain asymptotically correct confidence sets for $\psi(f_{P,\lambda_0})$ where $f_{P,\lambda_0}$ denotes the minimizer of the regularized risk in the reproducing kernel Hilbert space $H$ and $\psi:H\rightarrow\mathds{R}^m$ is any Hadamard-differentiable functional. Applications include (multivariate) pointwise confidence sets for values of $f_{P,\lambda_0}$ and confidence sets for gradients, integrals, and norms.
Year of Publication:
2012-03-20
Document Type:
text
Subjects:
Statistics  Machine Learning ; 62G08 ; 62G15
DDC:
310 Collections of general statistics
(computed)
URL:
http://arxiv.org/abs/1203.4354
Content Provider:
ArXiv.org (Cornell University Library)
8.
Data-Based Decisions under Complex Uncertainty
Author:
Hable, Robert
Description:
Decision theory is, in particular in economics, medical expert systems and statistics, an important tool for determining optimal decisions under uncertainty. In view of applications in statistics, the present book is concerned with decision problems which are explicitly data-based. Since the arising uncertainties are often too complex to be described by classical precise probability assessments, concepts of imprecise probabilities (coherent lower previsions, F-probabilities) are applied. Due to the present state of research, some basic groundwork has to be done: Firstly, topological properties of different concepts of imprecise probabilities are investigated. In particular, the concept of coherent lower previsions appears to have advantageous properties for applications in decision theory. Secondly, several decision-theoretic tools are developed for imprecise probabilities. These tools are mainly based on concepts developed by L. Le Cam and enable, for example, a definition of sufficiency in the case of imprecise probabilities for the first time. Building on that, the article [A. Buja, Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 65 (1984) 367-384] is reinvestigated in the only recently available framework of imprecise probabilities. This leads to a generalization of results within the Huber-Strassen theory concerning least favorable pairs or models. Results obtained by these investigations can also be applied afterwards in order to justify the use of the method of natural extension, which is fundamental within the theory of imprecise probabilities, in data-based decision problems. It is shown by means of the theory of vector lattices that applying the method of natural extension in decision problems does not affect the optimality of decisions. However, it is also shown that, in general, the method of natural extension suffers from a severe instability.
The book closes with an application in statistics in which a minimum distance estimator is developed for imprecise probabilities. After an investigation concerning its asymptotic properties, an algorithm for calculating the estimator is given which is based on linear programming. This algorithm has led to an implementation of the estimator in the programming language R which is publicly available as the R package "imprProbEst". The applicability of the estimator (even for large sample sizes) is demonstrated in a simulation study.
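On a finite possibility space the method of natural extension mentioned above is itself a linear program. A hedged sketch follows, using the textbook LP characterization of the lower natural extension, E(f) = sup{alpha : f >= alpha + sum_i lam_i (g_i - mu_i) pointwise, lam_i >= 0}; the gambles and assessments are invented, and this is not code from the book or its R package.

```python
import numpy as np
from scipy.optimize import linprog

def natural_extension(f, gambles, mu):
    """Lower natural extension of the assessments P(g_i) >= mu_i to a
    gamble f on a finite space Omega:
        sup { alpha : f(w) >= alpha + sum_i lam_i (g_i(w) - mu_i)
                      for all w, with lam_i >= 0 },
    solved as a linear program in (alpha, lam_1, ..., lam_n)."""
    gambles = np.atleast_2d(gambles)        # shape (n_gambles, |Omega|)
    n_g, m = gambles.shape
    c = np.zeros(n_g + 1)
    c[0] = -1.0                             # linprog minimizes -alpha
    A_ub = np.hstack([np.ones((m, 1)), (gambles - mu[:, None]).T])
    res = linprog(c, A_ub=A_ub, b_ub=f,
                  bounds=[(None, None)] + [(0, None)] * n_g)
    return res.x[0]

# Toy example on Omega = {0, 1, 2}: only the lower probability of the
# event {0} is assessed (gamble and assessment invented).
g = np.array([[1.0, 0.0, 0.0]])
mu = np.array([0.3])
print("E(indicator of {0}) =", natural_extension(g[0], g, mu))
print("E(constant 1)       =", natural_extension(np.ones(3), g, mu))
```

The extension reproduces the assessment on the assessed gamble and stays vacuous elsewhere, which is the behavior the book's stability discussion starts from.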
Publisher:
Ludwig-Maximilians-Universität München
Year of Publication:
2009-02-05
Document Type:
Dissertation ; Non-Peer-Reviewed
Subjects:
Fakultät für Mathematik ; Informatik und Statistik
Relations:
http://edoc.ub.unimuenchen.de/9874/
URL:
http://edoc.ub.unimuenchen.de/9874/1/Hable_Robert.pdf
http://nbnresolving.de/urn:nbn:de:bvb:1998740
Content Provider:
University of Munich: Digital theses
9.
Support Vector Machines for Additive Models: Consistency and Robustness
Open Access
Author:
Christmann, Andreas ; Hable, Robert
Description:
Support vector machines (SVMs) are special kernel-based methods and have belonged to the most successful learning methods for more than a decade. SVMs can informally be described as a kind of regularized M-estimator for functions and have demonstrated their usefulness in many complicated real-life problems. In recent years a great part of the statistical research on SVMs has concentrated on the question of how to design SVMs such that they are universally consistent and statistically robust for nonparametric classification or nonparametric regression purposes. In many applications, some qualitative prior knowledge of the distribution P or of the unknown function f to be estimated is present, or a prediction function with good interpretability is desired, such that a semiparametric model or an additive model is of interest. In this paper we mainly address the question of how to design SVMs by choosing the reproducing kernel Hilbert space (RKHS) or its corresponding kernel to obtain consistent and statistically robust estimators in additive models. We give an explicit construction of kernels, and thus of their RKHSs, which leads in combination with a Lipschitz continuous loss function to consistent and statistically robust SVMs for additive models. Examples are quantile regression based on the pinball loss function, regression based on the epsilon-insensitive loss function, and classification based on the hinge loss function.
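The kernel construction can be sketched in its simplest form: summing one kernel per coordinate yields an RKHS of additive functions. Below, a sum of per-coordinate Gaussian kernels is used in a least-squares kernel ridge fit; the paper combines such kernels with Lipschitz losses like the pinball loss, so the least-squares loss, the data, and the parameters here are all invented for illustration.

```python
import numpy as np

def additive_rbf(A, B, gamma=1.0):
    """Additive kernel k(x, x') = sum_j exp(-gamma * (x_j - x'_j)^2).
    Its RKHS consists of additive functions f(x) = sum_j f_j(x_j)."""
    d2 = (A[:, None, :] - B[None, :, :]) ** 2    # (nA, nB, d)
    return np.exp(-gamma * d2).sum(axis=2)

rng = np.random.default_rng(3)
n = 60
X = rng.uniform(-1.0, 1.0, (n, 2))
# An additive target (invented): f(x) = sin(pi x_1) + x_2^2.
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(n)

# Least-squares kernel ridge fit in the additive RKHS.
lam = 1e-3
K = additive_rbf(X, X)
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
mse = np.mean((K @ alpha - y) ** 2)
print(f"training MSE of the additive-kernel fit: {mse:.4f}")
```

Because each summand is a valid kernel, the sum is again symmetric and positive semidefinite, so all standard kernel-method software applies unchanged.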
Year of Publication:
2010-07-23
Document Type:
text
Subjects:
Statistics  Machine Learning
DDC:
310 Collections of general statistics
(computed)
URL:
http://arxiv.org/abs/1007.4062
Content Provider:
ArXiv.org (Cornell University Library)
10.
On the Consistency of the Bootstrap Approach for Support Vector Machines and Related Kernel Based Methods
Open Access
Author:
Christmann, Andreas ; Hable, Robert
Description:
It is shown that bootstrap approximations of support vector machines (SVMs) based on a general convex and smooth loss function and on a general kernel are consistent. This result is useful for approximating the unknown finite-sample distribution of SVMs by the bootstrap approach. ; Comment: 13 pages
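The bootstrap approach itself is easy to sketch for a least-squares regularized kernel method (kernel ridge standing in for the general SVMs covered by the consistency result; data and tuning are invented): resample pairs with replacement, refit, and read off the bootstrap distribution of the prediction at a point.

```python
import numpy as np

def fit_predict(x, y, x0, lam=1e-2, gamma=1.0):
    """Least-squares regularized kernel fit, evaluated at the point x0."""
    n = len(x)
    K = np.exp(-gamma * (x[:, None] - x[None, :]) ** 2)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return np.exp(-gamma * (x0 - x) ** 2) @ alpha

rng = np.random.default_rng(11)
n = 60
x = rng.uniform(0.0, 4.0, n)
y = np.cos(x) + 0.2 * rng.standard_normal(n)
x0 = 2.0

f0 = fit_predict(x, y, x0)
# Nonparametric bootstrap: resample pairs with replacement, refit, and
# collect the bootstrap replicates of the prediction at x0.
B = 200
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = fit_predict(x[idx], y[idx], x0)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"f_hat(x0) = {f0:.3f}, bootstrap 95% interval = [{lo:.3f}, {hi:.3f}]")
```

The paper's contribution is the proof that such bootstrap approximations are consistent for general convex smooth losses, which is what licenses reading the percentile interval as an approximate confidence interval.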
Year of Publication:
2013-01-29
Document Type:
text
Subjects:
Statistics  Machine Learning ; Computer Science  Learning ; 62G08 ; 62G09 ; 62G20 ; 62G86
URL:
http://arxiv.org/abs/1301.6944
Content Provider:
ArXiv.org (Cornell University Library)
Refine Search Result
Author
(11) Hable, Robert
(5) Christmann, Andreas
(4) Robert Hable
(3) The Pennsylvania State University CiteSeerX...
(2) Bishop, Richard W.
(2) Hable, Michael A.
(2) Oliver, Curtis G.
(1) Abdulnazar Abdulnazarov
(1) Abdulnazarov, Abdulnazar
(1) Boulesteix, Anne-Laure
(1) Bunafsha Mislimshoeva
(1) Cyrus Samimi
(1) Department of Mathematics, University of...
(1) Eugster, Manuel J. A.
(1) Faculty of Biology, Chemistry and Earth...
(1) Fezakov, Manuchehr
(1) Gordon, Eleonor F.
(1) Koellner, Thomas
(1) Lauer, Sabine
(1) Manuchehr Fezakov
(1) McKenzie, Robert M.
(1) Mislimshoeva, Bunafsha
(1) Regional Program on Sustainable Use of Natural...
(1) Samimi, Cyrus
(1) State Forest Agency of Gorno Badakhshan...
(1) Sutphin, Joseph B.
(1) Thomas Koellner
(1) Valis, Robert J.
Subject
(6) statistics machine learning
(4) 62g08
(2) 62g20
(2) articles
(2) statistics methodology
(1) 333 79
(1) 500 naturwissenschaften
(1) 62g09
(1) 62g15
(1) 62g35
(1) 62g86
(1) 62m10
(1) asymptotic confidence sets
(1) asymptotic normality
(1) coherent lower previsions
(1) computer science learning
(1) ddc 510
(1) dung
(1) empirical measure
(1) energy
(1) fakultät für mathematik
(1) firewood consumption
(1) informatik und statistik
(1) least squares support vector regression
(1) mathematics statistics theory
(1) minimum distance estimator
(1) mountainous regions
(1) r project for statistical computing
(1) regularized kernel methods
(1) support vector machines references
(1) tajikistan
(1) technische reports
Dewey Decimal Classification (DDC)
(4) Statistics [31*]
(1) Computer science, knowledge & systems [00*]
(1) Science [50*]
Year of Publication
(5) 2013
(3) 2010
(2) 2009
(2) 2014
(1) 2002
(1) 2003
(1) 2011
(1) 2012
Content Provider
(7) ArXiv.org
(3) CiteSeerX
(2) HighWire Press
(1) Inst. de la Montagne
(1) Munich LMU: Digital theses
(1) Munich LMU: Open Access
(1) RePEc.org
(1) Bayreuth Univ.: ERef Bayreuth
Language
(10) Unknown
(7) English
Document Type
(12) Text
(3) Article, Journals
(1) Reports, Papers, Lectures
(1) Theses
Access
(12) Open Access
(5) Unknown
Further result pages
Results: 1 - 2
Currently in BASE: 72,045,933 documents from 3,464 content sources
© 2004-2015 by Bielefeld University Library. Search powered by Solr & VuFind.
http://www.basesearch.net