Hit List

Title:

Parametric Simultaneous Inference Under Test

Author:

Description:

Multiple testing problems occur in many areas of application. Hothorn, Bretz and Westfall (2008) introduced a framework for simultaneous inference in general parametric models, which allows an arbitrary number of null hypotheses to be tested simultaneously while keeping the overall type I error rate below the nominal level α. Each null hypothesis is specified as a linear combination of model parameters. The test procedure is based on the asymptotic or exact distribution of the linear functions set up in the hypotheses, a reference distribution obtained under mild conditions. As neither normality nor homoscedasticity is assumed, the framework allows simultaneous inference in various parametric models such as linear regression and ANOVA models, generalized linear models, Cox proportional hazards models, linear mixed-effects models, and robust linear models. In ANOVA models, multiple comparisons can be considered not only for contrasts of means but for arbitrary contrasts specified by a linear function of the model parameters. In a simulation study, the size and power properties of this test procedure were evaluated.
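The max-type test behind this framework can be sketched in a few lines: under the null, the vector of contrast test statistics is (asymptotically) multivariate normal, and a single critical value is taken from the distribution of its maximum absolute component. A minimal Python sketch, with an illustrative (made-up) 3×3 contrast correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlation of three contrast estimates (illustrative values only)
R = np.array([[1.0, 0.5, 0.5],
              [0.5, 1.0, 0.5],
              [0.5, 0.5, 1.0]])

# Simulate the joint null distribution of the contrast z statistics
draws = rng.multivariate_normal(np.zeros(3), R, size=100_000)
max_abs = np.abs(draws).max(axis=1)

# Simultaneous two-sided critical value at alpha = 0.05:
# reject H_k whenever |z_k| exceeds this single cutoff
crit = np.quantile(max_abs, 0.95)
```

Because the cutoff exploits the correlation between the statistics, it falls between the unadjusted normal quantile 1.96 and the Bonferroni cutoff of about 2.39 for three two-sided tests.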

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2013-10-06

Source:

http://www.stat.uni-muenchen.de/~mahling/Kolloquium/ss09/090618_Herberich.pdf

Document Type:

text

Language:

en

DDC:

310 Collections of general statistics *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Supplementary Material for A re-evaluation of the model selection procedure in Pollet & Nettle (2009)

Author:

Description:

In this paper, we first explain the statistical model underlying the ordinal regression technique used by Pollet and Nettle (2009), including the two possible ways of calculating the likelihood function (section 1). We then show that the model fit criteria reported were in fact invalid, and calculate the correct ones, showing that this leads to a different choice of best model (section 2). We then suggest two other strategies of model selection for these data, and show that these also lead to different best-fitting models than that reported by Pollet and Nettle (2009) (sections 3 and 4). 1 Ordinal regression: the cumulative logit model. The appropriate model for a dependent variable Yi ∈ {1, …, R}, i = 1, …, n, consisting of ranked outcome categories is a cumulative logit model (Agresti, 2002): P(Yi ≤ r | xi) = exp(β0r − xi⊤β) / (1 + exp(β0r − xi⊤β)), r = 1, …, R − 1. The model includes intercepts β0r for each category and a global parameter vector β = (β1, …, βp) for the p covariates. Parameter estimates are obtained by maximum likelihood. The responses are conditionally independent and follow a multinomial distribution, yi | xi ∼ M(1, πi), where yi = (yi1, …, yi,R−1) is the indicator vector with a 1 in the r-th position if and only if Yi = r, and πi = (πi1, …, πi,R−1).

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2010-01-17

Source:

http://cran.r-project.org/web/packages/multcomp/vignettes/chfls1.pdf

Document Type:

text

Language:

en

DDC:

310 Collections of general statistics *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Supplementary Material for A re-evaluation of the model selection procedure in Pollet & Nettle (2009)

Author:

Description:

In this paper, we first explain the statistical model underlying the ordinal regression technique used by Pollet and Nettle (2009), including the two possible ways of calculating the likelihood function (section 1). We then show that the model fit criteria reported were in fact invalid, and calculate the correct ones, showing that this leads to a different choice of best model (section 2). We then suggest two other strategies of model selection for these data, and show that these also lead to different best-fitting models than that reported by Pollet and Nettle (2009) (sections 3 and 4). 1 Ordinal regression: the cumulative logit model. The appropriate model for a dependent variable Yi ∈ {1, …, R}, i = 1, …, n, consisting of ranked outcome categories is a cumulative logit model (Agresti, 2002): P(Yi ≤ r | xi) = exp(β0r − xi⊤β) / (1 + exp(β0r − xi⊤β)), r = 1, …, R − 1. The model includes intercepts β0r for each category and a global parameter vector β = (β1, …, βp) for the p covariates. Parameter estimates are obtained by maximum likelihood. The responses are conditionally independent and follow a multinomial distribution, yi | xi ∼ M(1, πi), where yi = (yi1, …, yi,R−1) is the indicator vector with a 1 in the r-th position if and only if Yi = r, and πi = (πi1, …, πi,R−1).

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2010-04-15

Source:

http://cran.r-project.org/web/packages/multcomp/vignettes/chfls1.pdf

Document Type:

text

Language:

en

DDC:

310 Collections of general statistics *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Supplementary Material for A re-evaluation of the model selection procedure in Pollet & Nettle (2009)

Author:

Description:

In this paper, we first explain the statistical model underlying the ordinal regression technique used by Pollet and Nettle (2009), including the two possible ways of calculating the likelihood function (section 1). We then show that the model fit criteria reported were in fact invalid, and calculate the correct ones, showing that this leads to a different choice of best model (section 2). We then suggest two other strategies of model selection for these data, and show that these also lead to different best-fitting models than that reported by Pollet and Nettle (2009) (sections 3 and 4). 1 Ordinal regression: the cumulative logit model. The appropriate model for a dependent variable Yi ∈ {1, …, R}, i = 1, …, n, consisting of ranked outcome categories is a cumulative logit model (Agresti, 2002): P(Yi ≤ r | xi) = exp(β0r − xi⊤β) / (1 + exp(β0r − xi⊤β)), r = 1, …, R − 1. The model includes intercepts β0r for each category and a global parameter vector β = (β1, …, βp) for the p covariates. Parameter estimates are obtained by maximum likelihood. The responses are conditionally independent and follow a multinomial distribution, yi | xi ∼ M(1, πi), where yi = (yi1, …, yi,R−1) is the indicator vector with a 1 in the r-th position if and only if Yi = r, and πi = (πi1, …, πi,R−1) with πir = P(Yi = r | xi) = P(Yi ≤ r | xi) − P(Yi ≤ r − 1 | xi), r = 1, …, R − 1. The associated likelihood function is L(β01, …, β0,R−1, β; x1, …, xn) = ∏_{i=1}^{n} πi1^{yi1} · πi2^{yi2} ⋯ (1 − πi1 − ⋯ − πi,R−1)^{1 − yi1 − ⋯ − yi,R−1}. To obtain the parameter estimates, the data are often (as by default in SPSS 15.0) pooled into K groups, and the likelihood of the grouped data is maximized instead of the likelihood of the individual data. Group k, k = 1, …, K, includes all hk observations with the value x̃k = (x̃k1, …, x̃kp) of the covariates x = (x1, …, xp). The responses again follow a multinomial distribution.
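The individual-data likelihood described in the abstract can be maximized directly. A minimal Python sketch (the data are simulated; cutpoints, slope, and sample size are made up for illustration) that fits the cumulative logit model by numerically maximizing the multinomial log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Cumulative logit model: P(Y <= r | x) = expit(beta_0r - x * beta)
rng = np.random.default_rng(1)
n, R = 2000, 4
x = rng.normal(size=n)
true_cuts = np.array([-1.0, 0.0, 1.0])   # ordered intercepts beta_0r
true_beta = 0.8

# Draw ordinal responses (categories 0..R-1) from the model
u = rng.uniform(size=n)
cum = expit(true_cuts[None, :] - true_beta * x[:, None])   # n x (R-1)
y = (u[:, None] > cum).sum(axis=1)

def neg_loglik(theta):
    cuts, beta = theta[:R - 1], theta[R - 1]
    cdf = expit(cuts[None, :] - beta * x[:, None])
    # Category probabilities as successive differences of the CDF
    probs = np.diff(np.concatenate(
        [np.zeros((n, 1)), cdf, np.ones((n, 1))], axis=1), axis=1)
    probs = np.clip(probs, 1e-12, 1.0)    # guard against crossed cutpoints
    return -np.log(probs[np.arange(n), y]).sum()

fit = minimize(neg_loglik, x0=np.array([-1.0, 0.0, 1.0, 0.0]),
               method="BFGS")
beta_hat = fit.x[-1]                      # should recover roughly 0.8
```

The grouped-data likelihood mentioned at the end of the abstract differs only by a multinomial coefficient per covariate pattern, so both yield the same parameter estimates.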

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2013-07-24

Source:

http://cran.r-project.org/web/packages/multcomp/vignettes/chfls1.pdf

Document Type:

text

Language:

en

DDC:

310 Collections of general statistics *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

A robust procedure for comparing multiple means under heteroscedasticity in unbalanced designs

Author:

Description:

Investigating differences between means of more than two groups or experimental conditions is a routine research question in biology. To assess such differences statistically, multiple comparison procedures are applied. The most prominent procedures of this type, the Dunnett and Tukey-Kramer tests, control the probability of reporting at least one false positive result when the data are normally distributed and when the sample sizes and variances do not differ between groups. All three assumptions are unrealistic in biological research, and any violation leads to an increased number of reported false positive results. Based on a general statistical framework for simultaneous inference and robust covariance estimators, we propose a new statistical multiple comparison procedure for assessing multiple means. In contrast to the Dunnett or Tukey-Kramer tests, no assumptions regarding the distribution, sample sizes or variance homogeneity are necessary. The performance of the new procedure is assessed by means of its familywise error rate and power under different distributions. The practical merits are demonstrated by a reanalysis of fatty acid phenotypes of the bacterium Bacillus simplex from the 'Evolution Canyons' I and II in Israel. The simulation results show that even under severely varying variances, the procedure controls the number of false positive findings very well. Thus, the procedure presented here works well in such settings.
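The robust covariance ingredient can be illustrated with ordinary least squares: a "sandwich" (heteroscedasticity-consistent, HC0) estimator replaces the classical σ²(X⊤X)⁻¹ by (X⊤X)⁻¹(Σᵢ eᵢ² xᵢxᵢ⊤)(X⊤X)⁻¹. A minimal Python sketch on simulated heteroscedastic data (all numbers made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
# Error variance grows with |x|, so classical OLS standard errors are wrong
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 + np.abs(x), size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat

XtX_inv = np.linalg.inv(X.T @ X)
meat = (X * resid[:, None] ** 2).T @ X    # sum_i e_i^2 x_i x_i'
V_hc0 = XtX_inv @ meat @ XtX_inv          # robust (sandwich) covariance

sigma2 = resid @ resid / (n - 2)
V_ols = sigma2 * XtX_inv                  # classical covariance

se_robust = np.sqrt(np.diag(V_hc0))
se_ols = np.sqrt(np.diag(V_ols))
```

Here the robust standard error of the slope exceeds the classical one, because the error variance is largest exactly where the covariate has the most leverage; plugging such a covariance into the simultaneous test statistics is the idea the abstract describes.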

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2013-09-25

Source:

ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/5c/fd/PLoS_One_2010_Mar_29_5(3)_e9788.tar.gz

Document Type:

text

Language:

en

DDC:

310 Collections of general statistics *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Author:

Description:

In this paper, we first explain the statistical model underlying the ordinal regression technique used by Pollet and Nettle (2009), including the two possible ways of calculating the likelihood function (section 1). We then show that the model fit criteria reported were in fact invalid, and calculate the correct ones, showing that this leads to a different choice of best model (section 2). We then suggest two other strategies of model selection for these data, and show that these also lead to different best-fitting models than that reported by Pollet and Nettle (2009) (sections 3 and 4). 1 Ordinal regression: the cumulative logit model. The appropriate model for a dependent variable Yi ∈ {1, …, R}, i = 1, …, n, consisting of ranked outcome categories is a cumulative logit model (Agresti, 2002): P(Yi ≤ r | xi) = exp(β0r − xi⊤β) / (1 + exp(β0r − xi⊤β)), r = 1, …, R − 1. The model includes intercepts β0r for each category and a global parameter vector β = (β1, …, βp) for the p covariates. Parameter estimates are obtained by maximum likelihood. The responses are conditionally independent and follow a multinomial distribution, yi | xi ∼ M(1, πi), where yi = (yi1, …, yi,R−1) is the indicator vector with a 1 in the r-th position if and only if Yi = r, and πi = (πi1, …, πi,R−1).

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2010-07-25

Source:

http://cran.at.r-project.org/web/packages/multcomp/vignettes/chfls1.pdf

Document Type:

text

Language:

en

DDC:

310 Collections of general statistics *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Author:

Description:

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2012-03-25

Source:

http://cran.r-project.org/web/packages/multcomp/vignettes/chfls1.pdf

Document Type:

text

Language:

en

DDC:

310 Collections of general statistics *(computed)*

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

Author:

Description:

In this paper, we first explain the statistical model underlying the ordinal regression technique used by Pollet and Nettle (2009), including the two possible ways of calculating the likelihood function (section 1). We then show that the model fit criteria reported were in fact invalid, and calculate the correct ones, showing that this leads to a different choice of best model (section 2). We then suggest two other strategies of model selection for these data, and show that these also lead to different best-fitting models than that reported by Pollet and Nettle (2009) (sections 3 and 4).

Contributors:

The Pennsylvania State University CiteSeerX Archives

Year of Publication:

2014-12-03

Source:

http://cran.fhcrc.org/web/packages/multcomp/vignettes/chfls1.pdf

Document Type:

text

Language:

en

Rights:

Metadata may be used without restrictions as long as the oai identifier remains attached to it.

Title:

On the behavior of multiple comparison procedures in complex parametric designs

Description:

The framework for simultaneous inference by Hothorn, Bretz, and Westfall (2008) allows for a unified treatment of multiple comparisons in general parametric models where the study questions are specified as linear combinations of elemental model parameters. However, due to the asymptotic nature of the reference distribution, the procedure controls the error rate across all comparisons only for sufficiently large samples. This thesis evaluates the small-sample properties of simultaneous inference in complex parametric designs. Such designs are necessary to address questions from applied research and include nonstandard parametric models or data in which the assumptions of classical procedures for multiple comparisons are not met. The thesis first treats multiple comparisons of samples with heterogeneous variances. Use of a heteroscedasticity-consistent covariance estimate prevents an increase in the probability of false positive findings for reasonable sample sizes, whereas the classical procedures show liberal or conservative behavior which persists even with increasing sample size. The focus of the second part is multiple comparisons in survival models. Multiple comparisons to a control can be performed in correlated survival data modeled by a frailty Cox model, under control of the familywise error rate, in sample sizes applicable to clinical trials. As a further application, multiple comparisons in survival models can be performed to investigate trends. The procedure achieves good power to detect different dose-response shapes and controls the probability of falsely detecting any trend. The third part addresses multiple comparisons in semiparametric mixed models. Simultaneous inference in the linear mixed model representation of these models yields an approach for multiple comparisons of curves of arbitrary shape; the sections on which curves differ can also be identified. For reasonably large samples, the overall error rate of detecting any non-existent difference is controlled. An extension allows for multiple comparisons of areas under the curve; however, the resulting procedure achieves overall error control only for sample sizes considerably larger than those available in studies in which multiple AUC comparisons are usually performed. The use of the evaluated procedures is illustrated by examples from applied research, including comparisons of fatty acid contents between Bacillus simplex lineages, comparisons of experimental drugs with a control for prolongation of survival in chronic myelogenous leukemia patients, and comparisons of curves describing a morphological structure along the spinal cord between variants of the EphA4 gene in mice.

Publisher:

Ludwig-Maximilians-Universität München

Year of Publication:

2012-10-31

Document Type:

Dissertation ; NonPeerReviewed

Subjects:

Fakultät für Mathematik, Informatik und Statistik

DDC:

310 Collections of general statistics *(computed)*

Relations:

http://edoc.ub.uni-muenchen.de/15226/

Title:

Niveau und Güte simultaner parametrischer Inferenzverfahren

Year of Publication:

2009-01-01

Document Type:

doc-type:masterThesis ; Hochschulschrift ; NonPeerReviewed

Subjects:

Ausgewählte Abschlussarbeiten ; ddc:500

Relations:

http://epub.ub.uni-muenchen.de/11027/1/DA_Herberich.zip ; Herberich, Esther (2009): Niveau und Güte simultaner parametrischer Inferenzverfahren. Diplomarbeit, Ludwig-Maximilians-Universität München

Currently in BASE: 69,696,404 Documents of 3,363 Content Sources

http://www.base-search.net