To **contribute to STROBE-related discussions**, please send your comments by email to the STROBE Initiative. Due to spam attacks, our interactive **Discussion Forum** had to be disabled.

It would be helpful if publications included a reference allowing readers to access a copy of the submission to the Ethics Committee which approved the research.

This would also have an additional benefit: it would indicate whether participants had access to the publication and whether there was an opportunity for their comments to be included in it. As the design of a study cannot be altogether separated from the way it is reported, this seems a good opportunity for STROBE to emphasise the benefits of added verification, validation and feedback from participants, especially in certain types of observational research, such as case studies.

The STROBE group requested feedback, and we therefore call attention to Table 1 (point 16), where the guidelines recommend that authors “Make clear which confounders were adjusted for and why they were included.” We believe this section should be expanded. For example, commonly used criteria for determining whether a variable is a potential confounder are that the variable is 1) associated with the exposure, 2) associated with the outcome, and 3) changes the effect estimate when included in the model. However, authors and readers need to understand two important, related exceptions. First, including a putative confounder that satisfies the above criteria may introduce bias rather than eliminate it if the variable is affected by exposure (whether or not it lies along the causal pathway). Second, conditioning on a variable that is caused by two other variables creates a conditional association between the two causes. This can make a variable appear to be a confounder when it is not; again, including the variable in the model can introduce bias (commonly called selection bias) rather than eliminate it (1-3).

Determining which variables are necessary to obtain an unbiased estimate requires assumptions about the causal structure of the problem; statistical methods alone cannot distinguish a confounding variable from one that introduces selection bias. Therefore, we believe the STROBE recommendations should explicitly ask authors to make all of their assumptions about causal relationships transparent when they report their methods. Causal diagrams (encoded through directed acyclic graphs, or DAGs) are a convenient way to describe these assumptions. Two excellent examples of the potential benefits of causal diagrams are the biased estimates that occur when 1) CD4 counts are included as a covariate for the effects of HIV treatment (4), and 2) birth weight is included as a covariate in studies examining perinatal mortality (a common but incorrect practice) (5).
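The second exception above can be made concrete with a small simulation. The sketch below (in Python, with made-up probabilities chosen only for illustration) generates an exposure X and an outcome Y that are truly independent, plus a collider C caused by both; conditioning on C induces a spurious X–Y association:

```python
import random

random.seed(42)
n = 100_000
rows = []
for _ in range(n):
    x = random.random() < 0.5   # exposure
    y = random.random() < 0.5   # outcome, independent of x (true effect = 0)
    # collider: caused by BOTH exposure and outcome (illustrative probabilities)
    c = random.random() < (0.1 + 0.4 * x + 0.4 * y)
    rows.append((x, y, c))

def risk_diff(data):
    """P(Y=1 | X=1) - P(Y=1 | X=0)."""
    p1 = sum(y for x, y, _ in data if x) / sum(1 for x, _, _ in data if x)
    p0 = sum(y for x, y, _ in data if not x) / sum(1 for x, _, _ in data if not x)
    return p1 - p0

crude = risk_diff(rows)                         # approximately 0: no real effect
among_c = risk_diff([r for r in rows if r[2]])  # "adjusting" for the collider
print(f"crude risk difference:       {crude:+.3f}")
print(f"within collider stratum C=1: {among_c:+.3f}")
```

The crude estimate is near zero, while the estimate restricted to the collider stratum shows a substantial (spurious) negative association, exactly the selection bias described above.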

In summary, reporting causal assumptions would allow the reader to better evaluate whether the underlying assumptions of the statistical model were appropriate, and would minimize the inadvertent introduction of selection bias by researchers who include all covariates associated with outcome and exposure regardless of the causal structure.

References:

(1) Pearl J. Causality. Cambridge: Cambridge University Press, 2007.

(2) Weinberg CR. Toward a clearer definition of confounding. Am J Epidemiol 1993; 137(1):1-8.

(3) Hernan MA, Hernandez-Diaz S, Robins JM. A structural approach to selection bias. Epidemiology 2004; 15(5):615-625.

(4) Hernan MA, Brumback B, Robins JM. Marginal structural models to estimate the causal effect of zidovudine on the survival of HIV-positive men. Epidemiology 2000; 11(5):561-570.

(5) Hernandez-Diaz S, Schisterman EF, Hernan MA. The birth weight "paradox" uncovered? Am J Epidemiol 2006; 164(11):1115-1120.

I have read your STROBE recommendations with great interest. However, there is one major problem concerning all of your advice: many major medical journals allow a maximum of 3000 words or even less per article. It seems quite impossible to comply with all of your recommendations under such limits. Therefore, if at all possible, you should publish a list of priorities.

(This is written by a clinically working oncologist with a great interest in clinical research, but with limited statistical competence.)

First, let me congratulate you on the effort that went into STROBE; this is fantastic. One recommendation I did not see is that authors should report the confidence interval and p-value methods used. For some measures, such as an odds ratio, a confidence interval can be calculated using a variety of methods, such as the Taylor series, Cornfield's, mid-p exact, Fisher exact, and others. The same goes for statistical tests: a 2x2 table, for example, could be analysed with an uncorrected chi-square, corrected chi-square, Mantel-Haenszel chi-square, likelihood-ratio chi-square, mid-p exact, Fisher exact, or other method. Knowing which computer program was used to calculate the estimates and p-values is not sufficient, because many programs provide a variety of values.
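To illustrate why the method matters, here is a sketch of one of the methods named above, the Taylor-series (Woolf) confidence interval for an odds ratio, computed from a hypothetical 2x2 table. Cornfield, mid-p exact, and Fisher exact limits for the same table would differ:

```python
import math

def odds_ratio_woolf_ci(a, b, c, d, alpha_z=1.959963984540054):
    """Odds ratio with a Woolf (Taylor-series, log-scale) 95% CI for a
    2x2 table: a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    This is ONE of several possible methods; others give different limits."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - alpha_z * se_log)
    hi = math.exp(math.log(or_) + alpha_z * se_log)
    return or_, lo, hi

# hypothetical counts, chosen so the method choice is consequential:
or_, lo, hi = odds_ratio_woolf_ci(20, 80, 10, 90)
print(f"OR = {or_:.2f}, 95% CI (Woolf) = ({lo:.2f}, {hi:.2f})")
```

With these counts the Woolf interval just crosses 1, which is precisely the situation where readers need to know which method produced the limits.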

Although the STROBE Statement touches on "multiplicity of analyses" in its Discussion/Interpretation part (Item Number 20), I believe that multiplicity should be addressed in the statistical analysis part as well. At a minimum, the number of hypothesis tests performed and/or confidence intervals constructed should be reported, to give the reader an idea of the amount of multiplicity involved.

However, it would be much more useful and informative to actually report multiplicity-adjusted results (both p-values and confidence intervals).

What I found missing in the guidelines were clear recommendations on how to report weighted results. Quite often in epidemiology, especially in cross-sectional studies, results are weighted for population representativeness. There are several versions of reporting, e.g. for proportions:

- unweighted N and weighted %

- weighted N and weighted %

- unweighted N, weighted N and weighted %.
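The three quantities these variants combine can be sketched as follows (a toy example in Python; the values and weights are invented purely for illustration):

```python
def weighted_summary(values, weights):
    """For a yes/no variable, return the unweighted n, weighted n, and
    weighted % of 'yes' responses -- the three quantities the reporting
    variants above combine in different ways."""
    unweighted_n = sum(values)
    weighted_n = sum(w for v, w in zip(values, weights) if v)
    weighted_pct = 100.0 * weighted_n / sum(weights)
    return unweighted_n, weighted_n, weighted_pct

# hypothetical survey: 1 = has the characteristic; weights rescale
# each respondent to the population they represent
values  = [1, 1, 0, 0, 1]
weights = [2.0, 0.5, 1.0, 1.5, 1.0]
n, wn, wpct = weighted_summary(values, weights)
print(f"unweighted N = {n}, weighted N = {wn:.1f}, weighted % = {wpct:.1f}")
```

Because the unweighted N (3) and the weighted N (3.5) can diverge substantially in real surveys, stating explicitly which of the three reporting variants was used matters for interpretation.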