Decision Sciences: Statistics, Probability and Uncertainty

Meta-analysis and systematic reviews

Description

This cluster of papers focuses on methods and guidelines for conducting systematic reviews, meta-analyses, and evidence synthesis in research. It covers topics such as the PRISMA statement, measuring inconsistency in meta-analyses, assessing risk of bias, detecting publication bias, and interpreting effect sizes. The papers also discuss reporting guidelines, quality assessment tools, and diagnostic accuracy in research.

Keywords

PRISMA Statement; Meta-Analysis; Quality Assessment; Publication Bias; Evidence Synthesis; Reporting Guidelines; Risk of Bias; Diagnostic Accuracy; Research Methodology; Effect Size

Abstract. Background: Researchers performing meta-analysis of continuous outcomes from clinical trials usually need the mean value and the variance (or standard deviation) in order to pool data. However, sometimes the published reports of clinical trials only report the median, the range, and the size of the trial. Methods: In this article we use simple and elementary inequalities and approximations in order to estimate the mean and the variance for such trials. Our estimation is distribution-free, i.e., it makes no assumption about the distribution of the underlying data. Results: We found two simple formulas that estimate the mean using the values of the median (m), the low and high ends of the range (a and b, respectively), and the sample size (n). Using simulations, we show that the median can be used to estimate the mean when the sample size is larger than 25. For smaller samples, the new formula devised in this paper should be used. We also estimated the variance of an unknown sample using the median, the low and high ends of the range, and the sample size. Our estimate performs best in our simulations for very small samples (n ≤ 15). For moderately sized samples (15 < n ≤ 70), our simulations show that the formula range/4 is the best estimator of the standard deviation (variance). For large samples (n > 70), the formula range/6 gives the best estimator of the standard deviation (variance). We also include an illustrative example of the potential value of our method using reports from the Cochrane review on the role of erythropoietin in anemia due to malignancy. Conclusion: Using these formulas, we hope to help meta-analysts use clinical trials in their analysis even when not all of the information is available and/or reported.
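A minimal sketch of these estimators in Python. The sample-size thresholds (25, 15, 70) come from the abstract above; the closed forms for small samples are the ones given in the published paper (mean ≈ (a + 2m + b)/4 and variance ≈ [(a − 2m + b)²/4 + (b − a)²]/12). Function and variable names are illustrative, not from the paper.

```python
import math

def estimate_mean(a: float, m: float, b: float, n: int) -> float:
    """Estimate the sample mean from the median m and range [a, b].

    For n > 25 the median itself is a good estimate; for smaller
    samples the weighted formula (a + 2m + b) / 4 performs better.
    """
    if n > 25:
        return m
    return (a + 2 * m + b) / 4

def estimate_sd(a: float, m: float, b: float, n: int) -> float:
    """Estimate the sample standard deviation from the median and range.

    The thresholds (15, 70) follow the simulation results quoted in
    the abstract; the n <= 15 branch is the paper's small-sample formula.
    """
    if n <= 15:
        variance = ((a - 2 * m + b) ** 2 / 4 + (b - a) ** 2) / 12
        return math.sqrt(variance)
    if n <= 70:
        return (b - a) / 4
    return (b - a) / 6

# Example: a trial reporting median 5.0, range 2.0 to 11.0, n = 12
print(estimate_mean(2.0, 5.0, 11.0, 12))  # (2 + 10 + 11) / 4 = 5.75
print(estimate_sd(2.0, 5.0, 11.0, 12))
```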
The Cochrane Handbook for Systematic Reviews of Interventions is the official document that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions.
Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
An adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations. The test statistic is a direct statistical analogue of the popular "funnel-graph." The number of component studies in the meta-analysis, the nature of the selection mechanism, the range of variances of the effect size estimates, and the true underlying effect size are all observed to be influential in determining the power of the test. The test is fairly powerful for large meta-analyses with 75 component studies, but has only moderate power for meta-analyses with 25 component studies. However, in many of the configurations in which there is low power, there is also relatively little bias in the summary effect size estimate. Nonetheless, the test must be interpreted with caution in small meta-analyses. In particular, bias cannot be ruled out if the test is not significant. The proposed technique has potential utility as an exploratory tool for meta-analysts, and as a formal procedure to complement the funnel-graph.
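A sketch of this adjusted rank correlation (Begg and Mazumdar) test, assuming the standard formulation: each effect size is standardized against the variance-weighted pooled estimate, with the variance corrected for the pooling, and Kendall's tau is then computed between the standardized effects and their variances. Names are illustrative.

```python
import numpy as np
from scipy.stats import kendalltau

def begg_rank_correlation_test(effects, variances):
    """Adjusted rank correlation test for funnel-plot asymmetry.

    effects   : study effect size estimates theta_i
    variances : their sampling variances v_i
    Returns Kendall's tau between the standardized effects and the
    variances, and the associated two-sided p-value.
    """
    theta = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    pooled = np.sum(w * theta) / np.sum(w)       # fixed-effect pooled estimate
    v_star = v - 1.0 / np.sum(w)                 # variance of (theta_i - pooled)
    t_star = (theta - pooled) / np.sqrt(v_star)  # standardized deviations
    tau, p_value = kendalltau(t_star, v)         # rank correlation with variance
    return tau, p_value

# Example with made-up data: smaller studies report larger effects
effects = [0.9, 0.7, 0.5, 0.45, 0.3, 0.25]
variances = [0.25, 0.16, 0.09, 0.05, 0.02, 0.01]
print(begg_rank_correlation_test(effects, variances))
```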
Because of the pressure for timely, informed decisions in public health and clinical practice and the explosion of information in the scientific literature, research results must be synthesized. Meta-analyses are increasingly used to address this problem, and they often evaluate observational studies. A workshop was held in Atlanta, Ga, in April 1997, to examine the reporting of meta-analyses of observational studies and to make recommendations to aid authors, reviewers, editors, and readers. Twenty-seven participants were selected by a steering committee, based on expertise in clinical practice, trials, statistics, epidemiology, social sciences, and biomedical editing. Deliberations of the workshop were open to other interested scientists. Funding for this activity was provided by the Centers for Disease Control and Prevention. We conducted a systematic review of the published literature on the conduct and reporting of meta-analyses in observational studies using MEDLINE, Educational Research Information Center (ERIC), PsycLIT, and the Current Index to Statistics. We also examined reference lists of the 32 studies retrieved and contacted experts in the field. Participants were assigned to small-group discussions on the subjects of bias, searching and abstracting, heterogeneity, study categorization, and statistical methods. From the material presented at the workshop, the authors developed a checklist summarizing recommendations for reporting meta-analyses of observational studies. The checklist and supporting evidence were circulated to all conference attendees and additional experts. All suggestions for revisions were addressed. The proposed checklist contains specifications for reporting of meta-analyses of observational studies in epidemiology, including background, search strategy, methods, results, discussion, and conclusion. Use of the checklist should improve the usefulness of meta-analyses for authors, reviewers, editors, readers, and decision makers. An evaluation plan is suggested and research areas are explored.
Preface. Introduction. Data Sets. Tests of Statistical Significance of Combined Results. Vote-Counting Methods. Estimation of a Single Effect Size: Parametric and Nonparametric Methods. Parametric Estimation of Effect Size from a Series of Experiments. Fitting Parametric Fixed Effect Models to Effect Sizes: Categorical Methods. Fitting Parametric Fixed Effect Models to Effect Sizes: General Linear Models. Random Effects Models for Effect Sizes. Multivariate Models for Effect Sizes. Combining Estimates of Correlation Coefficients. Diagnostic Procedures for Research Synthesis Models. Clustering Estimates of Effect Magnitude. Estimation of Effect Size When Not All Study Outcomes Are Observed. Meta-Analysis in the Physical and Biological Sciences. Appendix. References. Index.
Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. David Moher, PhD; Alessandro Liberati, MD, DrPH; Jennifer Tetzlaff, BSc; and Douglas G. Altman, DSc; for the PRISMA Group. Annals of Internal Medicine, 18 August 2009. https://doi.org/10.7326/0003-4819-151-4-200908180-00135

Editor's Note: In order to encourage dissemination of the PRISMA Statement, this article is freely accessible on the Annals of Internal Medicine Web site (www.annals.org) and will also be published in PLOS Medicine, BMJ, Journal of Clinical Epidemiology, and Open Medicine. The authors jointly hold the copyright of this article. For details on further use, see the PRISMA Web site (www.prisma-statement.org).

Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field (1, 2), and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research (3), and some health care journals are moving in this direction (4). As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports.
In 1987, Mulrow examined 50 review articles published in four leading medical journals in 1985 and 1986 and found that none met all eight explicit scientific criteria, such as a quality assessment of included studies (5). In 1987, Sacks and colleagues (6) evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in six domains. Reporting was generally poor; between one and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement (7).

In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized, controlled trials (8). In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1: Conceptual Issues in the Evolution From QUOROM to PRISMA).

Terminology

The terminology used to describe a systematic review and meta-analysis has evolved over time. One reason for changing the name from QUOROM to PRISMA was the desire to encompass both systematic reviews and meta-analyses. We have adopted the definitions used by the Cochrane Collaboration (9). A systematic review is a review of a clearly formulated question that uses systematic and explicit methods to identify, select, and critically appraise relevant research, and to collect and analyze data from the studies that are included in the review. Statistical methods (meta-analysis) may or may not be used to analyze and summarize the results of the included studies. Meta-analysis refers to the use of statistical techniques in a systematic review to integrate the results of included studies.

Developing the PRISMA Statement

A three-day meeting was held in Ottawa, Ontario, Canada, in June 2005 with 29 participants, including review authors, methodologists, clinicians, medical editors, and a consumer. The objective of the Ottawa meeting was to revise and expand the QUOROM checklist and flow diagram, as needed.

The executive committee completed the following tasks prior to the meeting: a systematic review of studies examining the quality of reporting of systematic reviews, and a comprehensive literature search to identify methodological and other articles that might inform the meeting, especially in relation to modifying checklist items. An international survey of review authors, consumers, and groups commissioning or using systematic reviews and meta-analyses was completed, including the International Network of Agencies for Health Technology Assessment (INAHTA) and the Guidelines International Network (GIN). The survey aimed to ascertain views of QUOROM, including the merits of the existing checklist items. The results of these activities were presented during the meeting and are summarized on the PRISMA Web site (www.prisma-statement.org).

Only items deemed essential were retained or added to the checklist. Some additional items are nevertheless desirable, and review authors should include these, if relevant (10).
For example, it is useful to indicate whether the systematic review is an update (11) of a previous review, and to describe any changes in procedures from those described in the original protocol.

Shortly after the meeting a draft of the PRISMA checklist was circulated to the group, including those invited to the meeting but unable to attend. A disposition file was created containing comments and revisions from each respondent, and the checklist was subsequently revised 11 times. The group approved the checklist, flow diagram, and this summary paper.

Although no direct evidence was found to support retaining or adding some items, evidence from other domains was believed to be relevant. For example, Item 5 asks authors to provide registration information about the systematic review, including a registration number, if available. Although systematic review registration is not yet widely available (12, 13), the participating journals of the International Committee of Medical Journal Editors (ICMJE) (14) now require all clinical trials to be registered in an effort to increase transparency and accountability (15). Those aspects are also likely to benefit systematic reviewers, possibly reducing the risk of an excessive number of reviews addressing the same question (16, 17) and providing greater transparency when updating systematic reviews.

The PRISMA Statement

The PRISMA Statement consists of a 27-item checklist (Table 1: Checklist of Items to Include When Reporting a Systematic Review or Meta-Analysis; a downloadable Word template for researchers to re-use is available as Table S1) and a four-phase flow diagram (Figure 1: Flow of information through the different phases of a systematic review; a downloadable Word template is available as Figure S1). The aim of the PRISMA Statement is to help authors improve the reporting of systematic reviews and meta-analyses. We have focused on randomized trials, but PRISMA can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions. PRISMA may also be useful for critical appraisal of published systematic reviews. However, the PRISMA checklist is not a quality assessment instrument to gauge the quality of a systematic review.

From QUOROM to PRISMA

The new PRISMA checklist differs in several respects from the QUOROM checklist, and the substantive specific changes are highlighted in Table 2 (Substantive Specific Changes Between the QUOROM Checklist and the PRISMA Checklist). Generally, the PRISMA checklist "decouples" several items present in the QUOROM checklist and, where applicable, several checklist items are linked to improve consistency across the systematic review report.

The flow diagram has also been modified. Before including studies and providing reasons for excluding others, the review team must first search the literature. This search results in records. Once these records have been screened and eligibility criteria applied, a smaller number of articles will remain. The number of included articles might be smaller (or larger) than the number of studies, because articles may report on multiple studies and results from a particular study may be published in several articles. To capture this information, the PRISMA flow diagram now requests information on these phases of the review process.

Endorsement

The PRISMA Statement should replace the QUOROM Statement for those journals that have endorsed QUOROM.
We hope that other journals will support PRISMA; they can do so by registering on the PRISMA Web site. To underscore to authors, and others, the importance of transparent reporting of systematic reviews, we encourage supporting journals to reference the PRISMA Statement and include the PRISMA Web address in their instructions to authors. We also invite editorial organizations to consider endorsing PRISMA and encourage authors to adhere to its principles.

The PRISMA Explanation and Elaboration Paper

In addition to the PRISMA Statement, a supporting Explanation and Elaboration document has been produced (18) following the style used for other reporting guidelines (19–21). The process of completing this document included developing a large database of exemplars to highlight how best to report each checklist item, and identifying a comprehensive evidence base to support the inclusion of each checklist item. The Explanation and Elaboration document was completed after several face-to-face meetings and numerous iterations among several meeting participants, after which it was shared with the whole group for additional revisions and final approval. Finally, the group formed a dissemination subcommittee to help disseminate and implement PRISMA.

Discussion

The quality of reporting of systematic reviews is still not optimal (22–27). In a recent review of 300 systematic reviews, few authors reported assessing possible publication bias (22), even though there is overwhelming evidence both for its existence (28) and its impact on the results of systematic reviews (29). Even when the possibility of publication bias is assessed, there is no guarantee that systematic reviewers have assessed or interpreted it appropriately (30). Although the absence of reporting such an assessment does not necessarily indicate that it was not done, reporting an assessment of possible publication bias is likely to be a marker of the thoroughness of the conduct of the systematic review.

Several approaches have been developed to conduct systematic reviews on a broader array of questions. For example, systematic reviews are now conducted to investigate cost-effectiveness (31), diagnostic (32) or prognostic questions (33), genetic associations (34), and policy making (35). The general concepts and topics covered by PRISMA are all relevant to any systematic review, not just those whose objective is to summarize the benefits and harms of a health care intervention. However, some modifications of the checklist items or flow diagram will be necessary in particular circumstances. For example, assessing the risk of bias is a key concept, but the items used to assess this in a diagnostic review are likely to focus on issues such as the spectrum of patients and the verification of disease status, which differ from reviews of interventions. The flow diagram will also need adjustments when reporting individual patient data meta-analysis (36).

We have developed an explanatory document (18) to increase the usefulness of PRISMA. For each checklist item, this document contains an example of good reporting, a rationale for its inclusion, and supporting evidence, including references, whenever possible. We believe this document will also serve as a useful resource for those teaching systematic review methodology. We encourage journals to include reference to the explanatory document in their Instructions to Authors.

Like any evidence-based endeavor, PRISMA is a living document.
To this end we invite readers to comment on the revised version, particularly the new checklist and flow diagram, through the PRISMA Web site. We will use such information to inform PRISMA's continued development.

References

1. Oxman AD, Cook DJ, Guyatt GH. Users' guides to the medical literature. VI. How to use an overview. Evidence-Based Medicine Working Group. JAMA. 1994;272:1367-71. [PMID: 7933399]
2. Swingler GH, Volmink J, Ioannidis JP. Number of published systematic reviews and global burden of disease: database analysis. BMJ. 2003;327:1083-4. [PMID: 14604930]
3. Canadian Institutes of Health Research. Randomized controlled trials registration/application checklist. December 2006. Accessed at www.cihr-irsc.gc.ca/e/documents/rct_reg_e.pdf on 19 May 2009.
4. Young C, Horton R. Putting clinical trials into context. Lancet. 2005;366:107-8. [PMID: 16005318]
5. Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106:485-8. [PMID: 3813259]
6. Sacks HS, Berrier J, Reitman D, Ancona-Berk VA, Chalmers TC. Meta-analyses of randomized controlled trials. N Engl J Med. 1987;316:450-5. [PMID: 3807986]
7. Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mt Sinai J Med. 1996;63:216-24. [PMID: 8692168]
8. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet. 1999;354:1896-900. [PMID: 10584742]
9. Green S, Higgins J, eds. Glossary. In: Cochrane Handbook for Systematic Reviews of Interventions 4.2.5. The Cochrane Collaboration; 2005. Accessed at www.cochrane.org/resources/glossary.htm on 19 May 2009.
10. Strech D, Tilburt J. Value judgments in the analysis and synthesis of evidence. J Clin Epidemiol. 2008;61:521-4. [PMID: 18471654]
11. Moher D, Tsertsvadze A. Systematic reviews: when is an update an update? Lancet. 2006;367:881-3. [PMID: 16546523]
12. University of York Centre for Reviews and Dissemination. 2009. Accessed at www.york.ac.uk/inst/crd/ on 19 May 2009.
13. The Joanna Briggs Institute protocols & work in progress. 2009. Accessed at www.joannabriggs.edu.au/pubs/systematic_reviews_prot.php on 19 May 2009.
14. De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, et al; International Committee of Medical Journal Editors. Clinical trial registration: a statement from the International Committee of Medical Journal Editors [Editorial]. CMAJ. 2004;171:606-7. [PMID: 15367465]
15. Whittington CJ, Kendall T, Fonagy P, Cottrell D, Cotgrove A, Boddington E. Selective serotonin reuptake inhibitors in childhood depression: systematic review of published versus unpublished data. Lancet. 2004;363:1341-5. [PMID: 15110490]
16. Bagshaw SM, McAlister FA, Manns BJ, Ghali WA. Acetylcysteine in the prevention of contrast-induced nephropathy: a case study of the pitfalls in the evolution of evidence. Arch Intern Med. 2006;166:161-6. [PMID: 16432083]
17. Biondi-Zoccai GG, Lotrionte M, Abbate A, Testa L, Remigi E, Burzotta F, et al. Compliance with QUOROM and quality of reporting of overlapping meta-analyses on the role of acetylcysteine in the prevention of contrast associated nephropathy: case study. BMJ. 2006;332:202-9. [PMID: 16415336]
18. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche P, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Ann Intern Med. 2009;151:W-65-94.
19. Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, et al; CONSORT Group (Consolidated Standards of Reporting Trials). The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med. 2001;134:663-94. [PMID: 11304107]
20. Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, et al; Standards for Reporting of Diagnostic Accuracy. The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Ann Intern Med. 2003;138:W1-12. [PMID: 12513067]
21. Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, Pocock SJ, et al; STROBE Initiative. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. Ann Intern Med. 2007;147:W163-94. [PMID: 17938389]
22. Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4:e78. [PMID: 17388659]
23. Bhandari M, Morrow F, Kulkarni AV, Tornetta P. Meta-analyses in orthopaedic surgery. A systematic review of their methodologies. J Bone Joint Surg Am. 2001;83-A:15-24. [PMID: 11205853]
24. Kelly KD, Travers A, Dorgan M, Slater L, Rowe BH. Evaluating the quality of systematic reviews in the emergency medicine literature. Ann Emerg Med. 2001;38:518-26. [PMID: 11679863]
25. Richards D. The quality of systematic reviews in dentistry. Evid Based Dent. 2004;5:17. [PMID: 15238972]
26. Choi PT, Halpern SH, Malik N, Jadad AR, Tramèr MR, Walder B. Examining the evidence in anesthesia literature: a critical appraisal of systematic reviews. Anesth Analg. 2001;92:700-9. [PMID: 11226105]
27. Delaney A, Bagshaw SM, Ferland A, Manns B, Laupland KB, Doig CJ. A systematic evaluation of the quality of meta-analyses in the critical care literature. Crit Care. 2005;9:R575-82. [PMID: 16277721]
28. Dickersin K. Publication bias: recognizing the problem, understanding its origins and scope, and preventing harm. In: Rothstein HR, Sutton AJ, Borenstein M, eds. Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. Chichester, UK: J Wiley; 2005:11-33.
29. Sutton AJ. Evidence concerning the consequences of publication and related biases. In: Rothstein HR, Sutton AJ, Borenstein M, eds. Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. Chichester, UK: J Wiley; 2005:175-92.
30. Lau J, Ioannidis JP, Terrin N, Schmid CH, Olkin I. The case of the misleading funnel plot. BMJ. 2006;333:597-600. [PMID: 16974018]
31. Ladabaum U, Chopra CL, Huang G, Scheiman JM, Chernew ME, Fendrick AM. Aspirin as an adjunct to screening for prevention of sporadic colorectal cancer. A cost-effectiveness analysis. Ann Intern Med. 2001;135:769-81. [PMID: 11694102]
32. Deeks JJ. Systematic reviews in health care: systematic reviews of evaluations of diagnostic and screening tests. BMJ. 2001;323:157-62. [PMID: 11463691]
33. Altman DG. Systematic reviews of evaluations of prognostic variables. BMJ. 2001;323:224-8. [PMID: 11473921]
34. Ioannidis JP, Ntzani EE, Trikalinos TA, Contopoulos-Ioannidis DG. Replication validity of genetic association studies. Nat Genet. 2001;29:306-9. [PMID: 11600885]
35. Lavis J, Davies H, Oxman A, Denis JL, Golden-Biddle K, Ferlie E. Towards systematic reviews that inform health care management and policy-making. J Health Serv Res Policy. 2005;10 Suppl 1:35-48. [PMID: 16053582]
36. Stewart LA, Clarke MJ. Practical methodology of meta-analyses (overviews) using updated individual patient data. Cochrane Working Group. Stat Med. 1995;14:2057-79. [PMID: 8552887]
37. Moja LP, Telaro E, D'Amico R, Moschetti I, Coe L, Liberati A. Assessment of methodological quality of primary studies by systematic reviews: results of the metaquality cross sectional study. BMJ. 2005;330:1053. [PMID: 15817526]
38. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al; GRADE Working Group. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336:924-6. [PMID: 18436948]
39. Schünemann HJ, Jaeschke R, Cook DJ, Bria WF, El-Solh AA, Ernst A, et al; ATS Documents Development and Implementation Committee. An official ATS statement: grading the quality of evidence and strength of recommendations in ATS guidelines and recommendations. Am J Respir Crit Care Med. 2006;174:605-14. [PMID: 16931644]
40. Chan AW, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291:2457-65. [PMID: 15161896]
41. Chan AW, Krleza-Jerić K, Schmid I, Altman DG. Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ. 2004;171:735-40. [PMID: 15451835]
42. Silagy CA, Middleton P, Hopewell S. Publishing protocols of systematic reviews: comparing what was done to what was planned. JAMA. 2002;287:2831-4. [PMID: 12038926]
Statistical guidelines and expert statements are now available to assist in the analysis and reporting of studies in some biomedical disciplines. We present here a more progressive resource for sample-based studies, meta-analyses, and case studies in sports medicine and exercise science. We offer forthright advice on the following controversial or novel issues: using precision of estimation for inferences about population effects in preference to null-hypothesis testing, which is inadequate for assessing clinical or practical importance; justifying sample size via acceptable precision or confidence for clinical decisions rather than via adequate power for statistical significance; showing SD rather than SEM, to better communicate the magnitude of differences in means and nonuniformity of error; avoiding purely nonparametric analyses, which cannot provide inferences about magnitude and are unnecessary; using regression statistics in validity studies, in preference to the impractical and biased limits of agreement; making greater use of qualitative methods to enrich sample-based quantitative projects; and seeking ethics approval for public access to the depersonalized raw data of a study, to address the need for more scrutiny of research and better meta-analyses. Advice on less contentious issues includes the following: using covariates in linear models to adjust for confounders, to account for individual differences, and to identify potential mechanisms of an effect; using log transformation to deal with nonuniformity of effects and error; identifying and deleting outliers; presenting descriptive, effect, and inferential statistics in appropriate formats; and contending with bias arising from problems with sampling, assignment, blinding, measurement error, and researchers' prejudices. This article should advance the field by stimulating debate, promoting innovative approaches, and serving as a useful checklist for authors, reviewers, and editors.
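Two of these recommendations, reporting SD rather than SEM and emphasizing precision of estimation over null-hypothesis testing, can be made concrete with a short sketch. The data and names below are illustrative, not from the article.

```python
import numpy as np
from scipy import stats

# Made-up pre-to-post performance changes for 20 athletes
changes = np.array([1.2, 0.8, 2.1, 1.5, -0.3, 0.9, 1.8, 1.1, 0.4, 1.6,
                    2.3, 0.7, 1.0, 1.4, 0.2, 1.9, 1.3, 0.6, 1.7, 0.5])
n = changes.size
mean = changes.mean()
sd = changes.std(ddof=1)     # SD communicates between-subject spread
sem = sd / np.sqrt(n)        # SEM only communicates precision of the mean

# Precision of estimation: a 95% confidence interval for the mean change,
# reported instead of (or alongside) a bare p-value
t_crit = stats.t.ppf(0.975, df=n - 1)
lo, hi = mean - t_crit * sem, mean + t_crit * sem
print(f"mean change = {mean:.2f}, SD = {sd:.2f} (report SD, not SEM = {sem:.2f})")
print(f"95% CI for the mean change: {lo:.2f} to {hi:.2f}")
```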
During the 2001 CONSORT revision, it became clear that explanation and elaboration of the principles underlying the CONSORT statement would help investigators and others to write or appraise trial reports. The CONSORT explanation and elaboration article (58) was published in 2001 alongside the 2001 version of the CONSORT statement. It discussed the rationale and scientific background for each item and provided published examples of good reporting. The rationale for revising that article is similar to that for revising the statement, described above. We briefly describe below the main additions and deletions to this version of the explanation and elaboration article. The CONSORT 2010 Explanation and Elaboration: Changes. We have made several substantive and some cosmetic changes.
We study recently developed nonparametric methods for estimating the number of missing studies that might exist in a meta-analysis and the effect that these studies might have had on its outcome. These are simple rank-based data augmentation techniques, which formalize the use of funnel plots. We show that they provide effective and relatively powerful tests for evaluating the existence of such publication bias. After adjusting for missing studies, we find that the point estimate of the overall effect size is approximately correct and coverage of the effect size confidence intervals is substantially improved, in many cases recovering the nominal confidence levels entirely. We illustrate the trim and fill method on existing meta-analyses of studies in clinical trials and psychometrics.
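A simplified sketch of the trim-and-fill idea, assuming a fixed-effect model, the L0 estimator of the number of missing studies, and suppression on the left of the funnel plot; the published method also offers an alternative estimator (R0) and random-effects variants, which are omitted here. Names are illustrative.

```python
import numpy as np

def fixed_effect(theta, v):
    """Inverse-variance weighted (fixed-effect) pooled estimate."""
    w = 1.0 / v
    return np.sum(w * theta) / np.sum(w)

def trim_and_fill(theta, v, max_iter=50):
    """Simplified trim-and-fill using the L0 estimator.

    Assumes the suppressed studies lie on the LEFT of the funnel plot,
    so the observed asymmetric extremes are the largest effects. Ties
    in the ranks are ignored for simplicity.
    """
    theta = np.asarray(theta, float)
    v = np.asarray(v, float)
    n = theta.size
    order = np.argsort(theta)                 # ascending effect sizes
    k0 = 0
    for _ in range(max_iter):
        kept = order[: n - k0]                # trim the k0 most extreme studies
        mu = fixed_effect(theta[kept], v[kept])
        d = theta - mu
        ranks = np.argsort(np.argsort(np.abs(d))) + 1
        t_n = ranks[d > 0].sum()              # Wilcoxon statistic of positive deviations
        l0 = (4 * t_n - n * (n + 1)) / (2 * n - 1)
        k0_new = min(max(0, int(round(l0))), n - 2)  # clamp for this sketch
        if k0_new == k0:
            break
        k0 = k0_new
    if k0 == 0:
        return 0, fixed_effect(theta, v)
    # Fill: impute mirror images of the trimmed studies around mu
    mirrored = 2 * mu - theta[order[n - k0:]]
    theta_aug = np.concatenate([theta, mirrored])
    v_aug = np.concatenate([v, v[order[n - k0:]]])
    return k0, fixed_effect(theta_aug, v_aug)

k0, adjusted = trim_and_fill([0.9, 0.7, 0.5, 0.4, 0.3, 0.1],
                             [0.25, 0.16, 0.09, 0.06, 0.04, 0.01])
print(k0, round(adjusted, 3))
```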
Much biomedical research is observational. The reporting of such research is often inadequate, which hampers the assessment of its strengths and weaknesses and of a study's generalizability. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative developed recommendations on what should be included in an accurate and complete report of an observational study. We defined the scope of the recommendations to cover 3 main study designs: cohort, case–control, and cross-sectional studies. We convened a 2-day workshop in September 2004, with methodologists, researchers, and journal editors, to draft a checklist of items. This list was subsequently revised during several meetings of the coordinating group and in e-mail discussions with the larger group of STROBE contributors, taking into account empirical evidence and methodological considerations. The workshop and the subsequent iterative process of consultation and revision resulted in a checklist of 22 items (the STROBE Statement) that relate to the title, abstract, introduction, methods, results, and discussion sections of articles. Eighteen items are common to all 3 study designs and 4 are specific for cohort, case–control, or cross-sectional studies. A detailed Explanation and Elaboration document is published separately and is freely available at http://www.annals.org and on the Web sites of PLoS Medicine and Epidemiology. We hope that the STROBE Statement will contribute to improving the quality of reporting of observational studies.
Flaws in the design, conduct, analysis, and reporting of randomised trials can cause the effect of an intervention to be underestimated or overestimated. The Cochrane Collaboration's tool for assessing risk of bias aims to make the process clearer and more accurate.
In 2003, the QUADAS tool for systematic reviews of diagnostic accuracy studies was developed. Experience, anecdotal reports, and feedback suggested areas for improvement; therefore, QUADAS-2 was developed. This tool comprises 4 domains: patient selection, index test, reference standard, and flow and timing. Each domain is assessed in terms of risk of bias, and the first 3 domains are also assessed in terms of concerns regarding applicability. Signalling questions are included to help judge risk of bias. The QUADAS-2 tool is applied in 4 phases: summarize the review question, tailor the tool and produce review-specific guidance, construct a flow diagram for the primary study, and judge bias and applicability. This tool will allow for more transparent rating of bias and applicability of primary diagnostic accuracy studies.
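The tool's two-level structure (risk-of-bias judgments for all four domains, applicability concerns for the first three) can be captured in a small data structure. This is a hypothetical encoding for illustration, not an official QUADAS-2 artifact.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DomainAssessment:
    # judgments are typically "low", "high", or "unclear"
    risk_of_bias: str
    # applicability is judged only for the first three domains
    applicability_concern: Optional[str] = None

@dataclass
class Quadas2Assessment:
    """QUADAS-2 ratings for one primary diagnostic accuracy study."""
    patient_selection: DomainAssessment
    index_test: DomainAssessment
    reference_standard: DomainAssessment
    flow_and_timing: DomainAssessment  # risk of bias only

study = Quadas2Assessment(
    patient_selection=DomainAssessment("low", "low"),
    index_test=DomainAssessment("unclear", "low"),
    reference_standard=DomainAssessment("high", "unclear"),
    flow_and_timing=DomainAssessment("low"),
)
print(study.reference_standard.risk_of_bias)  # "high"
```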
The CONSORT statement is used worldwide to improve the reporting of randomised controlled trials. Kenneth Schulz and colleagues describe the latest version, CONSORT 2010, which updates the reporting guideline based on new methodological evidence and accumulating experience. To encourage dissemination of the CONSORT 2010 Statement, this article is freely accessible on bmj.com and will also be published in the Lancet, Obstetrics and Gynecology, PLoS Medicine, Annals of Internal Medicine, Open Medicine, Journal of Clinical Epidemiology, BMC Medicine, and Trials.
The aim of this paper is to distinguish the integrative review method from other review methods and to propose methodological strategies specific to the integrative review method to enhance the rigour of the process. Recent evidence-based practice initiatives have increased the need for and the production of all types of reviews of the literature (integrative reviews, systematic reviews, meta-analyses, and qualitative reviews). The integrative review method is the only approach that allows for the combination of diverse methodologies (for example, experimental and non-experimental research), and has the potential to play a greater role in evidence-based practice for nursing. With respect to the integrative review method, strategies to enhance data collection and extraction have been developed; however, methods of analysis, synthesis, and conclusion drawing remain poorly formulated. A modified framework for research reviews is presented to address issues specific to the integrative review method. Issues related to specifying the review purpose, searching the literature, evaluating data from primary sources, analysing data, and presenting the results are discussed. Data analysis methods of qualitative research are proposed as strategies that enhance the rigour of combining diverse methodologies as well as empirical and theoretical sources in an integrative review. An updated integrative review method has the potential to allow for diverse primary research methods to become a greater part of evidence-based practice initiatives.
OBJECTIVE: To test the feasibility of creating a valid and reliable checklist with the following features: appropriate for assessing both randomised and non-randomised studies; provision of both an overall score for study quality and a profile of scores not only for the quality of reporting, internal validity (bias and confounding) and power, but also for external validity. DESIGN: A pilot version was first developed, based on epidemiological principles, reviews, and existing checklists for randomised studies. Face and content validity were assessed by three experienced reviewers and reliability was determined using two raters assessing 10 randomised and 10 non-randomised studies. Using different raters, the checklist was revised and tested for internal consistency (Kuder-Richardson 20), test-retest and inter-rater reliability (Spearman correlation coefficient and sign rank test; kappa statistics), criterion validity, and respondent burden. MAIN RESULTS: The performance of the checklist improved considerably after revision of a pilot version. The Quality Index had high internal consistency (KR-20: 0.89), as did the subscales apart from external validity (KR-20: 0.54). Test-retest (r = 0.88) and inter-rater (r = 0.75) reliability of the Quality Index were good. Reliability of the subscales varied from good (bias) to poor (external validity). The Quality Index correlated highly with an existing, established instrument for assessing randomised studies (r = 0.90). There was little difference between its performance with non-randomised and with randomised studies. Raters took about 20 minutes to assess each paper (range 10 to 45 minutes). CONCLUSIONS: This study has shown that it is feasible to develop a checklist that can be used to assess the methodological quality not only of randomised controlled trials but also non-randomised studies. It has also shown that it is possible to produce a checklist that provides a profile of the paper, alerting reviewers to its particular methodological strengths and weaknesses. Further work is required to improve the checklist and the training of raters in the assessment of external validity.
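The internal-consistency statistic quoted above, Kuder-Richardson 20, is computed from binary item scores as KR-20 = k/(k − 1) × (1 − Σ p_j q_j / σ²_total), where p_j is the proportion scoring 1 on item j and q_j = 1 − p_j. A minimal sketch with hypothetical data (the variance convention varies between sources; sample variance is used here):

```python
import numpy as np

def kr20(scores: np.ndarray) -> float:
    """Kuder-Richardson 20 internal consistency for binary checklist items.

    scores : (n_papers, n_items) array of 0/1 item scores.
    """
    scores = np.asarray(scores, float)
    n, k = scores.shape
    p = scores.mean(axis=0)                      # proportion scoring 1 per item
    item_var_sum = np.sum(p * (1 - p))
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical checklist scores: 6 papers rated on 8 binary items
demo = np.array([
    [1, 1, 0, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 0, 0],
    [1, 1, 0, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1, 1, 0],
])
print(round(kr20(demo), 2))
```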
Cochrane Reviews have recently started including the quantity I² to help readers assess the consistency of the results of studies in meta-analyses. What does this new quantity mean, and why is assessment of heterogeneity so important to clinical practice? Systematic reviews and meta-analyses can provide convincing and reliable evidence relevant to many aspects of medicine and health care (1). Their value is especially clear when the results of the studies they include show clinically important effects of similar magnitude. However, the conclusions are less clear when the included studies have differing results. In an attempt to establish whether studies are consistent, reports of meta-analyses commonly present a statistical test of heterogeneity. The test seeks to determine whether there are genuine differences underlying the results of the studies (heterogeneity), or whether the variation in findings is compatible with chance alone (homogeneity). However, the test is susceptible to the number of trials included in the meta-analysis. We have developed a new quantity, I², which we believe gives a better measure of the consistency between trials in a meta-analysis. Assessment of the consistency of effects across studies is an essential part of meta-analysis. Unless we know how consistent the results of studies are, we cannot determine the generalisability of the findings of the meta-analysis. Indeed, several hierarchical systems for grading evidence state that the results of studies must be consistent or homogeneous to obtain the highest grading (2–4). Tests for heterogeneity are commonly used to decide on methods for combining studies and for concluding consistency or inconsistency of findings (5, 6). But what does the test achieve in practice, and how should the resulting P values be interpreted? A test for heterogeneity examines the null hypothesis that all studies are evaluating the same effect. The usual test statistic …
Abstract: The extent of heterogeneity in a meta-analysis partly determines the difficulty in drawing overall conclusions. This extent may be measured by estimating a between-study variance, but interpretation is then specific to a particular treatment effect metric. A test for the existence of heterogeneity exists, but depends on the number of studies in the meta-analysis. We develop measures of the impact of heterogeneity on a meta-analysis, from mathematical criteria, that are independent of the number of studies and the treatment effect metric. We derive and propose three suitable statistics: H is the square root of the χ² heterogeneity statistic divided by its degrees of freedom; R is the ratio of the standard error of the underlying mean from a random effects meta-analysis to the standard error of a fixed effect meta-analytic estimate; and I² is a transformation of H that describes the proportion of total variation in study estimates that is due to heterogeneity. We discuss interpretation, interval estimates and other properties of these measures and examine them in five example data sets showing different amounts of heterogeneity. We conclude that H and I², which can usually be calculated for published meta-analyses, are particularly useful summaries of the impact of heterogeneity. One or both should be presented in published meta-analyses in preference to the test for heterogeneity. Copyright © 2002 John Wiley & Sons, Ltd.
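A short sketch computing Q, H, and I² from study effects and variances, following the definitions above: H = sqrt(Q/df) and I² = (Q − df)/Q, floored at zero when Q < df. Variable names are illustrative.

```python
import numpy as np

def heterogeneity_measures(effects, variances):
    """Cochran's Q, Higgins and Thompson's H, and I-squared.

    Q  = sum of w_i * (theta_i - mu_hat)^2, fixed-effect weights w_i = 1/v_i
    H  = sqrt(Q / df), with df = k - 1
    I2 = (Q - df) / Q, the proportion of total variation in study
         estimates that is due to heterogeneity (truncated at 0).
    """
    theta = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v
    mu_hat = np.sum(w * theta) / np.sum(w)   # fixed-effect pooled estimate
    q = np.sum(w * (theta - mu_hat) ** 2)
    df = theta.size - 1
    h = np.sqrt(q / df)
    i2 = max(0.0, (q - df) / q)
    return q, h, i2

q, h, i2 = heterogeneity_measures([0.2, 0.5, 0.8, 0.4, 0.6],
                                  [0.04, 0.06, 0.05, 0.03, 0.08])
print(f"Q = {q:.2f}, H = {h:.2f}, I^2 = {100 * i2:.0f}%")
```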
Systematic reviews and meta-analyses are essential to summarise evidence relating to efficacy and safety of healthcare interventions accurately and reliably. The clarity and transparency of these reports, however, are not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (quality of reporting of meta-analysis) statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realising these issues, an international group that included experienced authors and methodologists developed PRISMA (preferred reporting items for systematic reviews and meta-analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions. The PRISMA statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this explanation and elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA statement, this document, and the associated website (www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.
The metafor package provides functions for conducting meta-analyses in R. The package includes functions for fitting the meta-analytic fixed- and random-effects models and allows for the inclusion of moderator variables (study-level covariates) in these models. Meta-regression analyses with continuous and categorical moderators can be conducted in this way. Functions for the Mantel-Haenszel and Peto's one-step method for meta-analyses of 2 × 2 table data are also available. Finally, the package provides various plot functions (for example, for forest, funnel, and radial plots) and functions for assessing the model fit, for obtaining case diagnostics, and for tests of publication bias.
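For illustration, a brief metafor session using the package's bundled BCG vaccine dataset (`dat.bcg`); `escalc`, `rma`, `forest`, and `funnel` are part of the package's documented interface:

```r
library(metafor)

# Compute log relative risks and sampling variances from 2x2 table counts
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)

# Random-effects model (REML estimate of the between-study variance)
res <- rma(yi, vi, data = dat, method = "REML")
summary(res)

# Meta-regression with a study-level moderator (absolute latitude)
res_mod <- rma(yi, vi, mods = ~ ablat, data = dat)

# Standard meta-analytic plots
forest(res)
funnel(res)
```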
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
Structured summary (PRISMA item 2): provide a structured summary including, as applicable: background; objectives; data sources; study eligibility criteria, participants, and interventions; study appraisal and synthesis methods; results; limitations; conclusions and implications of key findings; and the systematic review registration number. [Flow diagram: flow of information through the different phases of a systematic review, recording the number of records identified through database searching; additional records identified through other sources; records after duplicates removed; studies included in qualitative synthesis; and studies included in quantitative synthesis (meta-analysis).]
Abstract Objective: Funnel plots (plots of effect estimates against sample size) may be useful to detect bias in meta-analyses. We examined whether a simple test of asymmetry of funnel plots predicts discordance of results when meta-analyses are compared to large trials, and we assessed the prevalence of bias in published meta-analyses. Design: Medline search to identify pairs consisting of a meta-analysis and a single large trial (concordance of results was assumed if effects were in the same direction and the meta-analytic estimate was within 30% of the trial); analysis of funnel plots from 37 meta-analyses identified from a hand search of four leading general medicine journals 1993-6 and 38 meta-analyses from the second 1996 issue of the Cochrane Database of Systematic Reviews. Main outcome measure: Degree of funnel plot asymmetry as measured by the intercept from regression of standard normal deviates against precision. Results: In the eight pairs of meta-analysis and large trial that were identified (five from cardiovascular medicine, one from diabetic medicine, one from geriatric medicine, one from perinatal medicine) there were four concordant and four discordant pairs. In all cases discordance was due to meta-analyses showing larger effects. Funnel plot asymmetry was present in three out of four discordant pairs but in none of the concordant pairs. In 14 (38%) journal meta-analyses and 5 (13%) Cochrane reviews, funnel plot asymmetry indicated that there was bias. Conclusions: A simple analysis of funnel plots provides a useful test for the likely presence of bias in meta-analyses, but as the capacity to detect bias will be limited when meta-analyses are based on a limited number of small trials, the results from such analyses should be treated with considerable caution. Key messages: Systematic reviews of randomised trials are the best strategy for appraising evidence; however, the findings of some meta-analyses were later contradicted by large trials. Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases. Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials. Funnel plot asymmetry was found in 38% of meta-analyses published in leading general medicine journals and in 13% of reviews from the Cochrane Database of Systematic Reviews. Critical examination of systematic reviews for publication and related biases should be considered a routine procedure.
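The asymmetry test regresses each trial's standard normal deviate (estimate divided by its standard error) on its precision (1/SE); an intercept far from zero signals asymmetry. A minimal sketch with hypothetical trial data (metafor's `regtest` offers a packaged version of this style of test):

```r
# Hypothetical trial effect estimates (log odds ratios) and standard errors
yi  <- c(-0.9, -0.6, -0.7, -0.2, -0.1)
sei <- c(0.50, 0.35, 0.30, 0.15, 0.10)

z    <- yi / sei        # standard normal deviates
prec <- 1 / sei         # precision

fit <- lm(z ~ prec)     # Egger-style asymmetry regression
summary(fit)$coefficients["(Intercept)", ]  # intercept estimate and its test
```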
Agreement between two methods of clinical measurement can be quantified using the differences between observations made using the two methods on the same subjects. The 95% limits of agreement, estimated by mean difference ± 1.96 standard deviations of the differences, provide an interval within which 95% of differences between measurements by the two methods are expected to lie. We describe how graphical methods can be used to investigate the assumptions of the method and we also give confidence intervals. We extend the basic approach to data where there is a relationship between difference and magnitude, both with a simple logarithmic transformation approach and a new, more general, regression approach. We discuss the importance of the repeatability of each method separately and compare an estimate of this to the limits of agreement. We extend the limits of agreement approach to data with repeated measurements, proposing new estimates for equal numbers of replicates by each method on each subject, for unequal numbers of replicates, and for replicated data collected in pairs, where the underlying value of the quantity being measured is changing. Finally, we describe a nonparametric approach to comparing methods.
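A minimal sketch of the basic limits-of-agreement calculation and plot, assuming hypothetical paired measurements by two methods:

```r
# Hypothetical paired measurements by two methods on the same subjects
m1 <- c(10.2, 12.5, 9.8, 14.1, 11.0, 13.3, 10.9, 12.0)
m2 <- c(10.6, 12.1, 10.3, 13.5, 11.4, 13.9, 10.5, 12.6)

d    <- m1 - m2
bias <- mean(d)                          # mean difference between methods
loa  <- bias + c(-1.96, 1.96) * sd(d)    # 95% limits of agreement

# Bland-Altman plot: difference against the mean of the two methods
plot((m1 + m2) / 2, d, xlab = "Mean of methods", ylab = "Difference")
abline(h = c(bias, loa), lty = c(1, 2, 2))
```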
Systematic reviews should build on a protocol that describes the rationale, hypothesis, and planned methods of the review; few reviews report whether a protocol exists. Detailed, well-described protocols can facilitate the understanding and appraisal of the review methods, as well as the detection of modifications to methods and selective reporting in completed reviews. We describe the development of a reporting guideline, the Preferred Reporting Items for Systematic reviews and Meta-Analyses for Protocols 2015 (PRISMA-P 2015). PRISMA-P consists of a 17-item checklist intended to facilitate the preparation and reporting of a robust protocol for the systematic review. Funders and those commissioning reviews might consider mandating the use of the checklist to facilitate the submission of relevant protocol information in funding applications. Similarly, peer reviewers and editors can use the guidance to gauge the completeness and transparency of a systematic review protocol submitted for publication in a journal or other medium.
Protocols of systematic reviews and meta-analyses allow for planning and documentation of review methods, act as a guard against arbitrary decision making during review conduct, enable readers to assess for the presence of selective reporting against completed reviews, and, when made publicly available, reduce duplication of efforts and potentially prompt collaboration. Evidence documenting the existence of selective reporting and excessive duplication of reviews on the same or similar topics is accumulating, and many calls have been made in support of the documentation and public availability of review protocols. Several efforts have emerged in recent years to rectify these problems, including development of an international register for prospective reviews (PROSPERO) and launch of the first open access journal dedicated to the exclusive publication of systematic review products, including protocols (BioMed Central's Systematic Reviews). Furthering these efforts and building on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines, an international group of experts has created a guideline to improve the transparency, accuracy, completeness, and frequency of documented systematic review and meta-analysis protocols: PRISMA-P (for protocols) 2015. The PRISMA-P checklist contains 17 items considered to be essential and minimum components of a systematic review or meta-analysis protocol. This PRISMA-P 2015 Explanation and Elaboration paper provides readers with a full understanding of and evidence about the necessity of each item as well as a model example from an existing published protocol. This paper should be read together with the PRISMA-P 2015 statement. Systematic review authors and assessors are strongly encouraged to make use of PRISMA-P when drafting and appraising review protocols.
Synthesis of multiple randomized controlled trials (RCTs) in a systematic review can summarize the effects of individual outcomes and provide numerical answers about the effectiveness of interventions. Filtering of searches is time consuming, and no single method fulfills the principal requirements of speed with accuracy. Automation of systematic reviews is driven by a necessity to expedite the availability of current best evidence for policy and clinical decision-making. We developed Rayyan (http://rayyan.qcri.org), a free web and mobile app that helps expedite the initial screening of abstracts and titles using a process of semi-automation while incorporating a high level of usability. For the beta testing phase, we used two published Cochrane reviews in which included studies had been selected manually. Their searches, with 1030 records and 273 records, were uploaded to Rayyan. Different features of Rayyan were tested using these two reviews. We also conducted a survey of Rayyan's users and collected feedback through a built-in feature. Pilot testing of Rayyan focused on usability, accuracy against manual methods, and the added value of the prediction feature. The "taster" review (273 records) allowed a quick overview of Rayyan for early comments on usability. The second review (1030 records) required several iterations to identify the previously identified 11 trials. The "suggestions" and "hints," based on the "prediction model," appeared as testing progressed beyond five included studies. Post-rollout user experiences and a reflexive response by the developers enabled real-time modifications and improvements. The survey respondents reported 40% average time savings when using Rayyan compared to other tools, with 34% of the respondents reporting more than 50% time savings. In addition, around 75% of the respondents mentioned that screening and labeling studies, as well as collaborating on reviews, were the two most important features of Rayyan. As of November 2016, Rayyan users exceed 2000 from over 60 countries conducting hundreds of reviews totaling more than 1.6M citations. Feedback from users, obtained mostly through the app web site and a recent survey, has highlighted the ease in exploration of searches, the time saved, and simplicity in sharing and comparing include-exclude decisions. The strongest features of the app, identified and reported in user feedback, were its ability to help in screening and collaboration as well as the time savings it affords to users. Rayyan is responsive and intuitive in use, with significant potential to lighten the load of reviewers.
Much biomedical research is observational. The reporting of such research is often inadequate, which hampers the assessment of its strengths and weaknesses and of a study's generalisability. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Initiative developed recommendations on what should be included in an accurate and complete report of an observational study. We defined the scope of the recommendations to cover three main study designs: cohort, case-control, and cross-sectional studies. We convened a 2-day workshop in September 2004, with methodologists, researchers, and journal editors to draft a checklist of items. This list was subsequently revised during several meetings of the coordinating group and in e-mail discussions with the larger group of STROBE contributors, taking into account empirical evidence and methodological considerations. The workshop and the subsequent iterative process of consultation and revision resulted in a checklist of 22 items (the STROBE Statement) that relate to the title, abstract, introduction, methods, results, and discussion sections of articles. 18 items are common to all three study designs and four are specific for cohort, case-control, or cross-sectional studies. A detailed Explanation and Elaboration document is published separately and is freely available on the Web sites of PLoS Medicine, Annals of Internal Medicine, and Epidemiology. We hope that the STROBE Statement will contribute to improving the quality of reporting of observational studies.
The number of published systematic reviews of studies of healthcare interventions has increased rapidly, and these are used extensively for clinical and policy decisions. Systematic reviews are subject to a range of biases and increasingly include non-randomised studies of interventions. It is important that users can distinguish high quality reviews. Many instruments have been designed to evaluate different aspects of reviews, but there are few comprehensive critical appraisal instruments. AMSTAR was developed to evaluate systematic reviews of randomised trials. In this paper, we report on the updating of AMSTAR and its adaptation to enable more detailed assessment of systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. With moves to base more decisions on real world observational evidence, we believe that AMSTAR 2 will assist decision makers in the identification of high quality systematic reviews, including those based on non-randomised studies of healthcare interventions.
Scoping reviews are a relatively new approach to evidence synthesis, and currently there exists little guidance regarding the decision to choose between a systematic review or scoping review approach when synthesising evidence. The purpose of this article is to clearly describe the differences in indications between scoping reviews and systematic reviews and to provide guidance for when a scoping review is (and is not) appropriate. Researchers may conduct scoping reviews instead of systematic reviews where the purpose of the review is to identify knowledge gaps, scope a body of literature, clarify concepts or investigate research conduct. While useful in their own right, scoping reviews may also be helpful precursors to systematic reviews and can be used to confirm the relevance of inclusion criteria and potential questions. Scoping reviews are a useful tool in the ever-increasing arsenal of evidence synthesis approaches. Although conducted for different purposes compared to systematic reviews, scoping reviews still require rigorous and transparent methods in their conduct to ensure that the results are trustworthy. Our hope is that with clear guidance available regarding whether to conduct a scoping review or a systematic review, fewer scoping reviews will be performed for inappropriate indications better served by a systematic review, and vice versa.
Assessment of risk of bias is regarded as an essential component of a systematic review on the effects of an intervention. The most commonly used tool for randomised trials is the Cochrane risk-of-bias tool. We updated the tool to respond to developments in understanding how bias arises in randomised trials, and to address user feedback on and limitations of the original tool.
Systematic reviews and meta-analyses have become increasingly important in health care. Clinicians read them to keep up to date with their field,1,2 and they are often used as a starting point for developing clinical practice guidelines. Granting agencies may require a systematic review to ensure there is justification for further research,3 and some health care journals are moving in this direction.4 As with all research, the value of a systematic review depends on what was done, what was found, and the clarity of reporting. As with other publications, the reporting quality of systematic reviews varies, limiting readers' ability to assess the strengths and weaknesses of those reviews. Several early studies evaluated the quality of review reports. In 1987, Mulrow examined 50 review articles published in 4 leading medical journals in 1985 and 1986 and found that none met all 8 explicit scientific criteria, such as a quality assessment of included studies.5 In 1987, Sacks and colleagues6 evaluated the adequacy of reporting of 83 meta-analyses on 23 characteristics in 6 domains. Reporting was generally poor; between 1 and 14 characteristics were adequately reported (mean = 7.7; standard deviation = 2.7). A 1996 update of this study found little improvement.7 In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guidance called the QUOROM Statement (QUality Of Reporting Of Meta-analyses), which focused on the reporting of meta-analyses of randomized controlled trials.8 In this article, we summarize a revision of these guidelines, renamed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), which have been updated to address several conceptual and practical advances in the science of systematic reviews (Box 1: conceptual issues in the evolution from QUOROM to PRISMA).
Poor reporting of research hampers assessment and makes it less useful. An international group of methodologists, researchers, and journal editors sets out guidelines to improve reports of observational studies.
The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.
…has positioned itself as a referral centre for clinical care and academic work, with plans to establish specialty clinics or centres of excellence. To achieve this goal, it is essential that the institution produce, and adhere to, its own clinical practice guidelines or evidence-based clinical standards (ECBE), enabling adequate and standardised patient care that applies the best available medical evidence. Obesity is a highly prevalent condition in the general population and is one of the most common reasons for consultation…
Evidence-based Practice (EBP) is a vital principle, with its origins in the 1970s, that has transformed the disciplines of medicine and healthcare. The use of the best available evidence to inform decisions and best practice has since spread across other disciplines, including the environmental sciences through evidence-based conservation and environmental management. Ironically, however, there appears to be only a single scoping review on the impacts and return-on-investment of EBP in healthcare, and it is unclear whether any such evidence exists in the broad field of conservation and environmental management. In this scoping review, we aim to explore the extent to which evaluations of the impacts and return-on-investment of EBP and evidence use have been conducted in conservation and environmental management, on both human and environmental outcomes. We will search at least ten different electronic bibliographic platforms, databases, and search engines for published and grey literature from 1992 to 2025; there will be no geographical or language restrictions on the documents included. A machine learning-assisted review process will be followed using open source tools (ASReview and SysRev) and following the comprehensive SYstematic review Methodology Blending Active Learning and Snowballing (SYMBALS). The findings from the scoping review will be useful for informing organisations and practitioners considering implementing EBP about its benefits and costs, and will also highlight potential research gaps on the impact of EBP and evidence use.
Critical appraisal tools are essential instruments used to systematically assess the validity, reliability, and applicability of research evidence, aiding in informed decision-making across various fields. These tools help to minimize bias and evaluate the methodological rigor of studies. Complementing critical appraisal, reporting standards provide guidelines for transparently and comprehensively documenting research findings. Adherence to these standards ensures clarity, facilitates accurate interpretation, and enhances the reproducibility of research. Together, critical appraisal tools and reporting standards are indispensable for evidence-based practice, promoting the use of high-quality research in decision-making and contributing to the advancement of knowledge.
Abstract Study coding is an essential component of the research synthesis process. Data extracted during study coding serve as a direct link between the included studies and the synthesis results, allowing reviewers to justify claims about the findings from a set of related studies. The purpose of this tutorial is to provide authors, particularly those new to research synthesis, with recommendations to develop study coding manuals and forms that result in efficient, high-quality data extraction. Each of the 10 easy-to-follow practices is supported with additional resources, examples, or non-examples to help authors develop high-quality study coding materials. With the increase in publication of meta-analyses in recent years across many disciplines, a primary goal of this article is to enhance the quality of study coding materials that authors develop.
The increasing variety of literature synthesis methods and the evolving guidelines for conducting them can be overwhelming for early-career researchers. While these guidelines provide a structured approach, they may contain hidden challenges and nuances that only become apparent through practical application. The aim of this paper is to provide supplementary guidance based on the latest available guidelines and two PhD candidates' experiences of conducting scoping reviews. The paper outlines what a scoping review is, when to use a scoping review, and summarises the most recent guidelines for conducting such reviews. We describe five steps of the scoping review process: 1) identifying the research question; 2) scoping review protocol; 3) identifying relevant studies and literature; 4) selecting studies to be included in the review; and 5) data extraction, synthesis and results presentation. We share our learnings, including the use of Rayyan and Covidence software for the systematic management of literature. Though not presenting new methodologies, this paper provides practical advice on how to apply the scoping review methodology to support its effective and rigorous implementation.
Pediatric research is often faced with ethical and logistical challenges. Pediatric literature is dominated by studies that are performed to answer similar questions but are of small sample size, varying in methodology, results, and conclusions. Therefore, evidence synthesis methods are necessary for summarizing existing evidence and recognizing gaps in knowledge. A systematic review is used to summarize evidence in conjunction with meta-analysis, a statistical technique that combines results from studies to generate a summary estimate with CIs. Systematic reviews with meta-analysis represent the highest level in the hierarchy of evidence to inform clinical practice. Meta-analysis, by combining studies, increases precision and may help determine certainty of evidence for a specific clinical question. Systematic reviews and meta-analyses allow the clinician to explore reasons for variability and heterogeneity. Based on the intended research question, several types of meta-analysis are available, which must be chosen appropriately. We review the principles and types of meta-analyses that are used to summarize clinical evidence. We also summarize the common strengths (e.g., increasing precision) and limitations (e.g., publication bias and study heterogeneity) of a meta-analysis. Our review will inform clinicians on appraising published meta-analyses and may equip them with a framework and methods to perform their own.
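To make the pooling step concrete, a sketch of inverse-variance pooling with a DerSimonian-Laird random-effects variance, assuming hypothetical study effect sizes:

```r
# Hypothetical study effect sizes (e.g., mean differences) and variances
yi <- c(0.30, 0.10, 0.55, 0.20, 0.40)
vi <- c(0.02, 0.05, 0.08, 0.03, 0.06)

wi <- 1 / vi
Q  <- sum(wi * (yi - sum(wi * yi) / sum(wi))^2)
df <- length(yi) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2
tau2 <- max(0, (Q - df) / (sum(wi) - sum(wi^2) / sum(wi)))

# Random-effects weights, pooled estimate, and 95% CI
wre    <- 1 / (vi + tau2)
pooled <- sum(wre * yi) / sum(wre)
se     <- sqrt(1 / sum(wre))
ci     <- pooled + c(-1.96, 1.96) * se
cat(sprintf("Pooled = %.3f, 95%% CI [%.3f, %.3f], tau^2 = %.3f\n",
            pooled, ci[1], ci[2], tau2))
```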
Individual participant data (IPD) meta-analysis of randomised trials is a crucial method for detecting and investigating effect modifications in medical research. However, few studies have explored scenarios involving systematically missing data on discrete effect modifiers (EMs) in IPD meta-analyses with a limited number of trials. This simulation study examines the impact of systematic missing values in IPD meta-analysis using a two-stage imputation method. We simulated IPD meta-analyses of randomised trials with multiple studies that had systematically missing data on the EM. A multivariable Weibull survival model was specified to assess beneficial (hazard ratio (HR) = 0.8), null (HR = 1.0), and harmful (HR = 1.2) treatment effects for low, medium, and high levels of an EM, respectively. Bias and coverage were evaluated using Monte Carlo simulations. The absolute bias for common and heterogeneous effect IPD meta-analyses was less than 0.016 and 0.007, respectively, with coverage close to its nominal value across all EM levels. An uncongenial imputation model resulted in larger bias, even when the proportion of studies with systematically missing data on the EM was small. Overall, the proposed two-stage imputation approach provided unbiased estimates with improved precision. The assumptions and limitations of this approach are discussed.
Objective: To evaluate the methodological quality, reporting quality, and evidence quality of systematic reviews and meta-analyses on Chinese patented oral medicines that promote blood circulation and remove blood stasis combined with conventional Western medicine in the treatment of coronary heart disease angina pectoris. The aim is to identify and address methodological issues in systematic reviews of Chinese patented oral medicines for promoting blood circulation and removing blood stasis in angina pectoris. This study also offers methodological guidance for future research design and implementation, and provides a basis for clinical decision-making. Methods: A systematic search was performed using CNKI, Wanfang, VIP, CBM, PubMed, Cochrane Library, Embase, and Web of Science databases, covering the period from the inception of each database to July 18, 2024; meta-analyses of randomized controlled trials were included. Methodological quality was evaluated using AMSTAR-2, reporting quality using PRISMA 2020, and evidence quality of the outcome indicators using GRADE. Results: Twenty meta-analyses were examined, involving a total of 41,231 patients with angina pectoris. The methodological quality of all studies was rated as "critically low," with notable deficiencies in the registration of the study protocol, study inclusion criteria, assessment of individual study risk of bias, evaluation of the likelihood of publication bias, and discussion of the effects of publication bias on the results. The assessment of the three qualities mentioned above revealed common issues, including incomplete abstracts, lack of characteristics for pooled outcomes, failure to report risk of bias across studies, missing registration information, lack of accessibility details for protocols, and unreported modifications to registered protocols or plans. Evidence quality assessment revealed that 16 outcome indicators were rated as "moderate," 29 as "low," and 46 as "very low." Conclusion: Despite the demonstrated efficacy and safety of Chinese patented oral medicines that promote blood circulation and remove blood stasis in the adjuvant treatment of angina pectoris, the low methodological and reporting quality of current systematic reviews and meta-analyses compromises the reliability of these findings. Future research should focus on standardizing study design and reporting to improve the reliability of evidence.
The field of contraceptive research and development (R&D) has not historically benefitted from a strong or coordinated knowledge exchange effort, due in part to the fact that it has no base in any single academic field of study. A number of funders, platforms, and research and advocacy organizations play a role in knowledge promotion and coordination, but these efforts are limited. The Contraceptive Technology Innovation (CTI) Exchange is one such platform that recently embarked on a co-creation process to improve knowledge exchange for the contraceptive R&D field as a whole and to inform its own redesign and relaunch efforts. This co-creation process, informed by design thinking principles, collected information and ideas from 55 participants from 35 organizations and 11 countries and identified three significant knowledge exchange needs in the contraceptive R&D community: 1) credible, current, and creative field-specific content; 2) curated, aggregated, and easy-to-use resources; and 3) audience-driven knowledge exchange. These findings will be used to simplify and refocus the CTI Exchange to better meet the needs of contraceptive researchers and developers at every career stage, but they also have field-wide applicability. We are calling for a swell of knowledge exchange efforts in the field of contraceptive R&D to prompt and promote health and well-being worldwide. We encourage contraceptive innovators and knowledge exchange specialists to make use of and contribute to the CTI Exchange and apply the findings outlined here to shape their own work.
Science is justly admired as a cumulative process ("standing on the shoulders of giants"), yet scientific knowledge is typically built on a patchwork of research contributions without much coordination. This lack of efficiency has specifically been addressed in clinical research by recommendations against avoidable research waste and for living systematic reviews and prospective meta-analysis. We propose to further those recommendations with ALL-IN meta-analysis: Anytime Live and Leading INterim meta-analysis. ALL-IN provides meta-analysis based on e-values and anytime-valid confidence intervals that can be updated at any time: reanalysing after each new observation while retaining type-I error and coverage guarantees; live, with no need to prespecify the looks; and leading, in that the meta-analysis can be the leading source of information for decisions on whether individual studies should be initiated, stopped, or expanded, without losing validity to accumulation bias. The analysis design requires no information about the trial sample sizes or the number of trials eventually included, so ALL-IN meta-analysis can be applied retrospectively as well as prospectively, to evaluate the evidence once or sequentially. Because the intention of the analysis does not change the validity of the results, the results of the analysis can change the intentions ('optional stopping' and 'optional continuation' based on the results so far). On the one hand, any analysis can be turned into a living one, or even become prospective and real-time by updating with new trial data and including interim data from trials that are still ongoing, without any changes in the cut-offs for testing or the method for interval estimation. On the other hand, no stopping rule needs to be enforced for the analysis to remain valid, so participating in a prospective meta-analysis does not require outside control over data collection. Hence ALL-IN meta-analysis breathes life into living systematic reviews, and offers better and simpler statistics, efficiency, collaboration and communication.
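The anytime-valid machinery can be illustrated with the simplest e-process, a running product of likelihood ratios, which may be monitored after every observation without inflating the type-I error. This is a toy sketch under assumed Bernoulli data, not the authors' software or their meta-analytic construction:

```r
# Toy e-process: Bernoulli data, H0: p = 0.5 vs point alternative p = 0.7
set.seed(1)
x <- rbinom(200, 1, 0.7)                    # simulated outcomes (truth: p = 0.7)

lr <- ifelse(x == 1, 0.7 / 0.5, 0.3 / 0.5)  # per-observation likelihood ratio
e  <- cumprod(lr)                           # e-process: valid at every look

# By Ville's inequality, P(sup e >= 1/alpha) <= alpha under H0,
# so stopping the first time e exceeds 20 controls alpha = 0.05.
which(e >= 20)[1]
```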
Introduction: The use of reporting guidelines and clinical trial registration policies by academic journals reduces bias and improves transparency in clinical research. It is unknown whether geriatric and gerontology journals mention, recommend, or require their use for the studies they may potentially publish. The purpose of this study is to assess the submission guidelines of the top geriatric and gerontology journals for their editorial recommendation or requirement of predetermined reporting guidelines and clinical trial registration. Methods: Using the 2021 Scopus CiteScore tool, we identified the top 100 journals in the "Geriatrics and Gerontology" subcategory. We reviewed each journal's "Instructions to Authors" for references to reporting guidelines commonly used for various study designs, categorizing them as "Not Mentioned," "Recommended," "Does Not Require," or "Required." Additionally, we assessed how each journal addressed clinical trial registration using the same classification system. Results: Among the 100 journals reviewed, none referenced the QUOROM statement. In contrast, the CONSORT statement was the most frequently mentioned, with 44 journals (44%) recommending or requiring its use. PRISMA guidelines were omitted by 57 journals (57%), while study registration was recommended or required by 92 journals (92%). Conclusion: The recommendation or requirement of reporting guidelines and clinical trial registration in the top 100 geriatric and gerontology journals is inconsistent. Journal editors should strongly recommend that authors follow reporting guidelines to reduce potential bias and improve transparency in the articles they publish.
Masaki Futamura | Nihon Shoni Arerugi Gakkaishi (The Japanese Journal of Pediatric Allergy and Clinical Immunology)
The number of literature reviews in the fields of ecology and conservation has increased dramatically in recent years. Scientists conduct systematic literature reviews with the aim of drawing conclusions based on the content of a representative sample of publications. This requires subjective judgments on qualitative content, including interpretations and deductions. However, subjective judgments can differ substantially even between highly trained experts faced with the same evidence. Because classification of content into codes by one individual rater is prone to subjectivity and error, general guidelines recommend checking the produced data for consistency and reliability. Metrics on agreement between multiple raters exist to assess the rate of agreement (consistency). These metrics do not account for mistakes or allow for their correction, while group discussions about codes that have been derived from classification of qualitative data have been shown to improve reliability and accuracy. Here, we describe a pragmatic approach to reliability testing that gives insights into the error rate of multiple raters. Five independent raters rated and discussed categories for 23 variables within 21 peer-reviewed publications on conservation management plans. Mistakes, including overlooking information in the text, were the most common source of disagreement, followed by differences in interpretation and ambiguity around categories. Discussions could resolve most differences in ratings. We recommend our approach as a significant improvement on current review and synthesis approaches that lack assessment of misclassification.
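As a concrete example of one such agreement metric for the two-rater case, a sketch computing raw agreement and Cohen's kappa by hand from hypothetical ratings (multi-rater extensions such as Fleiss' kappa follow the same logic):

```r
# Hypothetical categorical codes assigned by two raters to 12 publications
r1 <- c("A", "B", "A", "C", "B", "A", "C", "B", "A", "A", "B", "C")
r2 <- c("A", "B", "A", "B", "B", "A", "C", "B", "A", "C", "B", "C")

tab <- table(r1, r2)
po  <- sum(diag(tab)) / sum(tab)                      # observed agreement
pe  <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # chance agreement
kappa <- (po - pe) / (1 - pe)                         # Cohen's kappa
cat(sprintf("Agreement = %.2f, kappa = %.2f\n", po, kappa))
```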
Nasima Sawlat, Hayatullah Masomi | Journal of Mathematics and Statistics Studies
Sampling error is a significant factor in research, denoting the variance between sample statistics and actual population values. This study examines techniques for quantifying and mitigating sampling error to improve the reliability and accuracy of research findings. Essential methods for determining sampling error, such as the standard error of the mean, confidence intervals, proportional error estimates, and bootstrapping, are examined comprehensively. Strategies to mitigate sampling error, including augmenting sample size, using stratified sampling, utilizing systematic sampling, implementing weighted adjustments, and enhancing sampling frames, are examined. The results underscore the significance of rigorous sampling techniques in reducing error, guaranteeing representativeness, and improving the validity of outcomes. The research emphasizes the significance of sophisticated statistical methodologies and pilot studies in mitigating constraints in sampling methods. This study offers pragmatic insights and methodological directives for academics, policymakers, and practitioners in several fields. It also delineates avenues for further investigation, including the use of sophisticated computational techniques and context-specific sampling methodologies, to further reduce sampling error and enhance study quality.
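The core quantities named here are straightforward to compute; a minimal sketch under an assumed random sample:

```r
set.seed(42)
x <- rnorm(50, mean = 100, sd = 15)   # hypothetical sample of n = 50

sem <- sd(x) / sqrt(length(x))        # standard error of the mean
ci  <- mean(x) + qt(c(0.025, 0.975), df = length(x) - 1) * sem  # 95% t-based CI

# Nonparametric bootstrap estimate of the standard error
boot_means <- replicate(2000, mean(sample(x, replace = TRUE)))
se_boot <- sd(boot_means)

cat(sprintf("SEM = %.2f, bootstrap SE = %.2f, 95%% CI [%.1f, %.1f]\n",
            sem, se_boot, ci[1], ci[2]))
```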
Lori M. Rhudy | Journal of Neuroscience Nursing
Abstract This review reflects on the lessons and limitations of the first large, collaborative replication project in sports and exercise science. We discuss the challenges and barriers faced, while also exploring the broader contribution of replication to the field. This project faced many practical challenges when preparing studies for replication, specifically the poor reporting of statistical information, the availability of original raw data and the prioritisation of feasibility at the risk of some bias. However, we believe these issues reflect the larger sports and exercise science field. Therefore, our research culture needs to change to minimise the active engagement in behaviours that reduce reproducibility and replicability, and enable collective evaluation of research in line with the foundations of scientific rigour. In addition, discourse with the original study authors was a challenging process as many were unwilling to engage, and this indicates a problematic perception of replication. We also reflect on the contribution of replication to theory development in sports and exercise science so that this review can serve as a valuable resource for understanding replication and can aid future replication efforts.
Abstract Background: The replicability of sports and exercise research has not been assessed previously despite concerns about scientific practices within the field. Aim: This study aims to provide an initial estimate of the replicability of applied sports and exercise science research published in quartile 1 journals (SCImago journal ranking for 2019 in the Sports Science subject category; www.scimagojr.com) between 2016 and 2021. Methods: A formalised selection protocol for this replication project was previously published. Voluntary collaborators were recruited, and studies were allocated in a stratified and randomised manner on the basis of equipment and expertise. Original authors were contacted to provide deidentified raw data, to review preregistrations, and to provide methodological clarifications. A multiple inferential strategy was employed to analyse the replication data. The same analysis (i.e. F test or t test) was used to determine whether the replication effect size was statistically significant and in the same direction as the original effect size. Z-tests were used to determine whether the original and replication effect size estimates were compatible or significantly different in magnitude. Results: In total, 25 replication studies were included for analysis. Of the 25, 10 replications used paired t tests, 1 used an independent t test, and 14 used an analysis of variance (ANOVA) for the statistical analyses. In all, 7 (28%) studies demonstrated robust replicability, meeting all three validation criteria: achieving statistical significance (p < 0.05) in the same direction as the original study and showing compatible effect size magnitudes as per the Z-test (p > 0.05). Conclusion: There was a substantial decrease in the published effect size estimate magnitudes when replicated; therefore, sports and exercise science researchers should consider effect size uncertainty when conducting subsequent power analyses. Additionally, there were many barriers to conducting the replication studies, e.g., original author communication and poor data and reporting transparency.
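The compatibility check described here can be sketched as a Z-test on the difference between the original and replication effect size estimates; the numbers below are hypothetical:

```r
# Hypothetical standardized effect sizes and their standard errors
d_orig <- 0.80; se_orig <- 0.25   # original study
d_rep  <- 0.35; se_rep  <- 0.15   # replication

z <- (d_orig - d_rep) / sqrt(se_orig^2 + se_rep^2)
p <- 2 * pnorm(-abs(z))           # two-sided p-value

# p > 0.05 would be read as "compatible magnitudes" under this criterion
cat(sprintf("z = %.2f, p = %.3f\n", z, p))
```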
Data collection serves as the cornerstone in the study of clinical research questions. Two types of data are commonly utilized in medicine: (1) qualitative; and (2) quantitative. Several methods are commonly employed to gather data, regardless of whether retrospective or prospective studies are used: (1) interviews; (2) observational methods; (3) questionnaires; (4) investigation parameters; (5) medical records; and (6) electronic chart reviews. Each source type has its own advantages and disadvantages in terms of the accuracy and availability of the data to be extracted. We will focus on the important parts of the research methodology: (1) data collection; and (2) subgroup analyses. Errors in research can arise from various sources, including investigators, instruments, and subjects, making the validation and reliability of research tools crucial for ensuring the credibility of findings. Subgroup analyses can either be planned before treatment or emerge afterwards (post hoc). The interpretation of subgroup effects should consider, with caution, the interaction between the treatment effect and various patient variables.
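One standard way to assess such effect modification is a model with a treatment-by-subgroup product term, rather than separate per-subgroup p-values. A sketch on simulated data, with all variable names hypothetical:

```r
set.seed(7)
n <- 400
treatment <- rbinom(n, 1, 0.5)
subgroup  <- rbinom(n, 1, 0.5)                   # e.g., sex or age stratum
logit     <- -0.5 + 0.4 * treatment - 0.2 * subgroup +
             0.3 * treatment * subgroup          # true interaction of 0.3
outcome   <- rbinom(n, 1, plogis(logit))

fit <- glm(outcome ~ treatment * subgroup, family = binomial)
summary(fit)$coefficients["treatment:subgroup", ]  # test of effect modification
```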
Scientific journals play a crucial role in promoting open science. The Transparency and Openness Promotion (TOP) guidelines identify a range of standards that journals can adopt to promote the verifiability of the research they publish. We evaluated the adoption of TOP standards within health psychology and behavioural medicine journal policies, as this had not yet been systematically assessed. In a cross-sectional study of 19 health psychology and behavioural medicine journals, eight raters evaluated TOP standard adoption by these journals using the TRUST journal policy evaluation tool. Out of a total possible score of 29, journal scores ranged from 1 to 13 (median = 6). Standards related to the use of reporting guidelines and data transparency were adopted the most, whereas standards related to pre-registration of study analysis plans and citation of code were adopted the least. TOP guidelines have to date been poorly adopted within health psychology and behavioural medicine journal policies. There are several relatively straightforward opportunities for improvement, such as expanding policies around research data to also cover code and materials, and reducing ambiguity of wording. However, other improvements may require a collaborative approach involving all research stakeholders.
Background The reproducibility crisis is among the major concerns of many scientists worldwide. Some researchers believe that the crisis is largely attributable to the conventional p significance threshold, arbitrarily chosen to be 0.05, and propose lowering the cut-off to 0.005. Reducing the cut-off, although it decreases the false-positive rate, is associated with an increase in the false-negative rate. Recently, a flexible p significance threshold that minimizes the weighted sum of errors in statistical hypothesis tests was proposed. Methods The current in silico study was conducted to compare the error rates under different assumptions for the p significance threshold: 0.05, 0.005, and a flexible threshold. Using a Monte Carlo simulation, the false-positive rate (when the null hypothesis was true) and the false-negative rate (when the alternative hypothesis was true) were calculated for a hypothetical randomized clinical trial. Results Increasing the study sample size was associated with a reduction in the false-negative rate; however, the false-positive rate remained fixed regardless of the sample size when fixed significance thresholds were used, whereas it decreased when the flexible threshold was employed. While employing the flexible threshold resolved the reproducibility crisis to a large extent, the method uncovered an inherent conflict in the frequentist statistical inference framework: the flexible p significance threshold can only be calculated a posteriori, after the results are obtained. The threshold would thus differ even between exact replications, which is counterintuitive. Conclusions It seems that relying on frequentist statistical inference and the p value is no longer a viable approach. Emphasis should be shifted toward alternative approaches to data analysis, for example Bayesian statistical methods.
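The Monte Carlo design described in this abstract can be illustrated with a simplified sketch. The Python example below estimates false-positive and false-negative rates for a hypothetical two-arm trial under the two fixed thresholds (0.05 and 0.005); the flexible threshold itself is not reproduced here, and the number of simulations, sample size, and assumed effect size are illustrative assumptions rather than the study's settings.

```python
# Hedged sketch: Monte Carlo estimation of false-positive and false-negative
# rates for a two-arm trial under fixed significance thresholds.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
N_SIM, N_PER_ARM, EFFECT = 10_000, 50, 0.5  # illustrative assumptions

def rejection_rate(true_effect, alpha):
    """Fraction of simulated trials with p < alpha."""
    hits = 0
    for _ in range(N_SIM):
        control = rng.normal(0.0, 1.0, N_PER_ARM)
        treated = rng.normal(true_effect, 1.0, N_PER_ARM)
        if ttest_ind(control, treated).pvalue < alpha:
            hits += 1
    return hits / N_SIM

for alpha in (0.05, 0.005):
    fpr = rejection_rate(0.0, alpha)         # null true: rejections are false positives
    fnr = 1 - rejection_rate(EFFECT, alpha)  # alternative true: misses are false negatives
    print(f"alpha={alpha}: FPR ~ {fpr:.3f}, FNR ~ {fnr:.3f}")
```

As the abstract notes, the estimated false-positive rate stays pinned near alpha for any fixed threshold, while lowering alpha from 0.05 to 0.005 visibly inflates the false-negative rate at a fixed sample size.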
Cross-sectional studies are a useful observational study design that provides a snapshot of a population’s health status at a specific moment in time. Analytical cross-sectional studies are often included in systematic reviews investigating the etiology or risk of diseases, and descriptive cross-sectional studies are often used to determine the prevalence of a disease. As required of all studies that meet the eligibility criteria for a systematic review, analytical cross-sectional studies should be subjected to appropriate critical appraisal of their methodological quality to determine the risk of bias. The JBI Effectiveness Methodology Group is currently undertaking a comprehensive revision of the entire suite of JBI critical appraisal tools to align with recent advances in risk-of-bias assessment. This paper presents the revised critical appraisal tool for risk-of-bias assessment of analytical cross-sectional studies. Applying tools such as the revised JBI tools within systematic reviews allows end users to make informed decisions using the evidence. We discuss major changes from previous iterations of this tool and justify these changes within the context of the broader advancements in risk-of-bias assessment science. We also offer practical guidance for the use of this revised tool and provide examples of interpreting the results of risk-of-bias assessment for analytical cross-sectional studies, to support reviewers including these studies in their systematic reviews.
Technology has profoundly transformed veterinary medicine, providing innovative tools for diagnosis, treatment, reproduction, monitoring and animal welfare. In this context, the present study analysed the scientific production related to the application of technologies in veterinary medicine by identifying trends, key actors and predominant themes in the field. The research was carried out as a quantitative bibliometric review using the Scopus database, from which 938 relevant studies were retrieved. Metadata were analysed with the R Studio software using the bibliometrix package, which allowed the information to be visualised and systematised. Among the main results, a sustained growth in publications since 2016 is observed, with the United States and China as the most productive countries and a predominance of scientific articles. The most relevant themes include artificial intelligence, nanotechnology, advanced imaging, 3D printing and reproductive technologies. Taken together, the bibliometric indicators applied made it possible to characterise the evolution of the field, identify the most cited works and recognise the emerging areas that shape the research agenda in technology and veterinary medicine.
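The study's analysis was performed with the bibliometrix package in R Studio; as a rough analogue only, the sketch below shows how two of the kinds of indicators it reports (annual scientific production and country productivity) might be computed in Python from a hypothetical Scopus CSV export. The file name, column names, and the crude country-parsing rule are assumptions for illustration, not the authors' workflow.

```python
# Hedged sketch: basic bibliometric indicators from a hypothetical Scopus
# CSV export with "Year" and "Affiliations" columns.
import pandas as pd

df = pd.read_csv("scopus_export.csv")  # hypothetical file name

# Annual scientific production: document counts per publication year.
per_year = df["Year"].value_counts().sort_index()
print(per_year.tail(10))

# Crude country productivity: take the country as the last comma-separated
# token of the first listed affiliation (a simplification of what
# bibliometrix does with affiliation strings).
countries = (df["Affiliations"].dropna()
             .str.split(";").str[0]   # first affiliation only
             .str.split(",").str[-1]
             .str.strip())
print(countries.value_counts().head(10))
```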
ABSTRACT Reproducibility and replicability of study results are crucial for advancing scientific knowledge. However, achieving these goals is often challenging, which can compromise the credibility of research and incur immeasurable costs for the progression of science. Despite efforts to standardize reporting with guidelines, the description of statistical methodology in manuscripts often remains insufficient, limiting the possibility of replicating scientific studies. A thorough, transparent, and complete report of statistical methods is essential for understanding study results and reproducing the statistical strategies implemented in previous studies. This review outlines the key statistical reporting elements required to replicate the statistical methods of most current veterinary pharmacology studies. It also offers a protocol for statistical reporting to aid in manuscript preparation and to assist trialists and editors in the collective effort to advance veterinary pharmacology research.