
Research Methods & Reporting

STARD for Abstracts: essential items for reporting diagnostic accuracy studies in journal or conference abstracts

BMJ 2017; 358 doi: https://doi.org/10.1136/bmj.j3751 (Published 17 August 2017) Cite this as: BMJ 2017;358:j3751
  1. Jérémie F Cohen, postdoctoral fellow1 2,
  2. Daniël A Korevaar, PhD candidate1,
  3. Constantine A Gatsonis, professor of biostatistics3,
  4. Paul P Glasziou, professor of evidence-based medicine4,
  5. Lotty Hooft, co-director5,
  6. David Moher, senior scientist6 7,
  7. Johannes B Reitsma, associate professor of clinical epidemiology8,
  8. Henrica CW de Vet, professor of clinimetrics9,
  9. Patrick M Bossuyt, professor of clinical epidemiology1
  10. for the STARD Group
  1. Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Academic Medical Centre, University of Amsterdam, Amsterdam, Netherlands
  2. INSERM UMR 1153 and Department of Pediatrics, Necker-Enfants malades Hospital, AP-HP, Paris Descartes University, Paris, France
  3. Department of Biostatistics, Brown University School of Public Health, Providence, Rhode Island, USA
  4. Centre for Research in Evidence-Based Practice, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, Queensland, Australia
  5. Dutch Cochrane Centre, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, University of Utrecht, Utrecht, Netherlands
  6. Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
  7. School of Epidemiology, Public Health and Preventive Medicine, University of Ottawa, Ottawa, Canada
  8. Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, University of Utrecht, Utrecht, Netherlands
  9. Department of Epidemiology and Biostatistics, EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, Netherlands

  Correspondence to: J F Cohen jeremie.cohen@inserm.fr

Many abstracts of diagnostic accuracy studies are currently insufficiently informative. We extended the STARD (Standards for Reporting Diagnostic Accuracy) statement by developing a list of essential items that authors should consider when reporting diagnostic accuracy studies in journal or conference abstracts. After a literature review of published guidance for reporting biomedical studies, we identified 39 items potentially relevant to report in an abstract. We then selected essential items through a two round web based survey among the 85 members of the STARD Group, followed by discussions within an executive committee. Seventy three STARD Group members (86%) responded, with a 100% completion rate. STARD for Abstracts is a list of 11 essential items, to be reported in every abstract of a diagnostic accuracy study. We provide examples of complete reporting and template text for writing informative abstracts.

Summary points

  • The STARD statement has become an internationally accepted reporting guideline for diagnostic accuracy studies

  • STARD for Abstracts is intended to improve the completeness and informativeness of journal and conference abstracts of diagnostic accuracy studies

  • STARD for Abstracts presents a minimal set of 11 items (specific journals or organisations could ask for additional information); whenever space restrictions allow it, authors may also incorporate other STARD 2015 elements in their abstract

Abstracts play a critical role in the use of research. Clinicians and researchers use abstracts to decide whether they should read the full journal article, attend the conference presentation, or contact the authors for more information. Systematic reviewers screen large numbers of abstracts to assess study eligibility. In some cases, study abstracts may be the only information available to clinicians, researchers, reviewers, guideline developers, or policy makers.1 In evaluations, the proportion of diagnostic accuracy studies presented as conference abstracts that were eventually reported in full journal articles was found to be as low as 39%.2 3 4

We recently evaluated the quality of reporting of abstracts of diagnostic accuracy studies published in several high impact journals and abstracts presented at a major ophthalmology conference.5 6 In line with previous authors,3 7 we found that many of these abstracts were insufficiently informative. Key items, such as eligibility criteria, study setting, patient sampling procedures, and confidence intervals around accuracy estimates, were reported in less than half of the abstracts.5 6 This makes it difficult for readers to assess the validity and applicability of the study findings.

Ideally, studies should be free from deficiencies, and the results of the study should reflect the “true” accuracy of the test under evaluation. Major sources of bias in diagnostic accuracy studies include methodological flaws in participant recruitment, data collection, test execution and interpretation, and data analysis.8 9 Even when free of bias, study findings are not necessarily generalisable to all applications. Diagnostic accuracy can vary across studies because of variations in study setting, participant characteristics, disease prevalence and severity, and aspects of test execution and interpretation.10 Risk of bias and concerns about applicability can only be evaluated if study reports are sufficiently informative.

Aim and scope

The Standards for Reporting Diagnostic Accuracy (STARD) initiative was developed in response to increasing evidence of suboptimal reporting of diagnostic accuracy studies in scientific journals.8 9 The STARD Group developed a list of essential items that should be presented in all full reports of diagnostic accuracy studies.11 Since its launch in 2003, STARD has been endorsed by more than 200 journals, including The BMJ. Reports of diagnostic accuracy studies have become more complete since then, although there is still room for further improvement.12 STARD 2015, an update of the original STARD statement, was recently published by The BMJ, Radiology, and Clinical Chemistry.13 14 15

Unlike some other reporting guidelines, such as CONSORT (Consolidated Standards of Reporting Trials) for randomised controlled trials1 and PRISMA (Preferred Reporting Items for Systematic reviews and Meta-analyses) for systematic reviews,16 STARD has so far not provided guidance for writing abstracts. Here we present a separate reporting guideline that can help to improve the informativeness of abstracts of diagnostic accuracy studies, both for journals and for conferences.

The guiding principle in the development of the checklist was to identify essential items that should be reported in all abstracts of diagnostic accuracy studies, considering the usual 200 to 300 word limit. The items can assist authors in presenting informative abstracts and help readers in deciding whether to invest time in reading the full report, attending the conference presentation, or contacting the authors for more information.

Methods for developing STARD for Abstracts

Detailed survey methods and results are presented in supplementary eAppendix 1 and eTables 1 and 2. We relied on standard processes for developing reporting guidelines.17 Initially we formed an executive committee (DAK and JFC, clinicians and, respectively, doctoral and postdoctoral research fellows; PMB, CAG, JBR, and LH, respectively, professor of clinical epidemiology, professor of biostatistics, associate professor of clinical epidemiology, and co-director of the Dutch Cochrane Centre) and developed a protocol.18 We then conducted a literature review, which focused on previously published guidance for reporting biomedical studies (full texts and abstracts), including STARD 2015, and on studies of the methodological quality of diagnostic accuracy studies.5 Thereafter we listed 39 items judged potentially relevant to report in abstracts (see supplementary eAppendix 2).

We then invited the STARD Group, which consists of 85 clinical epidemiologists, statisticians, journal editors, and other stakeholders, to participate in a two round web based survey, aiming to obtain consensus on which of these items were essential.

Seventy three STARD Group members responded in both rounds (86%), with a 100% completion rate. In the first round, participants were asked to rate the extent to which each candidate item was essential for abstracts. Consensus, defined as a positive response by at least two thirds of the respondents, was reached for 10 items. We then developed a draft STARD for Abstracts checklist and circulated it within the STARD Steering Committee. The list was fine tuned until the executive committee agreed on it.

In the second round, STARD Group members were asked whether they thought any of the remaining candidate items, apart from the 10 selected in the first round, should be added to the list. No consensus was reached about adding any other item. In both rounds of the survey, participants had the option to provide comments in open comment boxes.
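For illustration, the two thirds rule that defined consensus can be expressed as a simple check. The Python sketch below applies the threshold to hypothetical vote counts; only the number of respondents (73) and the two thirds threshold are taken from the survey, and the item names and vote counts are invented (item level results appear in the supplementary eTables).

RESPONDENTS = 73        # STARD Group members who completed both survey rounds
THRESHOLD = 2 / 3       # consensus: at least two thirds rate an item as essential

# Hypothetical first round results: candidate item -> positive responses
votes = {
    "eligibility criteria": 64,
    "study setting": 47,
}

for item, positive in votes.items():
    reached = positive >= THRESHOLD * RESPONDENTS  # two thirds of 73 is about 48.7
    print(f"{item}: {positive}/{RESPONDENTS} -> {'consensus' if reached else 'no consensus'}")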

After the survey, a revised draft STARD for Abstracts checklist was established. During a teleconference in August 2015, the executive committee agreed on incorporating two additional elements, merging these into the already selected items. This was based on concerns expressed in comments by STARD Group members during the survey. The draft list of 10 items was then circulated to members of the STARD Steering Committee to provide feedback. Before the final list was agreed upon, the executive committee decided to add an 11th item about study registration (see supplementary eAppendix 3 for a description of the flow of items through the process), to ensure consistency with another STARD initiative promoting the prospective registration of diagnostic accuracy studies.19

STARD for Abstracts

STARD for Abstracts presents a checklist of 11 essential items, to be considered in every abstract that reports on a diagnostic accuracy study (see table 1 for the checklist and table 2 for key terminology). The structure of STARD for Abstracts follows that of a typical biomedical abstract, with headings pertaining to Background and Objectives, Methods, Results, and Discussion sections.

Table 1

STARD for Abstracts: essential items for reporting diagnostic accuracy studies in journal or conference abstracts

Table 2

Key STARD terminology


Because 10 out of 11 STARD for Abstracts items are similar to those from STARD 2015 (see supplementary eTable 3), we did not develop a separate explanation and elaboration document; instructions can be found in STARD 2015.20 To illustrate the information that corresponds to each item, we collected examples of complete reporting (see supplementary eAppendices 4-7). To further assist authors in writing abstracts, we developed template text for each item and an example abstract (see table 3 for template text and the box for an example of application).

Table 3

STARD for Abstracts template text


Applicability and implementation

In developing STARD for Abstracts, we aimed to identify items that would apply to any abstract of a diagnostic accuracy study. The list presents a minimum, and specific journals or organisations could ask for additional information, such as variability across readers in imaging studies or analytical performance in laboratory test studies. Whenever space restrictions allow it, authors may incorporate other elements from STARD 2015 in their abstract.

Based on our evaluations, we believe it is possible to address all 11 items within the 200 to 300 word limit that typically applies to abstracts (supplementary eAppendices 4-7 illustrate real abstracts, with 237 to 339 words, that comply well with the checklist). We do not make recommendations about how abstracts should be structured but only recommend that this minimal set of information be reported within every abstract. Some conferences invite authors to provide a figure with their abstract. If so, we recommend considering submission of a diagram reflecting the design and flow of participants through the study.13

To improve the completeness of reporting, simply developing a list of items is insufficient; dissemination, endorsement, and implementation are also critical.17 21 We invite journal editors and conference organisers to endorse STARD for Abstracts by drawing attention to this list of items in their instructions to authors and on conference websites. The template texts may also facilitate writing abstracts of diagnostic accuracy studies (see table 3 and the box).

Box 1: Example of an application of STARD for Abstracts template text

Point-of-care D-dimer testing for diagnosing pulmonary embolism in primary care
  • Objective: To evaluate the negative and positive predictive value of D-dimer testing in patients with suspected pulmonary embolism in primary care.

  • Methods: We conducted a prospective study among 70 general practitioners in the UK. Eligible for inclusion were consecutive adults, aged 18 to 70 years, with suspected pulmonary embolism based on presenting signs and symptoms. All consenting patients underwent a qualitative point-of-care D-dimer test (with a positivity cut-off of 80 ng/mL) performed by the general practitioner. Patients with a positive D-dimer test result were referred to secondary care for further management according to national guidelines. Three months’ clinical follow-up was used as the reference standard in patients with a negative D-dimer test result.

  • Results: Of 500 patients included in the analysis, the diagnosis of pulmonary embolism was confirmed in 50 and excluded in 450. Three cases of pulmonary embolism were observed among the 273 patients with negative D-dimer test results. The negative predictive value of point-of-care D-dimer testing was 98.9% (95% confidence interval 96.8% to 99.8%) and the positive predictive value was 20.7% (15.6% to 26.6%).

  • Discussion: With a high negative predictive value, point-of-care D-dimer testing could be used for the triage of adults with suspected pulmonary embolism seen in primary care.

  • Registration: ClinicalTrials.gov: NCT02593219.

  • Word count: 205

  • This abstract was created for illustrative purposes only. It is based on a fictitious study.
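For readers who want to check such figures, the predictive values and confidence intervals in the example above can be reproduced from the reported counts. The Python sketch below is illustrative only; the example does not state which confidence interval method was used, and exact (Clopper-Pearson) intervals are assumed here because they match the reported values.

from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    # Exact (Clopper-Pearson) confidence interval for a binomial proportion k/n
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# Counts from the fictitious example: 273 negative D-dimer results with 3 missed
# cases, and 227 positive results of which 47 were true positives (50 - 3)
tn, fn = 270, 3
tp, fp = 47, 180

npv_lo, npv_hi = clopper_pearson(tn, tn + fn)
ppv_lo, ppv_hi = clopper_pearson(tp, tp + fp)
print(f"NPV {tn / (tn + fn):.1%} ({npv_lo:.1%} to {npv_hi:.1%})")  # 98.9% (96.8% to 99.8%)
print(f"PPV {tp / (tp + fp):.1%} ({ppv_lo:.1%} to {ppv_hi:.1%})")  # 20.7% (15.6% to 26.6%)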

Conclusions

We acknowledge that an important share of the burden of improving reporting and reducing waste in research currently falls on journal editors and peer reviewers, as they play a major role in the final stages of the publication process. Authors should also take action, as should other stakeholders, such as funders and academic institutions.22 We need to convince scientific institutions and universities that complete reporting is an indispensable element of good research practice and should be taught as such in academic training programmes, for example as part of scientific writing courses.21 23

We believe that STARD for Abstracts can help to improve the quality of reporting of diagnostic accuracy studies through the inclusion of essential study information in every abstract, thereby increasing the value of such studies to the clinical and research community.

Footnotes

  • STARD Group collaborators: Todd Alonzo, Douglas G Altman, Augusto Azuara-Blanco, Lucas Bachmann, Jeffrey Blume, Patrick M Bossuyt, Isabelle Boutron, David Bruns, Harry Büller, Frank Buntinx, Sarah Byron, Stephanie Chang, Jérémie F Cohen, Richelle Cooper, Joris de Groot, Henrica CW de Vet, Jon Deeks, Nandini Dendukuri, Jac Dinnes, Kenneth Fleming, Constantine A Gatsonis, Paul P Glasziou, Robert M Golub, Gordon Guyatt, Carl Heneghan, Jørgen Hilden, Lotty Hooft, Rita Horvath, Myriam Hunink, Chris Hyde, John Ioannidis, Les Irwig, Holly Janes, Jos Kleijnen, André Knottnerus, Daniël A Korevaar, Herbert Y Kressel, Stefan Lange, Mariska Leeflang, Jeroen G Lijmer, Sally Lord, Blanca Lumbreras, Petra Macaskill, Erik Magid, Susan Mallett, Matthew McInnes, Barbara McNeil, Matthew McQueen, David Moher, Karel Moons, Katie Morris, Reem Mustafa, Nancy Obuchowski, Eleanor Ochodo, Andrew Onderdonk, John Overbeke, Nitika Pai, Rosanna Peeling, Margaret Pepe, Steffen Petersen, Christopher Price, Philippe Ravaud, Johannes B Reitsma, Drummond Rennie, Nader Rifai, Anne Rutjes, Holger Schunemann, David Simel, Iveta Simera, Nynke Smidt, Ewout Steyerberg, Sharon Straus, William Summerskill, Yemisi Takwoingi, Matthew Thompson, Ann van den Bruel, Hans van Maanen, Andrew Vickers, Gianni Virgili, Stephen Walter, Wim Weber, Marie Westwood, Penny Whiting, Nancy Wilczynski, and Andreas Ziegler.

    STARD for Abstracts is released under a Creative Commons license (CC BY-NC license http://creativecommons.org/licenses/by-nc/4.0) that allows others to use and distribute the list if unmodified and if they acknowledge the source. All material related to STARD for Abstracts is available at the EQUATOR website (www.equator-network.org/reporting-guidelines/stard/).

    The protocol of this study was published on the EQUATOR website (www.equator-network.org/wp-content/uploads/2009/02/STARD-for-Abstracts-protocol.pdf).

  • Contributors: PMB had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis; he is the guarantor. JFC analysed the web based survey, drafted the STARD for Abstracts list, and wrote the first draft of the manuscript. PMB, JFC, CAG, DAK, LH, and JBR coordinated the development of STARD for Abstracts, wrote the protocol, designed the web based survey, and finalised the list (executive committee). All authors, who are members of the STARD Steering Committee, revised the manuscript for important intellectual content and approved the final version submitted for publication. Members of the STARD Group participated in the web based survey. PMB supervised the development of STARD for Abstracts.

  • Funding: No external funding.

  • Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

  • Ethical approval: Not required.

  • Data sharing: No additional data available.

  • Transparency: The manuscript’s guarantor (PMB) affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned have been explained.
