PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation

Scoping reviews can be conducted to meet various objectives. They may examine the extent (that is, size), range (variety), and nature (characteristics) of the evidence on a topic or question; determine the value of undertaking a systematic review; summarize findings from a body of knowledge that is heterogeneous in methods or discipline; or identify gaps in the literature to aid the planning and commissioning of future research (1, 2). A recent scoping review by members of our team suggested that although the number of scoping reviews in the literature is increasing steadily, methodological and reporting quality needs to improve in order to facilitate complete and transparent reporting (1). Results from a survey on scoping review terminology, definitions, and methods showed a lack of consensus on how to conduct and report scoping reviews (3). The Joanna Briggs Institute (JBI) published a guidance document for the conduct of scoping reviews (4) (updated in 2017 [5]) based on earlier work by Arksey and O'Malley (6) and Levac and colleagues (7). However, a reporting guideline for scoping reviews currently does not exist.

Reporting guidelines outline a minimum set of items to include in research reports and have been shown to increase methodological transparency and uptake of research findings (8, 9). Although a reporting guideline exists for systematic reviews, the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) statement (10), scoping reviews serve a different purpose (11). Systematic reviews are useful for answering clearly defined questions (for example, Does this intervention improve specified outcomes when compared with a given comparator in this population?), whereas scoping reviews are useful for answering much broader questions (such as What is the nature of the evidence for this intervention? or What is known about this concept?). Given the difference in objectives, and therefore in the methodological approach (such as presence vs. absence of a risk-of-bias assessment or meta-analysis), scoping reviews should have different essential reporting items from systematic reviews. Consequently, some PRISMA items may not be appropriate, whereas other important considerations may be missing (12-14). We therefore decided that a PRISMA extension for scoping reviews was needed to provide reporting guidance for this specific type of knowledge synthesis. This extension is also intended to apply to evidence maps (15, 16), which share similarities with scoping reviews and involve a systematic search of a body of literature to identify knowledge gaps, with a visual representation of results (such as a figure or graph).

Methods

The PRISMA-ScR (PRISMA extension for Scoping Reviews) was developed according to published guidance by the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network for the development of reporting guidelines (9). The St. Michael's Hospital Research Ethics Board granted research ethics approval for this study on 15 August 2016.

Protocol, Advisory Board, and Expert Panel

Our protocol was drafted by the research team and revised as necessary by the advisory board before being listed as a reporting guideline on the EQUATOR (17) and PRISMA (18) Web sites. The research team included 2 leads (A.C.T. and S.E.S.) and 2 research coordinators (E.L. and W.Z.), none of whom participated in the scoring exercises, and a 4-member advisory board (K.K.O., H.C., D.L., and D.M.) with extensive experience doing scoping reviews or developing reporting guidelines.
We aimed to form an expert panel of approximately 30 members that would be representative of different geographic regions, stakeholder types, and research experiences, including persons with experience in the conduct, dissemination, or uptake of scoping reviews.

Survey Development and Round 1 of Delphi

The initial step in developing the Delphi survey via Qualtrics (an online survey platform) (19) involved identifying potential modifications to the original 27-item PRISMA checklist. The modifications were based on a research program carried out by members of the advisory board to better understand scoping review practices (1, 3, 20) and included a broader research question and literature search strategy; an optional risk-of-bias assessment and consultation exercise (whereby relevant stakeholders contribute to the work, as described by Arksey and O'Malley [6]); and a qualitative analysis. For round 1 of scoring, we prepared a draft of the PRISMA-ScR (Supplement) and asked expert panel members to rate their agreement with each of the proposed reporting items on a 7-point Likert scale (1 = entirely disagree, 2 = mostly disagree, 3 = somewhat disagree, 4 = neutral, 5 = somewhat agree, 6 = mostly agree, and 7 = entirely agree). Each survey item included an optional text box where respondents could provide comments. The research team calibrated the survey for content and clarity before administering it and sent biweekly reminders to optimize participation.

Supplement. PRISMA-ScR Round 1 Survey (With Information Sheet)

Survey Analysis

To be conservative, a threshold of 85% agreement was established a priori for each of the reporting items to indicate consensus among the expert panel. This rule required that at least 85% of the panel mostly or entirely agree (values of 6 or 7 on the Likert scale) with the inclusion of the item in the PRISMA-ScR; if agreement was less than 85%, the item was considered discrepant. This standard was used for all 3 rounds of scoring to inform the final checklist. For ease and consistency with how the survey questions were worded, we did not include a provision for agreement on exclusion (that is, 85% of answers corresponding to values of 1 or 2 on the Likert scale). All comments were summarized to help explain the scores and identify any issues. For the analysis, the results were stratified by group (in-person meeting vs. online, hereafter e-Delphi) because discrepant items could differ between groups.
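To make the analysis rule concrete, the following is a minimal sketch (not taken from the study) of how the a priori 85% consensus rule and the per-item median and interquartile range later shared with the panel could be computed from round 1 ratings; the checklist item names and ratings shown are hypothetical.

```python
# Minimal sketch (not from the study): applying the a priori 85% consensus
# rule to hypothetical round 1 Likert ratings and computing the per-item
# median and interquartile range that formed part of the feedback to the panel.
import statistics

# Hypothetical ratings on the 7-point scale (1 = entirely disagree ...
# 7 = entirely agree), one list per proposed checklist item.
ratings_by_item = {
    "Title": [7, 7, 6, 7, 6, 7, 7, 6, 7, 7],
    "Protocol and registration": [7, 5, 4, 6, 3, 7, 6, 5, 6, 4],
}

CONSENSUS_THRESHOLD = 0.85  # at least 85% of ratings must be 6 or 7

for item, ratings in ratings_by_item.items():
    # Proportion of panelists who mostly or entirely agreed (values of 6 or 7).
    agreement = sum(1 for r in ratings if r >= 6) / len(ratings)
    status = "consensus" if agreement >= CONSENSUS_THRESHOLD else "discrepant"

    # Summary statistics fed back to panel members before rescoring.
    q1, median, q3 = statistics.quantiles(ratings, n=4)
    print(f"{item}: {agreement:.0%} agreement ({status}); "
          f"median {median:.1f}, IQR {q1:.1f}-{q3:.1f}")
```

In this sketch, the second item, with only 50% of ratings at 6 or 7, falls below the threshold and would be flagged as discrepant for discussion in round 2.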
In-Person Group (Round 2 of Delphi)

The Chatham House rule (21) was established at the beginning of the meeting, whereby participants were free to use the information shared but were not permitted to reveal the identity or affiliation of the speaker. Expert panel members were given their individual results; the overall group distribution, median, and interquartile range; a summary of the JBI methodological guidance (4); and preliminary feedback from the e-Delphi group. These data were used to generate and inform the discussion about each discrepant item from round 1. Two researchers (A.C.T. and S.E.S.) facilitated the discussion using a modified nominal group technique (22) to reach consensus. Panel members were subsequently asked to rescore the discrepant items using sli.do (23), a live audience-response system, in a format that resembled the round 1 survey. For items that failed to meet the threshold for consensus, working groups were assembled.

The meeting was audio-recorded and transcribed using TranscribeMe (24), and 3 note-takers independently documented the main discussion points. The transcript was annotated to complement a master summary of the discussion points, which was compiled from the 3 note-takers' files.

E-Delphi Group (Round 2 of Delphi)

Those who could not attend the in-person meeting participated via an online discussion exercise using Conceptboard (25), a visual collaboration platform that allows users to provide feedback on whiteboards in real time. The discrepant items from round 1 were presented on a single whiteboard, and questions (for example, After reviewing your survey results with respect to this item, please share why you rated this item the way you did) were assigned to participants as tasks to facilitate the discussion. E-Delphi panel members received the same materials as in-person participants and were encouraged to respond to others' comments and interact through a chat feature. The second round of scoring was done in Qualtrics using a format similar to that of round 1. A summary of the Conceptboard discussion, the annotated meeting transcript, and the master summary document were shared so that participants could learn about the perspectives of the in-person group before rescoring.

Working Groups and Round 3 of Delphi

To enable panel-wide dialogue and refine the checklist items before the final round of scoring, working groups were created that collaborated by teleconference and e-mail. Their task was to discuss the discrepant items in terms of the key issues and considerations (relating to both concepts and wording) that had been raised in earlier stages across both groups. To harmonize the data from the 2 groups, a third round of scoring was administered using Qualtrics (19). In this step, suggested modifications (in terms of both concepts and wording) from all previous stages were incorporated into the items that had failed to reach consensus in the first 2 rounds across both groups, and the full panel scored this updated list.

Interactive Workshop (Testing)

A workshop led by the lead investigator (A.C.T.) and facilitated by members of the advisory board and expert panel (S.E.S., C.M.G., C.G., T.H., M.T.M., and M.D.J.P.) was held as part of the Global Evidence Summit in Cape Town, South Africa, in September 2017. Participants (researchers, scientists, policymakers, managers, and students) tested the checklist by applying the PRISMA-ScR to a scoping review on a health-related topic (26).

Role of the Funding Source

This work was supported by a grant from the Canadian Institutes of Health Research. The funding source had no role in designing the study; collecting, analyzing, or interpreting the data; writing the manuscript; or deciding to submit it for publication.

Results

Expert Panel

A total of 37 persons were invited to participate, of whom 31 completed round 1 and 24 completed all 3 rounds of scoring. The Figure presents results of the modified Delphi, in
