IADR Abstract Archives

Students' Evaluation of Systematic Reviews Using the AMSTAR Tool

Objectives: It has been proposed that dental education should empower graduates to adapt continuously to evolving evidence. In 2007, the development of an assessment tool for the quality of systematic reviews (AMSTAR) was reported, consisting of a validated 11-item questionnaire.
The purpose of this abstract is to describe how junior dental students apply the AMSTAR tool to evaluate systematic reviews.
Methods: Junior students received a lecture in which a systematic review was appraised with AMSTAR. Subsequently, students had to independently evaluate the quality of an unseen article. The time allocated for the exam was 50 minutes.
Results: Seventy-four junior students participated, and 100% answered all AMSTAR questions. The frequency of answers is presented in Table 1. The mean number of correct answers was 9 (SD = 1.047, Min = 6, Max = 10) (Fig 1).
Spearman’s nonparametric correlation analysis revealed a statistically significant correlation between questions 7 and 8. Cross-tabulation of these responses (Fisher exact test p<0.001, Table 2) shows that answers to these questions are related. This result was expected because the questions are related by design (Table 3).
Conclusions: The pattern of answers to Q9 suggests that the construct of the question (Table 3) may be questionable, because students face multiple levels of decision: first, if results were combined meta-analytically, one must decide whether the combining methodology was appropriate (possible answers: Yes or No); second, if the systematic review did not combine results, the correct answer is “Not Applicable”.
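The two-level decision described above can be sketched as a small function; the function and parameter names are illustrative only and are not part of AMSTAR or the abstract:

```python
def score_q9(combined_results, combining_method_appropriate):
    """Illustrative sketch of the two-level decision for AMSTAR Q9.

    combined_results: whether the review pooled findings meta-analytically.
    combining_method_appropriate: whether the pooling methodology was sound
    (only consulted when pooling actually occurred).
    """
    if not combined_results:
        # No meta-analytic pooling performed, so Q9 cannot be scored Yes/No.
        return "Not Applicable"
    return "Yes" if combining_method_appropriate else "No"
```

The sketch makes the source of student error concrete: a reviewer must first notice whether pooling occurred at all before judging its appropriateness.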
AMSTAR is a tool that can potentially help students develop competency in the appraisal of systematic reviews. It is important, however, to provide extensive training in how AMSTAR questions should be applied during literature evaluation. Special emphasis should be directed toward determining and evaluating the methodologies used to combine results in meta-analyses.
2015 IADR/AADR/CADR General Session (Boston, Massachusetts)
0946
Education Research
  • Teich, Sorin (Case Western Reserve University School of Dental Medicine, Cleveland, Ohio, United States)
  • Heima, Masahiro (Case Western Reserve University School of Dental Medicine, Cleveland, Ohio, United States)
  • Lang, Lisa (Case Western Reserve University School of Dental Medicine, Cleveland, Ohio, United States)
    Poster Session
    Student Perceptions, Performance Measures and Interprofessional Stakeholders
    Thursday, 03/12/2015, 03:30 PM - 04:45 PM
    Frequency of correct/incorrect answers
    Item | Right: Frequency, Percent | Wrong: Frequency, Percent
    1. Was an “a priori” design provided? 74 100.00% 0 0.00%
    2. Was there duplicate study selection and data extraction? 68 91.89% 6 8.11%
    3. Was a comprehensive literature search performed? 64 86.49% 10 13.51%
    4. Was the status of publication (such as gray literature) used as an inclusion criterion? 46 62.16% 28 37.84%
    5. Was a list of studies (included and excluded) provided? 71 95.95% 3 4.05%
    6. Were the characteristics of the included studies provided? 71 95.95% 3 4.05%
    7. Was the scientific quality of the included studies assessed and documented? 63 85.14% 11 14.86%
    8. Was the scientific quality of the included studies used appropriately in formulating conclusions? 61 82.43% 13 17.57%
    9. Were the methods used to combine the findings of studies appropriate? 2 2.70% 72 97.30%
    10. Was the likelihood of publication bias assessed? 73 98.65% 1 1.35%
    11. Was the conflict of interest stated? 73 98.65% 1 1.35%
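As a quick consistency check, the reported mean of 9 correct answers per student can be recovered from the "Right" column of the table above, since summing correct answers across all items and dividing by the 74 students yields the mean score. A minimal sketch:

```python
# "Right" frequencies for Q1-Q11, taken from Table 1 above.
right_counts = [74, 68, 64, 46, 71, 71, 63, 61, 2, 73, 73]
n_students = 74

# Total correct answers across all students and items, divided by students,
# gives the mean number of correct answers per student.
mean_correct = sum(right_counts) / n_students
print(mean_correct)  # 9.0, matching the reported mean
```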

    Cross-tabulation of answers to AMSTAR Q7 and Q8 (row percentages)
                 Q8 wrong      Q8 right      Total
    Q7 wrong     10 (90.90%)    1 (9.10%)    11 (100.00%)
    Q7 right      3 (4.80%)    60 (95.20%)   63 (100.00%)
    Total        13 (17.60%)   61 (82.40%)   74 (100.00%)
    Fisher's Exact Test p<0.001
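The reported p-value can be reproduced from the counts above with a two-sided Fisher exact test, which sums all hypergeometric table probabilities no larger than the observed one. A minimal sketch using only the standard library (the helper name is illustrative, not from the abstract):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    r1, c1 = a + b, a + c          # first row and first column margins
    denom = comb(n, c1)

    def prob(x):
        # Hypergeometric probability of x in the top-left cell,
        # given the fixed table margins.
        return comb(r1, x) * comb(n - r1, c1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    # Sum every table at least as extreme (no more probable) than observed.
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Counts from the Q7/Q8 cross-tabulation above.
p = fisher_exact_two_sided(10, 1, 3, 60)
print(p < 0.001)  # True, matching the reported p<0.001
```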
    AMSTAR questions 7-9
    AMSTAR Question
    Q7. Was the scientific quality of the included studies assessed and documented? “A priori” methods of assessment should be provided (for example, for effectiveness studies if the author(s) chose to include only randomized, double-blind, placebo-controlled studies, or allocation concealment as inclusion criteria); for other types of studies alternative items will be relevant.
    Note: Can include use of a quality scoring tool or checklist, e.g., Jadad scale, risk of bias, sensitivity analysis, etc., or a description of quality items, with some kind of result for EACH study (“low” or “high” is fine, as long as it is clear which studies scored “low” and which scored “high”; a summary score/range for all studies is not acceptable).
    Q8. Was the scientific quality of the included studies used appropriately in formulating conclusions? The results of the methodological rigor and scientific quality should be considered in the analysis and the conclusions of the review, and explicitly stated in formulating recommendations.
    Note: Might say something such as “the results should be interpreted with caution due to poor quality of included studies.” Cannot score “yes” for this question if scored “no” for question 7.
    Q9. Were the methods used to combine the findings of studies appropriate? For the pooled results, a test should be done to ensure the studies were combinable, to assess their homogeneity (that is, χ² test for homogeneity).
    Note: Indicate “yes” if they mention or describe heterogeneity, i.e., if they explain that they cannot pool because of heterogeneity/variability between interventions.