Identifying Key Components for an Effective Case Report Poster: An Observational Study

Residents demonstrate scholarly activity by presenting posters at academic meetings. Although recommendations from national organizations are available, there is little evidence identifying which poster components are most important.

OBJECTIVE

To develop and test an evaluation tool to measure the quality of case report posters and identify the specific components most in need of improvement.

DESIGN

Faculty evaluators reviewed case report posters and provided on-site feedback to presenters at poster sessions of four annual academic general internal medicine meetings. A newly developed ten-item evaluation form measured poster quality for specific components of content, discussion, and format (5-point Likert scale, 1 = lowest, 5 = highest).

MAIN OUTCOME MEASURES

Evaluation tool performance, including Cronbach's alpha and inter-rater reliability; overall poster scores; differences across meetings and evaluators; and the specific poster components most in need of improvement.

RESULTS

A total of 45 faculty evaluators completed reviews of 347 of 432 scheduled posters (80.3%) across the four meetings. The evaluation tool had a Cronbach's alpha of 0.84, and 86% of paired ratings were identical or within one unit on the 5-point scale. Scores from the national meeting were higher than those from the regional meetings and did not differ by evaluator rank, academic role, or years on faculty. The components with the lowest scores were not clearly stating the learning objectives, not linking the conclusions to the learning objectives, and having an inappropriate amount of words.

CONCLUSIONS

Our evaluation tool provides empirical data to guide trainees as they prepare posters for presentation, which may improve poster quality and enhance scholarly productivity.

Key Words: case report, poster, evaluation tool, academic meetings

BACKGROUND

The Accreditation Council for Graduate Medical Education (ACGME) requires residents to participate in scholarly activities and endorses case reports as one such venue.1,2 Case reports, or clinical vignettes, are drawn from everyday patient encounters, do not require financial resources or formal research training, and are therefore well suited for resident scholarly activity.3 Case reports add value to medical education by reflecting contemporary clinical practice and real-time medical decision making.4,5 Many journals publish case reports,4,6,7 and a case report may be one of the first manuscripts a resident will write4,8 or present in poster format.9

Trainees and junior faculty can begin their academic careers with a poster presentation.8,9 Beyond showcasing scholarly work, poster sessions give trainees a place to network, develop mentoring relationships, and interact with potential future employers. Poster sessions add to meeting content by disseminating early research findings,10 sharing interesting clinical cases, and fostering collaboration within the scientific community.10–12 Across specialties, poster presentations have been shown to lead to subsequent manuscript publication.10–17

When faced with the task of preparing a poster, trainees seek guidance within their institution, from national organizations,18–20 from published recommendations,12,21–23 or on the Internet. Although the ACGME requires institutions to support trainees' scholarly activities,2 not all institutions are able to provide adequate mentoring.9 Recommendations from national organizations and published references detail multiple components of a standard poster and are often presented as "how-to" tips or general guides,20–24 but there is currently no evidence to support these recommendations and no published data identifying which components of a poster are most important. As a result, the quality of posters presented at academic medical meetings is highly variable.

We implemented a mentorship program designed to give trainees feedback on their case report posters at academic meetings. Through this program, our study aims were to (1) develop and test an evaluation tool to measure the quality of the posters and (2) identify the specific components of the posters most in need of improvement.

METHODS

Evaluation Tool

We developed a one-page standardized form to evaluate the poster components. Two authors (LLW, AP) developed the initial form, incorporating elements from the peer-review submission process of the Society of General Internal Medicine (SGIM) vignette committee and the American College of Physicians' poster judging criteria.18 The form was refined by members of the 2006 Southern Regional SGIM planning committee through an iterative process. The final version had ten items, rated on a five-point Likert scale (1 = lowest rating, 5 = highest rating) and grouped into three domains: (1) content (clear learning objectives, case description, relevant content area); (2) conclusions (tied to objectives, supported by content, increased understanding or improved diagnosis/treatment); and (3) format (clarity, appropriate amount of words, effective use of color, effective use of pictures and graphics) (Appendix). We calculated a mean score for each poster based on the ten items.
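To illustrate the scoring scheme, the sketch below (not the authors' software) represents the ten-item form as a simple data structure and computes a poster's mean score; the item names are paraphrases of the components listed above, not the form's exact wording.

```python
# Illustrative only: a hypothetical encoding of the ten-item evaluation form.
# Item names paraphrase the three domains above; they are not the form's exact wording.

ITEMS = [
    # Content domain
    "clear_learning_objectives", "case_description", "relevant_content_area",
    # Conclusions domain
    "tied_to_objectives", "supported_by_content", "improved_understanding",
    # Format domain
    "clarity", "appropriate_amount_of_words",
    "effective_use_of_color", "effective_pictures_and_graphics",
]

def mean_poster_score(ratings: dict[str, int]) -> float:
    """Mean of the ten 5-point Likert ratings (1 = lowest, 5 = highest)."""
    missing = [item for item in ITEMS if item not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    for item in ITEMS:
        if not 1 <= ratings[item] <= 5:
            raise ValueError(f"{item} must be rated 1-5, got {ratings[item]}")
    return sum(ratings[item] for item in ITEMS) / len(ITEMS)

# Example: a poster rated 4 on every item except word count
example = {item: 4 for item in ITEMS}
example["appropriate_amount_of_words"] = 2
print(mean_poster_score(example))  # 3.8
```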

Evaluation and Feedback Process

We provided feedback on all posters presented at four case report poster sessions. The sessions took place at academic general internal medicine annual meetings, three regional (Southern Society of General Internal Medicine; 2006–2008) and one national (Society of General Internal Medicine; 2007), and each was 90 minutes in duration. The case reports selected for poster presentation were determined by each meeting's requirements and peer-review process.

Faculty evaluators were clinician educators from academic medical centers. We invited a convenience sample of evaluators from the meetings' planning committees and reviewers from the initial peer-review process. Each evaluator reviewed 5–10 posters during the poster session and gave immediate feedback to the presenter; the feedback was both verbal (if the presenter was available) and written, via the completed feedback form. At the end of each poster session, we collected a copy of the completed form without poster identification information. We also collected anonymized evaluator information, including faculty rank, primary academic role, and number of years on faculty. We did not provide any specific training to the evaluators, and poster assignment was random. The University of Alabama at Birmingham Institutional Review Board approved the study.

ANALYSIS

We used standard descriptive statistics and calculated Cronbach's alpha to determine the internal consistency of the evaluation tool. We assessed inter-rater reliability among faculty evaluators using 46 posters from one meeting (Southern Society of General Internal Medicine, 2008) and Spearman's rho. We used the Kruskal-Wallis test to compare ratings by evaluator characteristics (faculty rank, primary academic role, years on faculty) and the Mann-Whitney test to compare ratings by meeting location (regional or national); we considered p < 0.05 statistically significant.
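A minimal sketch of these analyses, assuming a posters-by-items matrix of Likert ratings and using standard NumPy/SciPy routines, is shown below; the data and groupings are invented for illustration, and this is not the study's original code.

```python
# Illustrative sketch of the analyses described above (not the study's code);
# the ratings matrix is randomly generated and the groupings are invented.
import numpy as np
from scipy import stats

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a posters-by-items matrix of Likert ratings."""
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    k = item_scores.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(46, 10))          # 46 posters x 10 items (toy data)
print("Cronbach's alpha:", round(cronbach_alpha(ratings), 2))

# Inter-rater reliability: Spearman's rho between two evaluators' mean poster scores
scores_a = ratings.mean(axis=1)
scores_b = np.clip(scores_a + rng.normal(0, 0.5, size=46), 1, 5)
rho, _ = stats.spearmanr(scores_a, scores_b)
print("Spearman's rho:", round(rho, 2))

# Ratings compared across evaluator groups (Kruskal-Wallis) and by meeting
# location, regional vs. national (Mann-Whitney); p < 0.05 taken as significant
junior, midcareer, senior = scores_a[:15], scores_a[15:30], scores_a[30:]
print("Kruskal-Wallis p:", round(stats.kruskal(junior, midcareer, senior).pvalue, 3))
regional, national = scores_a[:30], scores_a[30:]
print("Mann-Whitney p:", round(stats.mannwhitneyu(regional, national).pvalue, 3))
```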

RESULTS

A total of 45 evaluators representing 20 medical institutions (United States and Canada) completed reviews of 347 of the 432 scheduled posters (80.3%) at four meetings (Southern SGIM 2006 = 63, 2007 = 84, 2008 = 123; National SGIM 2007 = 77); the remaining posters were not reviewed because of lack of evaluator time or absence of the poster. The 432 scheduled posters came from 70 institutions, 24 states, one province, and two countries (United States and Canada).

Evaluation Tool: Overall Scores and Tool Performance

The Cronbach's alpha for the ten-item tool was 0.84. Inter-rater reliability was modest, with 86% of paired ratings identical or within one unit on the 5-point Likert scale. Median poster scores were high overall. Scores from the national meeting were higher than those from the regional meetings, and scores did not differ by evaluator rank, primary academic role, or number of years on faculty.

Poster Components Most Needing Improvement

The overall score for each component is shown in Figure 1. The components with the lowest scores were not clearly stating the learning objectives (content), not linking the conclusions to the learning objectives (conclusions), and having an inappropriate amount of words (format). All poster components correlated with the overall poster score, with conclusions tied to learning objectives having the highest correlation coefficient (0.79).

Figure 1. Poster component scores (5-point Likert scale; 1 = lowest rating, 5 = highest rating).

DISCUSSION

Many organizations and articles provide how-to tips to guide trainees preparing a poster for presentation.18–24 However, these are limited by a lack of evidence and validation. We developed an evaluation tool to measure the quality of case report posters that, when tested in a widely representative sample, demonstrated high internal consistency across evaluators. Furthermore, we provide empirical evidence on the specific poster components most in need of improvement: clearly stating learning objectives, tying the conclusions to the learning objectives, and using an appropriate amount of words.

In our literature review, we found few published tools that objectively evaluate scientific posters. In the nursing literature, Bushy published a 30-item tool, the Quality Assurance (QA)-Poster Evaluation Tool, in 1990, which was later modified to a 10-item tool.25,26 It scores posters on overall appearance and content but has not been formally evaluated for validity. In the neurology literature, Smith et al. evaluated 31 scientific posters at one national neurology meeting in an effort to generate poster assessment guidelines for "best poster" prizes. They found high correlation among neurologist evaluators (0.75) for first impressions of a poster, and that posters with high scientific merit but unattractive appearance risked being overlooked.27 We found only one evaluation tool specific to case report posters, used by the American College of Physicians for judging posters at meetings.28 It includes items related to case significance, presentation, methods, visual impact, and the presenter interview, but it does not include specific checklist items and has not been validated.

The evaluation tool we developed can be used at academic meetings to score case report posters. First, the tool had face validity: we used criteria from national academic organizations and refined the tool through an iterative process with academic clinician educators. Second, the Cronbach's alpha of 0.84 demonstrates high internal consistency. Third, the evaluation tool had discriminant validity: scores from the national meeting were higher than those from the regional meetings. The national meeting accepted approximately 50% of case report submissions, whereas the regional meetings accepted all submissions; with a more competitive selection process, national scores would be expected to be higher than regional scores. Finally, the tool performed similarly across faculty evaluators. Scores did not differ by number of years on faculty, evaluator rank, or academic role, and although the inter-rater reliability was modest, 86% of ratings were identical or within one unit on the 5-point Likert scale. In light of these findings, we believe our evaluation tool may be valuable to other educators, as it yields reliable and valid data and provides specific guidance for a successful poster.
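As an illustration of how a within-one-unit agreement figure can be computed, here is a minimal sketch assuming paired ratings from two evaluators of the same posters are available; the numbers are invented and this is not the study's code.

```python
# Minimal sketch (invented data): proportion of paired ratings that are
# identical or differ by at most one unit on the 5-point Likert scale.
import numpy as np

def within_one_unit(rater_a, rater_b) -> float:
    """Fraction of paired ratings with an absolute difference of 0 or 1."""
    diffs = np.abs(np.asarray(rater_a) - np.asarray(rater_b))
    return float((diffs <= 1).mean())

a = [5, 4, 3, 4, 2, 5, 3, 4, 4, 3]
b = [4, 4, 5, 3, 2, 5, 3, 2, 4, 3]
print(f"{within_one_unit(a, b):.0%} of ratings identical or within one unit")
```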

Mentoring trainees in poster presentation is important, and the scholarly significance of presenting a poster at an academic meeting should not be underestimated. Several studies demonstrate the scholarly importance of research abstracts initially presented as posters. Across medical specialties, 34% to 77% of posters presented at meetings were subsequently published in peer-reviewed journals.10,14–17 In one study, 45% of abstracts presented over 2 years at a national pediatric meeting were subsequently published in peer-reviewed journals.10 Abstracts selected for poster presentation have higher rates of publication than those not accepted for presentation (40% vs. 22%), and posters have rates of publication,10,17 time to publication, and journal impact factor10 similar to those of oral presentations. Thus, a poster appears to be an important step in the dissemination of scholarly work.

A case report, whether presented as a poster or published, serves as a venue for trainees to participate in scholarly activity.1,4,8,9 In a national survey of internal medicine residents,3 53% of respondents had presented a case report abstract. Compared with residents presenting research abstracts, those who presented a case report were more likely to have initiated the project on their own and less likely to have had a mentor. Sixty-eight percent of respondents planned to submit their project for publication. These findings highlight the need for dedicated faculty to mentor trainees through the process of writing a case report and creating an effective poster presentation.1,2 Our study can inform faculty mentors who may feel too inexperienced to assist trainees with a case report poster.

Our study has some limitations. We evaluated only case report posters at four general internal medicine meetings. Our project was designed to provide educational mentorship, and the feedback was therefore not anonymous; this may have tempered the critiques and contributed to the overall high median scores. We evaluated each poster independently of the presenter and did not attempt to evaluate the presenter's knowledge, presentation skills, or interaction with the evaluator, yet the presenter's skills may have influenced the scores. Despite these limitations and the overall high median scores, we were still able to identify specific components scoring well below the 25th percentile.

In conclusion, our study validates an evaluation tool that effectively identified the poster components most in need of improvement. The tool can also be used to judge case report posters for awards or rankings. Our results can guide mentors and trainees in optimizing poster presentations, which may improve poster quality and enhance scholarly productivity. Case report posters should clearly state the learning objectives, tie the conclusions to the learning objectives, and use an appropriate amount of words. Future work should adapt and further validate our evaluation form for broader settings and for scientific posters. A successful poster may not only advance the educational goals of the presentation but also enhance trainees' personal and professional goals.

Acknowledgements

The authors have no potential conflicts of interest to report. Dr. Lisa L. Willett had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. The authors wish to thank the members of the 2006 Southern Regional SGIM planning committee and the faculty evaluators who provided feedback at the meetings.

Funding None.

Conflict of Interest None disclosed.