Complexity in the evaluation of medical education - how would you evaluate this one?
I am really enjoying putting together the 'current topics and controversies' week at the end of our module on 'evaluation' in the MSc/PgDip Medical Education programme at the University of South Wales - though I am now well past the deadline.
Just wrote this imaginary example for discussion, inspired by some keywords in a recent email from our newly appointed medical director at my Trust. It is part of a series of questions on evaluating complex medical education interventions. How would you evaluate it?
"One of your high profile education programmes you agree is probably complex. It is a clinical leadership programme designed to support staff to become the next generation of leaders. Learners are self-selected but supported by their line managers. They undertake a 6 month programme of e-learning, mentoring, seminars, written assignments shared with their peers, conferences, and a quality improvement project that feeds into the region’s ‘Examples of Best Practice’ website. Learners are expected to work closely with their clinical teams and “learn by doing”. They form a Community of Practice of clinical leaders helping to implement practical, short-term, strategic targets for the organisation and sharing their passion for improving patient care.
Of the following evaluation models, which one best addresses this complexity?
A. A model that focuses on programme improvement rather than proving something about the programme.
B. Intact-group design (comparing pre-existing groups of learners rather than randomly assigning them)
C. Kirkpatrick’s four-level evaluation model (evaluating reaction, learning, behaviour, and results)
D. The logic model (measuring the links between inputs, activities, outputs, and outcomes)
E. Time-series experimental design (pre- and post-tests of knowledge, skills, and attitudes)"
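Out of interest, here is what option E's pre-/post-testing might look like in practice. This is a minimal illustrative sketch only (Python with SciPy, invented scores, not part of the original question) of a paired analysis of knowledge-test results:

```python
# Illustrative only: hypothetical pre/post knowledge-test scores (%)
# for the kind of design option E describes. All numbers are invented.
from scipy import stats

pre = [52, 61, 48, 70, 55, 63, 58, 66]   # scores before the programme
post = [68, 72, 60, 81, 64, 75, 70, 79]  # scores after the programme

# Paired t-test: each learner acts as their own control, which is
# essentially what a pre-/post-test design measures.
t_stat, p_value = stats.ttest_rel(post, pre)
mean_gain = sum(post) / len(post) - sum(pre) / len(pre)
print(f"mean gain = {mean_gain:.1f} percentage points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Even a convincing gain in test scores would tell us nothing about the Community of Practice, the quality improvement projects, or changed behaviour in clinical teams - which is exactly the complexity the question is probing.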