Original Research

Are medical student results affected by allocation to different sites in a dispersed rural medical school?

AUTHORS

Tarun Sen Gupta
1 PhD, Chair of Medical Education

Richard B Hays
2 MD, Dean *

Gill Kelly
3 MS, Education officer

Petra G Buettner
4 PhD, Associate professor

CORRESPONDENCE

*Prof Richard B Hays

AFFILIATIONS

1, 3 School of Medicine & Dentistry, James Cook University, Townsville, Queensland, Australia

2 Faculty of Health Sciences & Medicine, Bond University, Queensland, Australia

4 School of Public Health, Tropical Medicine and Rehabilitation Sciences, James Cook University, Townsville, Queensland, Australia

PUBLISHED

17 January 2011 Volume 11 Issue 1

HISTORY

RECEIVED: 27 April 2010

REVISED: 18 October 2010

ACCEPTED: 17 January 2011

CITATION

Sen Gupta T, Hays RB, Kelly G, Buettner PG. Are medical student results affected by allocation to different sites in a dispersed rural medical school? Rural and Remote Health 2011; 11: 1511. https://doi.org/10.22605/RRH1511

© Tarun Sen Gupta, Richard B Hays, Gill Kelly, Petra G Buettner 2011 A licence to publish this material has been given to James Cook University, jcu.edu.au


ABSTRACT

Introduction: As medical education becomes more decentralised, and greater use is made of rural clinical schools and other dispersed sites, attention is being paid to the quality of learning experiences across these sites. This article explores the issue by analysing the performance data of 4 cohorts of students across the 4 sites of a dispersed clinical school model. The study is set in a newly established medical school in a regional area, using data from the second to fifth cohorts to graduate from this school.
Methods: Summative assessment results of 4 graduating cohorts were examined over the final 2 years of the course. Two analyses were conducted: an analysis of variance of mean scores in both years across the 4 sites; and an analysis, using the Kruskal-Wallis test, of the effect of moving to different clinical schools on the students' rank order of performance.
Results: Analysis revealed no significant difference in the mean scores of the students studying at each site, and no significant differences overall in the median ranking across the years. Some small changes in the relative ranking of students were noticed, and workplace-based assessment scores in the final year were higher than the examination-based scores in the previous year.
Conclusions: The choice of clinical school site for the final 2 years of an undergraduate rural medical school appears to have no effect on mean assessment scores and only a minor effect on the rank order of student scores. Workplace-based assessment produces higher scores but also has little effect on student rank order. Further studies are necessary to replicate these findings in other settings and demonstrate that student learning experiences in rural sites, while popular with students, translate into required learning outcomes, as measured by summative assessments.

Key words: assessment, clinical teaching, medical education, rural clinical school, undergraduate.


Introduction

Background

The use of multiple clinical sites is a strategy that many medical schools adopt in order to provide clinical resources to support student learning across large healthcare systems. Typically, a large urban medical school allocates students to clinical placements in one of several teaching hospital sites, often called clinical schools. These sites are generally not far apart, provide care for similar urban populations, and can be reached by students and faculty without much difficulty. It is relatively easy to apply common assessment practices at all sites, and the equivalence of student experience and outcomes is rarely challenged. However, with the recent expansion of medical education, and particularly its establishment in rural and remote regions, clinical schools are now more commonly separated by substantial distances or travel times, in facilities caring for populations with different characteristics, and even functioning within different healthcare systems1. The greater distances between sites often limit the movement of students and teachers between them. As a result of these differences, there is likely to be greater variation in student learning opportunities than in more traditional models.

Despite these differences, the underlying principle of the varied models of dispersed learning is the same: students are placed where they can access sufficient clinical learning resources to support the curriculum and facilitate achievement of identical learning outcomes, usually with common assessment approaches. Indeed, this is a requirement for accreditation of medical schools in Australia and New Zealand2. This poses an important question: are learning outcomes related to the clinical school site?

The literature provides little research evidence about this question. Two reported studies are noteworthy. The first found that students in community-immersed rural medical education in Australia obtained assessment results similar to those of students in an urban environment3. The other found that the results of students studying in a dispersed clinical school model in Canada appeared not to be related to site4. However, neither of these studies examined student performance in a much more dispersed clinical school structure, such as that of one new Australian rural medical school5, where 4 clinical school sites are separated by up to 1500 km. This article reports an analysis of assessment data for graduating students from this school to determine whether assessment outcomes were related to clinical school site.

The setting

The communities associated with this medical school are relatively small, with populations of less than 200 000. None of these individual centres provides sufficient clinical resources for the entire student cohort, so senior students are allocated to 4 clinical school environments (including hospitals and primary care practices) that are up to 2000 km from the main base. The allocation pattern evolved from the first to subsequent cohorts as student numbers increased and more dispersed sites were developed. While none of the 4 sites is based on a large urban teaching hospital, there is still variation in the size, capacity and activity of the 4 hospitals, with 3 offering varied elements of tertiary care and one only secondary care. All serve dispersed populations with somewhat different characteristics, including marked variations in the proportions of Indigenous and immigrant populations. The most distant site is in a different State and has a different healthcare system. Hence, students are likely to encounter a combination of similar and different learning opportunities at each of the 4 hospitals and their surrounding primary care practices, as has been found elsewhere6, and as is currently being investigated locally.

Students express preferences for their clinical school allocation for the last 2 years of the course. The penultimate year (Year 5) has workplace clinical assessment and an end-of-year battery of written papers and an objective structured clinical examination (OSCE). All students sit identical written examinations that are scored centrally, and are brought into the 2 larger hospitals for the OSCE, where trained examiners are randomly assigned, providing a combination of local and 'visiting' examiners. The final year has only workplace-based assessment. Given the variation in clinical site capacity, clinical experience and assessment locations, students and faculty have naturally wondered whether there is any impact on learning outcomes, with many students in the early cohorts believing that staying at the main base was likely to result in higher academic achievement.

Methods

Summative assessment results of the first 5 graduating cohorts were available for analysis, but the first cohort was excluded from the study because it was a smaller cohort taught predominantly at the 2 more central sites. Table 1 lists the assessment data for the second to fifth graduating cohorts. The effect of clinical site location on assessment results was examined through analysis of variance of mean scores in both Year 5 and Year 6. The effect of moving to different clinical schools on the rank order of student test performances was analysed by applying the Kruskal-Wallis test to the inter-quartile ranges of scores in each of Years 3-6 of the course, from before dispersal to after dispersal at the 4 clinical sites. This period also spanned the move from predominantly campus-based to predominantly workplace-based assessment. Ethics approval was granted by the James Cook University Ethics Committee.
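For readers who wish to reproduce this style of analysis, the following is a minimal sketch of the two tests in Python using SciPy's stats module. The site labels and scores are invented for illustration only and are not the study's data.

```python
# Minimal sketch of the two analyses described above: one-way ANOVA
# of mean scores by site, and the Kruskal-Wallis rank-based analogue.
# All site labels and scores below are hypothetical.
from scipy import stats

# Hypothetical end-of-year scores for students at 4 clinical school
# sites (one list per site; unequal group sizes are allowed).
site_scores = {
    "Site A": [72.1, 68.4, 75.0, 70.2, 66.8],
    "Site B": [69.5, 71.3, 73.8, 67.9],
    "Site C": [70.0, 74.2, 68.1, 72.5, 69.9],
    "Site D": [71.7, 66.5, 73.0, 70.8],
}
groups = list(site_scores.values())

# One-way analysis of variance: do mean scores differ by site?
f_stat, p_anova = stats.f_oneway(*groups)

# Kruskal-Wallis test: do the rank distributions differ by site?
h_stat, p_kw = stats.kruskal(*groups)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3f}")
```

One-way ANOVA compares site means directly, whereas the Kruskal-Wallis test operates on ranks and is therefore insensitive to uniform score inflation; used together, they address both the absolute scores and the rank-order questions posed in this study.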

Table 1: Sample sizes for the 4 years at the 4 clinical school sites

Results

Tables 2 and 3, respectively, provide the mean test scores and test rankings for students completing Years 5 and 6 at the 4 clinical school sites in each of the 4 cohorts. There were no significant differences in the mean scores of students studying at each site (p values = 0.15-0.63). There were also no significant differences overall between inter-quartile rankings across years as students dispersed to the different clinical sites (p values = 0.27-0.78). There were, however, some small changes in the rank order of students within sites, with some slightly improving their relative position, particularly at the smaller sites, and others slightly worsening theirs, particularly at the larger sites; these changes had no effect on pass decisions at the end of the course. In general, the workplace-based assessment scores in the final year were higher than the examination-based scores in the penultimate year.
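To illustrate how such within-cohort rank shifts can be tracked, the sketch below ranks a small invented cohort in both years; the student identifiers and scores are hypothetical, not the study's data. Note that uniformly higher final-year scores produce only small rank movements, mirroring the pattern reported here.

```python
# Illustrative tracking of rank-order changes between years.
# Students and scores are hypothetical.
from scipy.stats import rankdata

students = ["S1", "S2", "S3", "S4", "S5"]
year5 = [71.0, 68.2, 74.5, 69.8, 72.3]  # hypothetical exam-based scores
year6 = [78.4, 77.0, 79.1, 78.9, 80.2]  # hypothetical workplace-based scores

# Rank 1 = highest score, so rank the negated scores.
rank5 = rankdata([-s for s in year5])
rank6 = rankdata([-s for s in year6])

for name, r5, r6 in zip(students, rank5, rank6):
    shift = int(r5 - r6)  # positive = improved relative position
    print(f"{name}: Year 5 rank {int(r5)}, Year 6 rank {int(r6)}, shift {shift:+d}")
```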

Table 2: Mean test scores for students in Years 5 and 6 at the 4 clinical school sites

Table 3: Median test rankings and inter-quartile ranges for students by test scores in Years 5 and 6 at the 4 clinical school sites

Discussion

These results demonstrate that there was no significant effect of clinical site location on mean examination scores and rank order of students at the 4 sites. This is reassuring for students, faculty and regulators because it indicates that the learning objectives required by the curriculum (as approved by the Australian Medical Council2) are achievable in each of the clinical school locations, even though they offer somewhat different clinical learning opportunities.

The slight changes in rank order after dispersal are not significant but invite speculation. Such differences may occur in any curriculum as students approach graduation and are assessed against endpoint learning objectives. However, in this medical school the final-year assessment is more workplace-based, raising the possibility that different attributes are being assessed. Information bias cannot be excluded because different sites may have assessed students differently, and therefore Year 6 results must be interpreted with caution. Associated research has shown that the focus of student learning differs between the 2 years (Sen Gupta TK, Hays RB, Kelly G, Jacobs H; unpubl. data; 2010): students in the penultimate year focus on learning to pass examinations, whereas those in the final year focus on learning to be junior doctors and preparing for longer-term career objectives. Hence, students whose scores improve in the final year may be better able to make the transition from student to workplace learning and assessment. It is interesting that the smaller centres were associated with improvements in scores and rankings. These sites may offer a better workplace experience due to lower staff:patient ratios and a more general clinical case mix7, and so may be more appropriate for workplace immersion models8. However, the possibility of less robust supervisory structures in the smaller centres may mean that weaker students receive less support. Until the differences in performance at the different sites are explored further, it may be prudent to retain weaker students at the larger, more central sites, closer to more formal educational support.

The higher mean scores derived from workplace assessment are also worthy of comment. It is possible that workplace-based assessment, conducted by clinicians who may form stronger relationships with students during longer workplace immersion placements, simply inflates scores artificially. However, because the mean score rises at all 4 sites, there is little direct effect on the rank order of students. The effect on pass/fail decisions is more difficult to measure, because the number of students failing the final year should be low. To date only one student has failed and repeated the final year, and with the relatively small numbers of students it is not possible to know whether this reflects a 'normal' failure rate, a lenient system, or a system that allows only students who are ready for the transition to progress to the final year. The subsequent performance and career choices of graduates, and possible correlations with these student performance data, are currently being investigated in a longitudinal cohort study.

Limitations

This study involved relatively small numbers of students in only 4 graduating cohorts from one dispersed rural medical school. The effect of selection bias cannot be discounted, because most students were able to choose the clinical school site they attended.

Conclusion

The choice of clinical school site for the final 2 years of an undergraduate rural medical school had no effect on mean assessment scores and only a minor effect on the rank order of student scores. It may be that workplace-immersed placements suit some students better than others, but at this school they appear to provide a valuable transition experience between undergraduate and postgraduate learning.

References

1. Tesson G, Strasser R, Pong RW, Curran V. Advances in rural medical education in 3 countries: Canada, the United States and Australia. Rural and Remote Health 2005; 5: 397. (Online). Available: www.rrh.org.au (Accessed 1 April 2010).

2. Australian Medical Council. Assessment and Accreditation of Medical Schools: Standards and Procedures, 2009; Part 3. (Online). Available: http://www.amc.org.au/images/Medschool/procedures%20medical%20schools%202009.pdf (Accessed 1 April 2010).

3. Worley P, Esterman A, Prideaux D. Cohort study of examination performance of undergraduate medical students learning in community settings. BMJ 2004; 328: 207-209.

4. Bianchi F, Stobbe K, Eva K. Comparing academic performance of medical students in distributed learning sites: the McMaster experience. Medical Teacher 2008; 30: 67-71.

5. Hays RB. Rural Initiatives at the JCU School of Medicine. Australian Journal of Rural Health 2001; 9: S2-S5.

6. Colquhoun C, Hafeez MR, Heath K, Hays RB. Aligning clinical resources to curriculum needs: the utility of a group of teaching hospitals. Medical Teacher 2009; 31: 1084-1088.

7. Patrick CJ, Peach D, Pocknee C, Webb F, Fletcher M, Pretto G. The WIL (Work Integrated Learning) Report: a national scoping study (Australian Learning and Teaching Council final report). Brisbane, QLD: Queensland University of Technology, 2008. Available: www.altc.edu.au (Accessed 1 April 2010).

8. Dornan T, Boshuizen H, King N, Scherpbier A. Experience-based learning: a model linking the processes and outcomes of medical students' workplace learning. Medical Education 2007; 41: 84-91.
