The influence of a continuing education program on the image interpretation accuracy of rural radiographers

Introduction: In regional, rural and remote clinical practice, radiographers work closely with medical members of the acute care team in the interpretation of radiographic images, particularly when no radiologist is available. However, the misreading of radiographs by non-radiologist physicians has been shown to be the most common type of clinical error in the emergency department. Further, in Australia few rural radiographers are specifically trained to interpret and report on images. This study aimed to evaluate the accuracy of a group of rural radiographers in interpreting musculoskeletal plain radiographs, and to assess the effectiveness of continuing education (CE) in improving their accuracy within a short time frame. Methods: Following ethics approval, 16 rural radiographers were recruited to the study. At inception, a purpose-designed 'test-object' of 25 cases compiled by a radiologist was used to assess image interpretation accuracy. The cases were categorised into three grades of complexity. The radiographers entered their answers on a structured radiographer opinion form (ROF) that had three levels of response - 'general opinion', 'observations' and 'open comment'. Subsequent to baseline testing, the radiographers participated in a CE program aimed at improving their image interpretation skills. After a 4 month period they were re-tested using the same methodology. The ROFs were scored by the radiologist and the pooled results analysed for statistically significant changes at all ROF levels and grades of complexity.


Introduction
In rural and remote health services, where there is often no radiologist in attendance, radiographers work closely with non-radiologist medical practitioners in the interpretation of radiographs.
However, non-radiologist physician interpretation of radiographs has been reported as the leading cause of diagnostic error in the accident and emergency department 1, although it is reasonable to predict that this is reduced by physician-radiographer consultation 2. This is evident in studies dating back to the 1980s. De Lacey et al 3 found that 2.5% of medically significant findings were missed by 'casualty' (emergency department) medical officers in a study of 531 patients, while in a later study Berman et al 4 found that radiographers correctly identified 28 abnormalities that were missed by casualty doctors among 1496 patients. In a more recent study, Guly 1 found that 77.8% of diagnostic errors in the emergency department were due to misreading of radiographs, often by relatively junior medical officers, and that little had changed compared with a study in the same department 20 years earlier.
Radiographic examinations offer the greatest benefit when a radiologist's report is immediately available 5. However, delays of 1 to 3 days are commonplace in both rural and metropolitan public hospitals in Australia, and much longer delays have been reported 6,7. Although delayed reporting is less satisfactory than immediate reporting, it still increases the detection of clinically significant abnormalities and provides clarification in cases where the referring doctor is unsure of the diagnosis.
An alternative practice model that has been extensively implemented and evaluated in the UK is the training of radiographers in frontline radiological image interpretation and reporting 8. A meta-analysis of UK studies found that, compared with a reference standard, radiographers' overall sensitivity and specificity were 92.6% and 97.7%, respectively 9. After radiographers received specific training in image interpretation, there was no statistically significant difference in their accuracy compared with radiologists.
In the absence of a radiologist, as is often the case in rural hospitals, the healthcare outcomes for patients may be improved by the introduction of a system of frontline radiological reporting by radiographers. Rural radiographers are often put in the position where their opinion is actively sought and valued by referring doctors, particularly in the emergency care setting.

Methods
The cases included plain radiographic examinations of the appendicular and axial skeleton. Although the radiographers were not told so, all of the examinations demonstrated abnormalities, whether traumatic, non-acute or both. No clinical history was given. The radiographers each viewed the images separately, in isolation and under examination conditions. Most completed their interpretation of the cases to their satisfaction in less than 1.5 hours.
The cases were graded according to the degree of complexity, as follows:
• grade 1: a new medical graduate would be expected to interpret the case correctly (3 cases)
• grade 2: most radiology fellowship candidates would correctly interpret the case at the time of undergoing their final examination (17 cases)
• grade 3: all specialist musculoskeletal radiologists and experienced general radiologists would correctly interpret the case (5 cases).
To attain an accuracy of 100%, a radiographer had to correctly identify and describe most, but not necessarily all, of the abnormal radiological signs in all 25 cases. A target of 85% accuracy, compared with the radiologist's interpretation, was set.
Participants were directed to enter their interpretation on a radiographer opinion form (ROF) 16, which had three levels of response:
• level 1: 'general opinion' - whether or not there was any abnormality

Results
The results are shown in Table 1. Overall, for the combined total of 400 cases, the radiographers' level of accuracy did not reach the 85% target, either before or after the CE intervention. At the 'general opinion' and 'observations' levels there was, however, a statistically significant improvement in the radiographers' accuracy between the pre- and post-intervention testing. At the 'open comment' level there was a slight increase in the proportion of cases interpreted correctly for all grades of complexity, although the improvement was not statistically significant.
For the grade 1 cases, of which there were only 3, the 16 radiographers agreed with the radiologist on more than 90% of the interpretations at all ROF levels. No statistically significant improvement was found between the pre- and post-intervention tests for grade 1 cases at any of the ROF levels.
For the grade 2 and 3 cases, the radiographers' accuracy showed a statistically significant improvement after the intervention at both the 'general opinion' and 'observations' levels. With the exception of the small sample of grade 1 cases, the proportion of cases correctly interpreted by the radiographers decreased as the ROF level, and therefore the amount of detail required, increased. In other words, both before and after taking part in the CE program, the radiographers' image interpretation accuracy declined as a more detailed description of their observations was demanded. There was also no statistically significant improvement at the 'open comment' level overall, or for any of the grades of complexity.

Discussion
It is apparent that, while there was no change in image interpretation accuracy for the small number of grade 1 cases, for the more numerous and more complex grade 2 and grade 3 cases there was a significant improvement at the 'general opinion' and 'observations' ROF levels. This suggests that it is possible to advance the knowledge and skills of radiographers in image interpretation using short bursts of CE. The other studies mentioned here that involved pre- and post-intervention testing used a similar methodology.
The sample of radiographers was small and limited to a particular region, which reduces the generalisability of the findings to the broader population. Further, no breakdown of the radiographers' years of experience or other variable characteristics is given. It was decided at the outset that these independent variables were not relevant, because all of the radiographers involved in the study were accredited practitioners and regularly worked after-hours, on-call duty or as sole practitioners, or both.
Finally, some limitations in the statistical methods were imposed on the study out of necessity. The test-object contained no negative cases, which precluded the chance of true negative interpretations by the radiographers. Thus, it was not possible to construct a contingency table, calculate sensitivity and specificity, or plot receiver operating characteristic (ROC) curves.
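This constraint can be illustrated with a minimal sketch (the counts below are hypothetical, not the study's data): with an all-abnormal test-object, the 'negative' column of a 2x2 contingency table is empty, so specificity is undefined even where sensitivity could be expressed.

```python
# Hypothetical sketch: why a test-object with no negative cases precludes
# specificity and a full 2x2 contingency table. Counts are illustrative only.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int):
    """True negative rate: TN / (TN + FP); undefined when no negatives exist."""
    if tn + fp == 0:
        return None  # no negative cases presented, so no TN or FP can occur
    return tn / (tn + fp)

# With every case abnormal, only the positive column of the table is populated:
sens = sensitivity(tp=18, fn=7)   # 18 of 25 abnormalities detected (illustrative)
spec = specificity(tn=0, fp=0)    # undefined: no abnormality-free cases shown
```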
One particularly interesting finding of this study is the decrease in the level of accuracy as the radiographers were required to provide a more precise description of their interpretation in the 'open comment' section of the ROF.
This suggests that the radiographers had difficulty converting their observations into words, with the result that, in some cases, the validity of their answers at the other two ROF levels came into question. This points strongly to a need to further 'up-skill' radiographers in the vocabulary of radiology so that they can better communicate their observations to doctors and other members of the acute care team, as well as to radiologists. This needs to be addressed in both undergraduate and postgraduate education and training of radiographers.

Conclusions
In spite of the limitations of this study, it is reasonable to conclude that short-term, intensive CE programs can have a positive effect on the ability of radiographers to accurately interpret plain musculoskeletal radiographic examinations.
Extrapolating this finding, providing such programs could be beneficial in reducing the risk of misdiagnosis in the emergency department and other acute care settings, especially if no radiologist is available. It may also be argued that encouraging greater collaboration between radiographers and other members of the healthcare team will have a positive effect on teamwork and on patient outcomes.
It has been extensively argued during recent years that extending the role of some health professionals, creating new roles and working more collaboratively will be necessary strategies to meet future growth in demand for health services with declining workforce participation rates.
However, if such innovations are to be effective in maintaining or improving service quality and safety, they must be reinforced by CE, as well as close monitoring and further research. Initiatives such as the one described in this article may inform future, larger scale development of CE and research in this field.
• level 2: 'observations' - indicating the nature of the abnormality(ies) from a list of possibilities
• level 3: 'open comment' - a brief, concise written description of the abnormal appearance(s).
The first two levels required the radiographers to simply tick the correct box(es), while the third level required a more detailed explanation of the radiographers' responses at the other two levels.
Subsequent to baseline testing, the radiographers participated in a CE program over a 4 month period in 2007 aimed at improving their ability to correctly interpret musculoskeletal plain radiography examinations. Because the participating radiographers were distributed across a wide geographical area, the program used flexible, IT-based modes of delivery that consisted of:
• self-guided Microsoft PowerPoint presentations emailed to each participant approximately every 2 weeks; presentations included directed-learning material and self-test case studies with model answers (Figure 1)
• weekly one-hour tutorials or discussion groups, facilitated by a radiographer academic (author 1), which were videoconferenced to all 7 sites in the region where the radiographers were located
• recommended readings, emailed as PDF files
• URLs of relevant, high-quality internet sites.
After the CE intervention had been completed, the radiographers' image interpretation accuracy was reassessed using the same test-object. Both the pre- and post-intervention ROF answer sheets were examined and scored by the radiologist who had assembled the cases. While for the first two ROF levels of response the scoring was dichotomous (agree = 1, disagree = 0), the open-ended responses were scored as:
• A = strong to perfect correlation between the radiographer's and radiologist's opinion
• B = no clinically significant differences in opinion
• C = clinically significant false positive on the part of the radiographer
• D = clinically significant false negative on the part of the radiographer.
For the purpose of statistical analysis of the responses at this higher level of radiographer opinion, scores of both A and B were considered agreement (1) and scores of either C or D disagreement (0). Cases were pooled for all 16 radiographers, creating an overall total of 400 cases (grade 1 = 48; grade 2 = 272; grade 3 = 80). Results were entered into a Microsoft Excel spreadsheet. Two-sided, paired t-tests (α = 0.05) were used to test for statistically significant changes in accuracy for the group of radiographers as a whole at all three levels of ROF response, as well as all three grades of complexity.
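The grade-to-score mapping and paired analysis described above can be sketched as follows. This is a minimal illustration with made-up scores, not the study's data; the function and variable names are our own.

```python
import math
from statistics import mean, stdev

# Open-comment grades mapped to agreement scores, as described above:
# A/B -> agreement (1), C/D -> disagreement (0)
GRADE_SCORE = {"A": 1, "B": 1, "C": 0, "D": 0}

def paired_t(pre: list, post: list) -> float:
    """Paired t statistic on matched pre/post case scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    # t = mean of differences / standard error of differences
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Illustrative pre/post open-comment grades for five pooled cases
pre_scores = [GRADE_SCORE[g] for g in ["C", "A", "D", "C", "B"]]
post_scores = [GRADE_SCORE[g] for g in ["A", "A", "B", "C", "B"]]

t_stat = paired_t(pre_scores, post_scores)
# Compare t_stat against the two-sided critical value for n-1 degrees of
# freedom at alpha = 0.05 to judge statistical significance.
```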

Figure 1: An example of the Microsoft PowerPoint directed learning, self-test image interpretation quizzes (top) with model answers using the radiographer opinion form (ROF) format.
Similar improvements have been shown in other such small-scale studies. Using a test bank of 30 radiographs, the sensitivity of radiographers in fracture detection improved from 78.9% to 88.2% (p < 0.05) as a consequence of a 2 day face-to-face training program in orthopaedic radiology and skeletal trauma 15. Interestingly, 6 months after the course the radiographers' combined sensitivity had fallen to below the pre-course level, suggesting a need for ongoing education to maintain competency. Other similar studies, however, have shown that the gains achieved can be held after the completion of a short course in image interpretation. In the study described in this article, no follow-up assessments of the radiographers' image interpretation accuracy have been performed to date. In another previous study, the reporting accuracy of a radiographer who was undergoing formal postgraduate training in image interpretation increased progressively from 87.8% to 100%, compared with the supervising radiologist, over a 9 week period 11, illustrating that a sustained improvement is achievable with ongoing education. In addition to the need for more comprehensive, university-based education programs in image interpretation and reporting for radiographers, there is a need for short-term, intensive means of improving the image interpretation accuracy of non-radiologists in the acute care setting. It is evident that such programs have the potential to quickly improve the detection rate of radiological abnormalities, increasing the immediacy with which patients receive definitive treatment. The combined accuracy of radiographers and emergency physicians has been shown to closely approach that of radiologists 2. It may be argued, therefore, that the complementary role that exists between radiographers and non-radiologist physicians should be nurtured and developed in the context of a decline in the availability of radiologists, particularly in regional, rural and remote areas. As well as
radiographers, therefore, intensive training in image interpretation may also target emergency department junior medical staff, GPs and critical care nurse practitioners. Short-term education programs such as the one described in this article may also have relevance in large metropolitan hospitals where there is a general shortage of radiologists performing 'hot' reporting. There is great potential to develop online interprofessional education in image interpretation for non-radiologists who are required to interpret radiographic images as part of their healthcare role. This may in turn improve the quality of service in the acute care setting. By delivering courses online, the material can be made widely accessible and effectively managed using existing online delivery technology. There is a danger, however, of perceiving such initiatives simply as a threat to traditional professional roles. Unfortunately, these perceptions impacted on this study, with the radiologist involved eventually having to relinquish his role because of his colleagues' negative attitudes to educating radiographers in image interpretation and reporting. Such attitudes are unproductive at a time when changes in the way that health care is delivered in Australia appear to be timely and imminent. There is a need for collaboration across interprofessional boundaries to ensure that quality and safety are maintained and improved as changes are implemented. The investigators concede that there are several limitations to this study, which were generally related to funding and time constraints, as well as to the difficulty in engaging radiologists in this type of research. Only one radiologist's opinion was used as the gold standard. However, that radiologist had meticulously compiled the 25 cases used as the test-object, together with model answers. Furthermore, it is not common practice for more than one radiologist to report on musculoskeletal plain radiographs. The methodology may also be criticised because the same
25 cases were used for both pre- and post-intervention testing, which may have biased the results. This concern is offset by the fact that 4 months elapsed between tests, during which time the radiographers would have seen a large number of other cases. The influence of any such bias is likely to be marginal compared with the effect of the intervention.

Table 1: Results for the combined 25 cases and 16 radiographers ('overall'), as well as the breakdown by grade of complexity at each radiographer opinion form level
ROF, Radiographer opinion form.