
Evaluating antimicrobial therapy: How reliable are remote assessors?

  • Menino Osbert Cotta (corresponding author)
    Victorian Infectious Disease Service, Royal Melbourne Hospital at the Peter Doherty Institute for Infection and Immunity, Melbourne, Australia
    Department of Medicine, University of Melbourne, Melbourne, Australia
    Correspondence: Department of Medicine, Royal Melbourne Hospital, University of Melbourne, Grattan St, Parkville, Victoria 3052, Australia. Tel.: +61 422 356 468; fax: +61 3 8344 1222.
  • Tim Spelman
    Victorian Infectious Disease Service, Royal Melbourne Hospital at the Peter Doherty Institute for Infection and Immunity, Melbourne, Australia
  • Caroline Chen
    Victorian Infectious Disease Service, Royal Melbourne Hospital at the Peter Doherty Institute for Infection and Immunity, Melbourne, Australia
  • Rodney S. James
    Victorian Infectious Disease Service, Royal Melbourne Hospital at the Peter Doherty Institute for Infection and Immunity, Melbourne, Australia
  • Danny Liew
    Department of Medicine, University of Melbourne, Melbourne, Australia
  • Karin A. Thursky
    Victorian Infectious Disease Service, Royal Melbourne Hospital at the Peter Doherty Institute for Infection and Immunity, Melbourne, Australia
  • Kirsty L. Buising
    Victorian Infectious Disease Service, Royal Melbourne Hospital at the Peter Doherty Institute for Infection and Immunity, Melbourne, Australia
  • Caroline Marshall
    Victorian Infectious Disease Service, Royal Melbourne Hospital at the Peter Doherty Institute for Infection and Immunity, Melbourne, Australia
    Department of Medicine, University of Melbourne, Melbourne, Australia
Published: February 22, 2016. DOI: https://doi.org/10.1016/j.idh.2016.01.002

      Highlights

      • Many hospitals lack the capacity to assess antimicrobial appropriateness by local assessors with infectious diseases training.
      • It is unclear whether assessments of antimicrobial prescriptions are consistent among remote assessors.
      • Findings can be used to determine the types of remote assessors that can reliably assess antimicrobial therapy.

      Abstract

      Introduction

      Assessing the quality of antimicrobial prescribing provides hospitals with a means of targeting and measuring the impact of antimicrobial stewardship interventions. There are limited data available on the reliability of these assessments among different types of hospital assessors deployed away from the bedside (ie remotely). Importantly, it is unclear if assessors inexperienced in clinical infectious diseases can reliably evaluate the quality of antimicrobial prescriptions. This study sought to determine the reliability of assessments made by remote hospital assessors with different levels of clinical infectious diseases experience. These assessments were based on (1) concordance with national prescribing guidelines and (2) ‘overall appropriateness’.

      Methods

      A total of 180 prescriptions were assessed for ‘concordance with guidelines’ and ‘overall appropriateness’ at the bedside (ie locally). Prescription data were then given to fifteen remote assessors, who were blinded to the local assessments. Inter-rater reliability was calculated using Fleiss' kappa statistics.

      Results

      Higher levels of agreement were achieved for ‘concordance with guidelines’ assessments. Local and remote antimicrobial management teams had the highest level of agreement, and this improved further when antimicrobial therapy used to treat respiratory tract infections was examined (kappa score = 0.67). Reliability was moderate for local clinical pharmacist assessments and fair to slight for local infection control assessments.

      Conclusions

      There is scope to develop tools that will improve the reliability of remote assessments of antimicrobial therapy. Clinical pharmacists provide reliability comparable to that of infectious diseases experts; however, infection control practitioners may require further education and training to improve the reliability of their assessments.


      Introduction

      Hospital antimicrobial use has been implicated in accelerating the development of antimicrobial resistance worldwide [1]. Many institutions have targeted improved antimicrobial prescribing by reducing unnecessary use and rationalising therapy through a variety of antimicrobial stewardship (AMS) interventions [2].
      Consumption of antimicrobials has been used as an outcome indicator for AMS; however, crude volume-based usage data are often inaccurate measures of antimicrobial use. Assessing the quality of antimicrobial prescribing represents a more descriptive method. Periodic auditing of antimicrobial prescriptions can inform AMS program coordinators of prescribing practices that require targeting via a continuous quality improvement process [3].
      Evaluation of antimicrobial therapy can be made solely on the basis of pathogen-antimicrobial susceptibility [4,5]; however, other aspects of the antimicrobial prescription, such as spectrum of activity, dose and duration, also need to be considered. Concordance with endorsed prescribing or treatment guidelines provides an alternative method to audit the quality of antimicrobial therapy [6]. This approach has limitations in that guidelines may not be available for all indications, or may be insufficient once patient-specific factors, such as drug allergies or the risk of drug toxicity, are taken into account [7].
      As a result, the opinion of health professionals trained in clinical infectious diseases (ID) may be used to determine the ‘overall appropriateness’ of therapy [6,8,9]. Multi-disciplinary teams consisting of an ID physician or clinical microbiologist and a specialist ID pharmacist (termed antimicrobial management teams [AMTs]) have previously been shown to be effective in reducing antimicrobial use through post-prescription assessment with direct intervention and feedback [10,11]. Additionally, national consensus statements have recommended the use of AMTs as part of hospital-wide AMS programs [12,13].
      However, a recent Australia-wide antimicrobial prescribing survey noted that many hospitals lack the capacity, whether due to geographical location or funding constraints, to have local (ie onsite) expert assessors such as AMTs, ID physicians or clinical microbiologists [14]. Approximately a quarter of the 151 participating hospitals sought assistance from assessors who only had access to the data collection form, and who therefore assessed antimicrobial prescriptions away from the bedside (ie remotely or ‘offsite’). Additionally, of the 334 assessors who participated, a third (33%) were non-specialist ID pharmacists and a further 27% were infection control practitioners (ICPs) from a nursing background.
      Given that many assessors of antimicrobial prescriptions may not have formal training in clinical ID and/or may perform their assessments remotely, there is a need to ascertain whether quality assessments of antimicrobial prescriptions are consistent across this heterogeneous group of remote assessors.
      Therefore, the aim of this study was to determine the level of inter-rater reliability between assessments made by local and remote health professionals with different levels of clinical ID experience for (1) concordance with national antimicrobial prescribing guidelines and (2) ‘overall appropriateness’ using a newly developed appropriateness assessment tool. It was hypothesised that there would be greater inter-rater reliability among local and remote ID experts compared to those not trained in ID (termed ‘non-ID experts’).

      Methods

      Antimicrobial prescriptions included in the study were sampled from prescription data collected as part of the 2013 Australia-wide antimicrobial prescribing survey conducted through the National Antimicrobial Prescribing Survey (NAPS) initiative [14]. Data for each prescription were collected using a standard data collection form (available upon request from the authors – see Appendix A). The types of assessors, both local and remote, were chosen on the basis of results published in the recent nation-wide prescribing survey [14]. As part of the NAPS initiative and associated research, prior ethics approval was obtained from Melbourne Health and all participating hospitals.

      Local assessment of antimicrobial prescriptions

      Local assessors made assessments at the bedside and had access to the medication chart, patient admission and progress notes, surgical notes (where applicable), pathology and microbiology results. These assessors collected prescription data using the standard data collection form and made assessments independently. Local assessors constituted one of the following:
      Local ID experts.
      • An AMT, consisting of either an ID physician or clinical microbiologist together with a specialist ID pharmacist
      Local non-ID experts.
      • A non-specialist ID pharmacist assessor (no ID experience/training; termed ‘clinical pharmacist’)
      • An ICP assessor (nursing background with no ID experience/training)
      Local assessors then transferred all data from the data collection form onto an online database, ensuring that all codes were consistent and that all annotated free text was included. These online prescription data were then sent to remote assessors, who were blinded to the local assessments.

      Remote assessment of antimicrobial prescriptions

      Fifteen remote assessors were purposefully selected by the investigators through professional contacts (approached either by phone or email) and included the following:
      Remote ID experts.
      • Three AMT assessor teams (consisting of either an ID physician or clinical microbiologist and a specialist ID pharmacist)
      • Three ID physicians or clinical microbiologists (termed ‘ID specialist’)
      • Three specialist ID pharmacists (>3 years' experience working in ID) (termed ‘ID pharmacist’)
      Remote non-ID experts.
      • Three clinical pharmacists with no experience in ID
      • Three experienced ICPs (>5 years' experience)
      Remote assessors could not work in any of the hospitals from where antimicrobial prescriptions were sampled.

      Assessment of ‘concordance with guidelines’ and ‘overall appropriateness’

      For each antimicrobial prescription, assessors were asked to independently assess:
      • i) Concordance with the national prescribing guidelines of Australia (Therapeutic Guidelines: Antibiotic [15])
      • ii) ‘Overall appropriateness’ (taking into account that guidelines may not cover all situations and/or additional patient-specific factors)
      A newly developed NAPS appropriateness assessment tool (available upon request from the authors – see Appendix B) helped guide assessments of ‘overall appropriateness’. This tool was developed via an iterative process involving a panel of ID physicians, clinical microbiologists and specialist ID pharmacists, and took into account excessive or overlapping spectrum of activity, severity of patient allergies and risk of toxicity. All local and remote assessors received training via a teleconference and an online slide presentation describing ten examples of how the tool could be used to guide appropriateness assessments.

      Sample antimicrobial prescriptions

      A sample of 180 prescriptions was deemed large enough to determine ‘fair’ or greater inter-rater reliability (see ‘statistical analyses’ for further detail). These 180 prescriptions were distributed in three batches (Fig. 1).
      Fig. 1. Distribution of sample antimicrobial prescriptions among remote assessors.
      Antimicrobial prescriptions for the following indications were included in each batch of 60 prescriptions (these counts are cross-checked in the short sketch after the list):
      • Community acquired pneumonia (9 prescriptions)
      • Chronic obstructive pulmonary disease (3 prescriptions)
      • Hospital acquired pneumonia (6 prescriptions)
      • Skin and soft tissue infections (9 prescriptions)
      • Urinary tract infections (9 prescriptions)
      • Intra-abdominal infections (9 prescriptions)
      • Surgical antibiotic prophylaxis (9 prescriptions)
      • Miscellaneous prescriptions (6 prescriptions)
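      As a quick arithmetic cross-check of the sampling frame, the short sketch below (Python; the dictionary is simply a transcription of the list above) confirms that the indication counts sum to 60 prescriptions per batch and 180 prescriptions across the three batches.

# Indication counts as listed above; the assertion confirms the batch and
# overall totals reported in the text (60 per batch, 180 across 3 batches).
batch_composition = {
    "Community acquired pneumonia": 9,
    "Chronic obstructive pulmonary disease": 3,
    "Hospital acquired pneumonia": 6,
    "Skin and soft tissue infections": 9,
    "Urinary tract infections": 9,
    "Intra-abdominal infections": 9,
    "Surgical antibiotic prophylaxis": 9,
    "Miscellaneous prescriptions": 6,
}

per_batch = sum(batch_composition.values())
assert per_batch == 60 and 3 * per_batch == 180
print(f"{per_batch} prescriptions per batch; {3 * per_batch} prescriptions overall")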

      Statistical analyses

      Categorical variables for both ‘concordance with guidelines’ and ‘overall appropriateness’ were regrouped into broader categories (1 – ‘concordant’ or ‘appropriate’, 2 – ‘non-concordant’ or ‘inappropriate’, 3 – ‘not assessable’) to increase sample numbers and rationalise variables into meaningful groups. These were then summarised using frequency and percentage.
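      To illustrate the regrouping and summary step described above, the following sketch (Python) collapses raw ratings into the three broader categories and tabulates frequency and percentage. The raw response labels shown are assumptions for illustration; the study's full set of response options is not reproduced in this section.

from collections import Counter

# Hypothetical raw response labels mapped onto the three broader categories
# described in the text (1 - concordant/appropriate, 2 - non-concordant/
# inappropriate, 3 - not assessable).
RECODE = {
    "concordant": 1, "appropriate": 1,
    "non-concordant": 2, "inappropriate": 2,
    "not assessable": 3,
}

def summarise(raw_ratings):
    """Collapse raw ratings into the broader categories and report the
    frequency and percentage of each category."""
    grouped = [RECODE[r.strip().lower()] for r in raw_ratings]
    counts = Counter(grouped)
    n = len(grouped)
    return {cat: (freq, round(100 * freq / n, 1)) for cat, freq in sorted(counts.items())}

print(summarise(["Appropriate", "Inappropriate", "Not assessable", "Appropriate"]))
# {1: (2, 50.0), 2: (1, 25.0), 3: (1, 25.0)}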
      Continuous variables were assessed for significant departures from normality using a Shapiro–Wilk test of skew (or equivalent) and summarised using mean and standard deviation (SD) or median and inter-quartile range (IQR) as appropriate. Inter-rater agreement for the categorical concordance and appropriateness assessments was assessed using Fleiss' kappa. Factors contributing to inter-rater agreement were analysed using predictive modelling.
      Fleiss' kappa scores were interpreted as follows: 0.01–0.2 as slight agreement, 0.21–0.4 as fair agreement, 0.41–0.6 as moderate agreement, 0.61–0.8 as substantial agreement, and 0.81–1.0 as almost perfect agreement [16]. P-values ≤ 0.01 were considered statistically significant, adjusting for multiple comparisons. All analyses were undertaken using Stata version 13 (StataCorp, College Station, Texas).
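      For readers wishing to reproduce this style of analysis outside Stata, the sketch below gives a minimal, self-contained implementation of Fleiss' kappa and the Landis and Koch interpretation bands in Python with NumPy. The example ratings matrix is invented for illustration and is not study data.

import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a (subjects x categories) table of rating counts,
    where counts[i, j] is the number of raters assigning subject i to
    category j. Assumes every subject was rated by the same number of raters."""
    counts = np.asarray(counts, dtype=float)
    n_subjects = counts.shape[0]
    n_raters = counts[0].sum()
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)   # marginal category proportions
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()        # observed vs chance agreement
    return (P_bar - P_e) / (1 - P_e)

def landis_koch(kappa):
    """Landis & Koch bands as quoted in the text; values below 0.01 fall
    outside the quoted bands and are labelled 'poor' here."""
    if kappa < 0.01:
        return "poor"
    for upper, label in [(0.2, "slight"), (0.4, "fair"), (0.6, "moderate"),
                         (0.8, "substantial"), (1.0, "almost perfect")]:
        if kappa <= upper:
            return label

# Illustrative only: 5 prescriptions rated by 3 assessors into the three
# regrouped categories (concordant/appropriate, non-concordant/inappropriate,
# not assessable). These numbers are not study data.
ratings = np.array([[3, 0, 0],
                    [2, 1, 0],
                    [0, 3, 0],
                    [1, 1, 1],
                    [0, 0, 3]])
k = fleiss_kappa(ratings)
print(f"kappa = {k:.2f} ({landis_koch(k)} agreement)")  # kappa = 0.49 (moderate agreement)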

      Ethics approval

      The study had current ethics approval at the research institute where the investigators were based. Participating hospitals had previously agreed that de-identified data entered by them into the NAPS database could potentially be utilised for research activities, and so ethics approval at each individual hospital was not sought. Participation was voluntary and no remuneration was given to any of the assessors.

      Results

      Prescription data were sampled from 34 hospitals around Australia. Table 1 shows the characteristics of these hospitals based on their Australian Institute of Health and Welfare classifications [17].
      Table 1. Characteristics of sampled hospitals (n = 34).

        Characteristic                                  n (%)
        Hospital location
          Metropolitan                                  20 (58.8)
          Regional                                      13 (38.2)
          Rural                                         1 (3)
        Classification
          Principal referral or specialist hospitals    18 (52.9)
          Large hospitals                               8 (23.5)
          Medium hospitals                              4 (11.8)
          Small hospitals                               4 (11.8)
        Type
          Public                                        32 (94.1)
          Private                                       2 (5.9)

      Inter-rater reliability for ‘concordance with guidelines’ – aggregate data (Table 2)

      None of the inter-rater reliability scores achieved almost perfect agreement (kappa scores ≥ 0.81) or substantial agreement (kappa scores between 0.61 and 0.8); however, moderate agreement (kappa scores between 0.41 and 0.6) was achieved in three of the five remote assessor groups for both local AMT and local clinical pharmacist assessments. The highest agreement occurred between local and remote AMT assessments (kappa score = 0.53).
      Table 2. Concordance with guidelines [15]: kappa scores for local and remote assessments, categorised by type of assessor.

                                        Remote assessments
                              ID experts                                 Non-ID experts
        Local assessments     AMT      ID specialist   ID pharmacist     Clinical pharmacist   ICP
        Aggregate             0.45*    0.24            0.35              0.30                  0.22
        ID experts
          AMT                 0.53*    0.23            0.44*             0.41*                 0.21 NS
        Non-ID experts
          Clinical pharmacist 0.46*    0.41*           0.45*             0.37                  0.20 NS
          ICP                 0.28     0.12 NS         0.18              0.15 NS               0.30

      Kappa scores significant at p ≤ 0.01 except where marked NS (p > 0.01). Moderate or higher agreement denoted by *.
      Local ICP assessments achieved only fair agreement (kappa scores between 0.21 and 0.4) or slight agreement (kappa scores between 0.01 and 0.2) across the five remote assessor categories, whilst remote ICPs had the lowest kappa scores of all five remote assessor groups for both local AMT and local clinical pharmacist assessments.

      Inter-rater reliability for ‘overall appropriateness’ – aggregate data (Table 3)

      Compared to the level of agreement achieved for ‘concordance with guidelines’, agreement on ‘overall appropriateness’ was much lower. Kappa scores reflected either fair or slight agreement, with none achieving moderate agreement or higher. Nine of the twenty kappa scores did not achieve statistical significance. The highest level of agreement was seen between local clinical pharmacist and remote AMT assessments (kappa score = 0.33).
      Table 3. Overall appropriateness: kappa scores for local and remote assessments, categorised by type of assessor.

                                        Remote assessments
                              ID experts                                 Non-ID experts
        Local assessments     AMT      ID specialist   ID pharmacist     Clinical pharmacist   ICP
        Aggregate             0.23     0.23            0.17              0.12                  0.15
        ID experts
          AMT                 0.23     0.26            0.30              0.08 NS               0.15 NS
        Non-ID experts
          Clinical pharmacist 0.33     0.26            0.22 NS           0.19 NS               0.05 NS
          ICP                 0.14     0.18 NS         0.01 NS           0.09 NS               0.18 NS

      Kappa scores significant at p ≤ 0.01 except where marked NS (p > 0.01).
      Due to the observed lower levels of agreement, no further analysis was performed for ‘overall appropriateness’ assessments. Local ICP assessments were also excluded from any subsequent analysis for the same reason.
      Further analysis was performed on the remote AMT ‘concordance with guidelines’ data, as this group had the highest level of agreement.

      Inter-rater reliability for ‘concordance with guidelines’ – by indication for antimicrobial therapy (Table 4)

      Agreement between local and remote AMT assessments was substantial for respiratory tract infections and moderate for skin and soft tissue infections and surgical antibiotic prophylaxis.
      Table 4. Concordance with guidelines [15]: kappa scores for local and remote AMT assessments, categorised by indication for antimicrobial therapy.

        Indication for antimicrobial therapy    Agreement between local and remote AMT assessments
        Intra-abdominal infections              0.25 NS
        Respiratory tract infections            0.67*
        Surgical antibiotic prophylaxis         0.45*
        Skin and soft tissue infections         0.59*
        Urinary tract infections                0.31 NS
        Miscellaneous                           0.26 NS

      Note: Respiratory tract infections = community acquired pneumonia, chronic obstructive pulmonary disease and hospital acquired pneumonia. Kappa scores significant at p ≤ 0.01 except where marked NS (p > 0.01). Moderate or higher agreement denoted by *.

      Discussion

      Results of our study indicate that, at best, moderate reliability is achieved in the assessment of antimicrobial prescriptions. Previous inter-rater reliability studies assessing the appropriateness of pharmacotherapy have found more favourable results. A 2008 study validating a new screening tool of older persons' prescriptions, incorporating criteria for potentially inappropriate drugs, found substantial agreement (kappa score = 0.75) across 100 data set evaluations [18]. Likewise, Hanlon and colleagues found almost perfect agreement (kappa score = 0.83) between a clinical pharmacist and an internist-geriatrician when assessing the appropriateness of chronic medications taken by ten ambulatory, elderly male patients [19].
      Our results, however, are more consistent with a previous study of inter-rater reliability for antimicrobial prescriptions conducted in 2005 [20]. In that investigation, Mol and colleagues found fair to moderate agreement among six remote assessors (two hospital pharmacists, two internists and two clinical microbiologists) who were asked to assess the adherence of antimicrobial prescriptions to local hospital guidelines.
      Interestingly, those investigators noted a comparatively lower level of agreement (kappa score = 0.36) between the two participating clinical microbiologists, which they attributed to one of the microbiologists not following the assessment instructions. A similar issue may explain the comparatively low kappa (kappa score = 0.23) observed in our study between local AMT and remote ID specialist assessments (both considered ID experts); however, we were unable to verify this.
      In light of consensus statements endorsing AMTs as best practice in the assessment of antimicrobial prescriptions, inter-rater reliability with local AMT assessments was considered central to determining which remote assessors were most closely aligned with this “gold standard”. Perhaps unsurprisingly, remote AMTs had the highest level of agreement, supporting the concept that multi-disciplinary teams of ID experts are best placed to assess antimicrobial prescription data remotely.
      Interestingly, the level of agreement for local clinical pharmacist assessments tended to be higher among remote ID experts (Table 2), suggesting that clinical pharmacists may be able to reliably collect and assess antimicrobial prescriptions. In contrast, local ICP assessments did not reflect this level of agreement. Given that clinical pharmacists and ICPs comprised 33% and 27% respectively of all assessors in the 2013 NAPS initiative [14], these findings have significant implications for future nation-wide survey activities. It may be plausible to offer two tiers of training for assessors who are not considered ID experts. One tier could be for professionals with experience in assessing prescription quality against endorsed prescribing guidelines, such as clinical pharmacists. A more intensive form of training could be adapted for personnel who lack prescription-auditing experience and have recently been tasked with evaluating antimicrobial therapy at their respective hospitals, as many ICPs in Australia have anecdotally reported.
      It is clear from the results that better reliability is achieved when assessing concordance with national prescribing guidelines rather than the broader concept of ‘overall appropriateness’. This may be because, in endeavouring to take into account additional factors such as excessive spectrum of activity and the risk of allergies and toxicity, the appropriateness tool added to the complexity of the assessment. A recent analysis by DePestel and colleagues supports this explanation, highlighting a divergence in levels of appropriateness when assessments made according to objective definitions, such as susceptibility data, are compared with more subjective assessments based on clinical judgement [21]. In contrast to the newly developed NAPS appropriateness tool, the national antimicrobial prescribing guidelines were first established in 1978, with their 15th version recently released in Australia. These national guidelines have been developed through an extensive iterative process involving multidisciplinary input, and even though they are limited in their ability to incorporate patient-specific factors, they still provide the most robust method for guiding antimicrobial prescribing in hospitals. Familiarity with these guidelines and their application may have contributed to the greater reliability observed among assessors.
      Given that current evidence suggests there is less consistency in the evaluation of antimicrobial therapy compared with other forms of pharmacotherapy, further work to improve reliability in assessing antimicrobial prescriptions should be made a priority. This is particularly pertinent if ‘appropriateness of antimicrobial therapy’ is to become a valid measure of the impact of AMS interventions. A potential way forward may be to design data collection and assessment tools that are specific to certain infections, as has been done by the Centers for Disease Control and Prevention [22]. The results of our study also point to this, as levels of reliability increased when antimicrobial therapy prescribed for specific indications, such as respiratory tract infections, was examined.
      This study has limitations. Firstly, there was an assumption that local assessors can reliably collect and transcribe clinical information online; this may not always be the case. A 2009 study comparing assessments of antimicrobial appropriateness made by the same reviewer for 24 computerised vignettes and the corresponding paper medical records revealed only fair intra-rater reliability (kappa score = 0.30), despite possible reviewer recall bias [23]. Secondly, there is an inherent assumption that assessments made at the bedside are consistently reliable. There are currently limited data to either confirm or refute this claim, and further investigation is required to ascertain whether inter-rater reliability is better among local assessors. Thirdly, the use of purposeful sampling of remote assessors could not eliminate selection bias. Finally, local prescription data were sampled from a variety of hospitals throughout Australia. While this is a strength of the study design, incorporating hospital-specific information, such as antibiograms, into data collection may have clarified how local assessments were made on the basis of locally derived data.
      In summary, given the current challenges of accelerating antimicrobial resistance and the need to optimise antimicrobial therapy, regular assessment of appropriateness is an important component for hospitals to consider as part of their AMS programs. The results of this study indicate that more work needs to be done to improve the reliability of these assessments, especially where there is a heterogeneous mix of remote assessors with differing levels of experience. Consideration should be given to repeating this study after further training of non-ID assessors, so as to compare results and determine whether improvements have been achieved. The findings of this study may be applicable to countries where healthcare facilities lack onsite ID experts, for example because they are located outside major urban centres, as is the case for a significant proportion of hospitals in Australia.

      Authorship statement

      MOC carried out the data collection and interim data analysis and drafted the manuscript. TS assisted with data analyses with specific input into use of correlational statistics. CC and RJ assisted with recruiting remote assessors. MOC, KT, CM, DL and KB conceived of the study, and participated in its design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript.

      Conflicts of interest

      All authors declare that they have no competing interests.

      Funding

      This work was supported by a National Health and Medical Research Council Project Partnership grant (APP 1013746). MOC received a National Health and Medical Research Council postgraduate scholarship (APP1055713).

      Provenance and peer review

      Not commissioned; externally peer reviewed.

      Acknowledgements

      The authors would like to acknowledge the fifteen assessors who remotely assessed the allocated antimicrobial prescriptions.

      Appendix A. Supplementary data

      The following are the supplementary data related to this article:

      References

        1. Septimus E.J., Kuper K.M. Clinical challenges in addressing resistance to antimicrobial drugs in the twenty-first century. Clin Pharmacol Ther. 2009; 86: 336-339.
        2. Owens R.C. Antimicrobial stewardship: concepts and strategies in the 21st century. Diagn Microbiol Infect Dis. 2008; 61: 110-128.
        3. Chassin M.R., Loeb J.M., Schmaltz S.P., et al. Accountability measures — using measurement to promote quality improvement. N Engl J Med. 2010; 363: 683-688.
        4. Evans R.S., Pestotnik S.L., Classen D.C., et al. A computer-assisted management program for antibiotics and other antiinfective agents. N Engl J Med. 1998; 338: 232-238.
        5. Thursky K., Buising K., Bak N., et al. Reduction of broad-spectrum antibiotic use with computerized decision support in an intensive care unit. Int J Qual Health Care. 2006; 18: 224-231.
        6. Willemsen I., Groenhuijzen A., Bogaers D., et al. Appropriateness of antimicrobial therapy measured by repeated prevalence surveys. Antimicrob Agents Chemother. 2007; 51: 864-867.
        7. Schouten J.A., Hulscher M.E.J.L., Wollersheim H., et al. Quality of antibiotic use for lower respiratory tract infections at hospitals: (How) can we measure it?. Clin Infect Dis. 2005; 41: 450-460.
        8. Volger B., Ross M., Brunetti H., Baumgartner D., et al. Compliance with a restricted antimicrobial agent policy in a university hospital. Am J Hosp Pharm. 1988; 45: 1540-1544.
        9. Gyssens I.C., Van den Broek P., Kullberg B., et al. Optimizing antimicrobial therapy. A method for antimicrobial drug use evaluation. J Antimicrob Chemother. 1992; 30: 724-727.
        10. Fraser G.L., Stogsdill P., Dickens J.D., et al. Antibiotic optimization. An evaluation of patient safety and economic outcomes. Arch Intern Med. 1997; 157: 1689-1694.
        11. Solomon D.H., Van Houten L., Glynn R.J., et al. Academic detailing to improve use of broad-spectrum antibiotics at an academic medical center. Arch Intern Med. 2001; 161: 1897-1902.
        12. Dellit T.H., Owens R.C., McGowan J.E., et al. Infectious Diseases Society of America and the Society for Healthcare Epidemiology of America guidelines for developing an institutional program to enhance antimicrobial stewardship. Clin Infect Dis. 2007; 44: 159-177.
        13. Duguid M., Cruickshank M. Antimicrobial Stewardship in Australian Hospitals. Sydney: ACSQHC; 2011 [accessed 06.04.15].
        14. Australian Commission on Safety and Quality in Health Care. Antimicrobial prescribing practice in Australia: results of the 2013 National Antimicrobial Prescribing Survey. Sydney: ACSQHC; 2014 [accessed 31.03.15].
        15. Antibiotic Expert Group. Therapeutic guidelines: antibiotic. Version 14. Melbourne: Therapeutic Guidelines Limited; 2010.
        16. Landis J., Koch G. The measurement of observer agreement for categorical data. Biometrics. 1977; 33: 159-174.
        17. Australian Institute of Health and Welfare. Australia's Hospitals 2011–12 at a glance. Canberra: AIHW; 2013 [accessed 04.04.15].
        18. Gallagher P., Ryan C., Byrne S., et al. STOPP (Screening tool of older person's prescriptions) and START (Screening tool to alert doctors to right treatment). Consensus validation. Int J Clin Pharmacol Ther. 2008; 46: 72-83.
        19. Hanlon J., Schmader K., Samsa G., et al. A method for assessing drug therapy appropriateness. J Clin Epidemiol. 1992; 45: 1045-1051.
        20. Mol P., Gans R., Panday P., et al. Reliability of assessment of adherence to an antimicrobial treatment guideline. J Hosp Infect. 2005; 60: 321-328.
        21. DePestel D.D., Eiland E.H., Lusardi K., et al. Assessing appropriateness of antimicrobial therapy: in the eye of the interpreter. Clin Infect Dis. 2014; 59: S154-S161.
        22. Centers for Disease Control and Prevention. Get smart for healthcare. 2014 [accessed 24.03.15].
        23. Schwartz D.N., Wu U.S., Lyles R.D., et al. Lost in translation? Reliability of assessing inpatient antimicrobial appropriateness with use of computerized case vignettes. Infect Control Hosp Epidemiol. 2009; 30: 163-171.