August 2008 - SUPPORT Summary of a systematic review

Does providing healthcare professionals with data about their performance improve their practice?

Audit and feedback is commonly used as a strategy to improve professional practice. It appears logical that healthcare professionals would be prompted to modify their practice if given feedback that their clinical practice was inconsistent with that of their peers or accepted guidelines.

 

Key messages

  • Audit and feedback can be effective in improving professional practice. The effects are generally small to moderate, but may be worthwhile.
  • The evidence does not support mandatory use of audit and feedback as an intervention to change practice.
  • The relative effects of audit and feedback are likely to be larger when baseline compliance with recommended practice is low and when feedback is provided more intensively.
  • Decisions about if and how to use audit and feedback to improve professional practice must be guided by pragmatic factors and local circumstances, including whether:

- The known or anticipated baseline compliance with guidelines is low;
- Conducting an audit is feasible and the costs of collecting data are low;
- Routinely collected data are reliable and could be used for the audit;
- Small to moderate improvements would be worthwhile.

Background

Audit and feedback, defined as "any summary of clinical performance of health care over a specified period of time", can be given in a written, electronic or verbal format. The summary may also include recommendations for clinical action.

It appears logical that healthcare professionals would be prompted to modify their practice if given feedback that their clinical practice was inconsistent with that of their peers or accepted guidelines. Yet, audit and feedback has not consistently been found to be effective. Previous reviews have suggested that the provision of information alone results in little, if any, change in practice.



About the systematic review underlying this summary

Review Objectives: To assess the effects of audit and feedback on the practice of healthcare professionals and patient outcomes.
What the review authors searched for and what they found:

Interventions
Searched for: Audit and feedback, defined as any summary of clinical performance of health care over a specified period of time, with or without other interventions, compared to no intervention or other interventions.
Found: 118 studies were included. The interventions used were highly heterogeneous with respect to their content, format, timing and source. Targeted behaviours were preventive care (21 trials), test ordering (14), prescribing (20), length of stay in hospitals (1), and general management of a variety of problems.

Participants
Searched for: Healthcare professionals responsible for patient care.
Found: In most trials the healthcare professionals were physicians. One study involved dentists, three studies nurses, two studies pharmacists and 14 studies mixed providers.

Settings
Searched for: Healthcare settings.
Found: The studies were from the USA (58), Canada (9), Western Europe (30), Australia (9), Thailand (2), Uganda (1) and Laos (1).

Outcomes
Searched for: Objectively measured provider performance or healthcare outcomes.
Found: There was large variation in outcome measures, and many studies reported multiple outcomes.

Date of most recent search: January 2004

Limitations: This is a good quality systematic review with only minor limitations.

Jamtvedt G et al. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews 2006, Issue 2. See in Cochrane Library

Summary of findings

The review included 118 studies. Most studies were done in North America (67) and Western Europe (30), and only four studies were conducted in low- and middle-income countries (two in Thailand and one each in Uganda and Laos).

The interventions used were very different with respect to their content, format, timing and source. In 50 studies one or more groups received a multifaceted intervention that included audit and feedback as one component.

Many studies reported multiple outcomes. Most reported measures of professional practice, such as prescribing or use of laboratory tests. Most of the studies were of moderate quality.

 

1) Any intervention in which audit and feedback is a component compared to no intervention

A total of 88 comparisons from 72 studies with more than 13 500 health professionals were included in the primary analysis. There were 64 comparisons of dichotomous outcomes from 49 trials, and 24 comparisons of continuous outcomes from 23 trials. There was important heterogeneity among the results across studies.

  • Interventions that include audit and feedback as a component can improve compliance with desired practice compared to no intervention.
  • Low baseline compliance and high intensity of audit and feedback are factors that seem to increase the effect of audit and feedback.

Any intervention including audit and feedback compared to no intervention

Patient or population: Healthcare professionals
Settings: Different healthcare settings
Intervention: Highly heterogeneous interventions in which audit and feedback was included
Comparison: No intervention aimed at improving practice

Outcome: Compliance with desired practice
Illustrative comparative risks:
- With an assumed risk of 40% without audit and feedback, the corresponding risk with audit and feedback is 54%*
- With an assumed risk of 70% without audit and feedback, the corresponding risk with audit and feedback is 83%*
Relative effect (95% CI): RR 1.08 (0.99 to 1.30)
Number of participants (studies): Over 7000 (49 studies)†
Quality of the evidence (GRADE):

†Studies reporting dichotomous outcomes
CI: Confidence interval; RR: Risk ratio; GRADE: GRADE Working Group grades of evidence (see above and last page)
*Corresponding risk estimates are based on a model with an estimated coefficient of -0.005 (p=0.05), indicating smaller relative effects with increasing baseline compliance.

 

2) Audit and feedback alone compared to no intervention

A total of 51 comparisons from 44 trials, reporting 35 dichotomous and 17 continuous outcomes, compared audit and feedback alone to no intervention.

  • Audit and feedback alone can improve compliance with desired practice, compared to no intervention.

Audit and feedback alone compared to no intervention

Patient or population: Healthcare professionals
Settings: Different healthcare settings
Intervention: Audit and feedback alone
Comparison: No intervention aimed at improving practice

Outcome: Compliance with desired practice
Absolute effect: median adjusted increase in compliance with desired practice 4%* (interquartile range -0.8% to 9%)
Relative effect: median adjusted RR 1.07 (interquartile range 0.98 to 1.18)
Number of participants (studies): Over 8000 (44 studies)†
Quality of the evidence (GRADE):

†35 comparisons in the 44 studies reported dichotomous outcomes
CI: Confidence interval; RR: Risk ratio; GRADE: GRADE Working Group grades of evidence (see above and last page)
*Median (and interquartile range) for risk differences from the 35 comparisons with dichotomous outcomes, adjusted for baseline differences in compliance.
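As a rough illustration of how the relative and absolute effects in this table relate to each other (a hypothetical worked example, not a result from the review): if baseline compliance with a desired practice were 50%, a risk ratio of 1.07 would correspond to compliance of

\[ 0.50 \times 1.07 = 0.535, \]

that is, about 53.5% with audit and feedback, an absolute increase of roughly 3.5 percentage points. This is broadly in line with the 4% median adjusted increase reported above, although the median absolute and relative effects are calculated separately across comparisons and need not correspond exactly.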

 

3) Audit and feedback with educational meetings compared to no intervention

A total of 24 comparisons from 13 trials compared audit and feedback with educational meetings to no intervention.

  • Audit and feedback with educational meetings can improve compliance with desired practice compared to no intervention.

Audit and feedback with educational meetings compared to no intervention

Patient or population: Healthcare professionals
Settings: Different healthcare settings
Intervention: Audit and feedback with educational meetings
Comparison: No intervention aimed at improving practice

Outcome: Compliance with desired practice
Absolute effect: median adjusted increase in compliance with desired practice 1.5%* (interquartile range 1.0% to 5.5%)
Relative effect: median adjusted RR 1.06 (interquartile range 1.03 to 1.09)
Number of participants (studies): 13 studies†
Quality of the evidence (GRADE):

†5 of the comparisons in the 13 studies reported dichotomous outcomes
CI: Confidence interval; RR: Risk ratio; GRADE: GRADE Working Group grades of evidence (see above and last page)
*Median (and interquartile range) for risk differences from the comparisons with dichotomous outcomes, adjusted for baseline differences in compliance.

 

4) Audit and feedback as part of a multifaceted intervention compared to no intervention

Fifty comparisons from 40 trials compared audit and feedback as part of a multifaceted intervention to no intervention.

  • Audit and feedback as part of a multifaceted intervention can improve compliance with desired practice compared to no intervention.

Audit and feedback as part of a multifaceted intervention compared to no intervention

Patient or population: Healthcare professionals
Settings: Different healthcare settings
Intervention: Audit and feedback as part of a multifaceted intervention
Comparison: No intervention aimed at improving practice

Outcome: Compliance with desired practice
Absolute effect: median adjusted increase in compliance with desired practice 24% (interquartile range 5% to 49%)
Relative effect: median adjusted RR 1.10 (interquartile range 1.03 to 1.36)
Number of participants (studies): 40 studies†
Quality of the evidence (GRADE):

†41 comparisons in the 40 studies reported dichotomous outcomes
CI: Confidence interval; RR: Risk ratio; GRADE: GRADE Working Group grades of evidence (see above and last page)

 

5) Short-term effects of audit and feedback compared to longer-term effects after feedback stops

This comparison included eight trials with 11 comparisons. The follow-up period varied from three weeks to 14 months.

  • Results are mixed regarding short-term effects of audit and feedback compared with longer-term effects after feedback stops.

 

6) Audit and feedback combined with complementary interventions compared to audit and feedback alone

Twenty-one trials with 25 comparisons were included. In all trials a multifaceted intervention with audit and feedback was compared to audit and feedback alone. Reminders, economic incentives, outreach visits, opinion leaders, patient education material and quality improvement tools were among the complementary interventions that were used.

  • Some studies found an effect of adding other interventions to audit and feedback, but most did not.

 

7) Audit and feedback compared to other interventions

Eight comparisons from seven studies were included. Audit and feedback was compared to reminders, patient education, local opinion leaders, economic incentives, self-study and practice based education.

  • Reminders and use of local opinion leaders may be more effective than audit and feedback.
  • Audit and feedback reduced test ordering more than economic incentives (one study).
  • Studies comparing audit and feedback with patient education, self-study and practice based education found little or no difference in effects.

 

8) All comparisons of different ways of doing audit and feedback

Seven studies were included. Different formats of audit and feedback that were tested included content (with or without peer comparisons or achievable benchmarks), source (feedback or outreach to physicians by peers versus non-physicians) and recipient (group feedback alone versus group plus individual feedback).

  • No firm conclusions can be drawn regarding how best to do audit and feedback.

Relevance of the review for low-income countries

Findings and interpretation*
APPLICABILITY
  • The 118 randomized trials reviewed covered an extensive range of interventions and settings, but only four of the studies were from low- and middle-income countries. Generally, there were small to moderate improvements in compliance with guidelines. It is not possible to determine when or why audit and feedback was more effective.
  • Decisions about if and how to use audit and feedback to improve professional practice must be guided by pragmatic factors and local circumstances, including whether:
- The known or anticipated baseline compliance with guidelines is low;
- Conducting an audit is feasible and the costs of collecting data are low;
- Routinely collected data are reliable and could be used for the audit;
- Small to moderate improvements would be worthwhile.
EQUITY
  • Overall, the included studies provided little data regarding differential effects of the interventions for disadvantaged populations.
  • Resources needed for audit and feedback may be less easily available in disadvantaged populations.
ECONOMIC CONSIDERATIONS
  • The findings summarised here are based on randomised trials in which the levels of organization and support were potentially higher than those available outside of research settings. Few trials reported the cost of the interventions.
  • The cost of audit and feedback is likely to be highly variable and must be estimated based on specific local conditions, including the availability of reliable routinely collected data and personnel costs.
  • Providing adequate support to programmes for audit and feedback is likely to be vital to ensure effectiveness when scaling up.
MONITORING & EVALUATION
  • There is little evidence of the effects or cost-effectiveness of audit and feedback in resource-poor settings.
  • Scarcity of health professionals, potential problems with staff morale and lack of motivation to perform activities other than direct patient care may limit the feasibility and potential for audit and feedback to improve professional practice.
  • The impact of audit and feedback, with or without additional interventions, should routinely be monitored by auditing practice after the intervention.
  • The effects of audit and feedback or alternative interventions to improve professional practice should be evaluated before they are taken to scale in resource-poor settings.

*Judgements made by the authors of this summary, not necessarily those of the review authors, based on the findings of the review and consultation with researchers and policymakers in low- and middle-income countries. For additional details about how these judgements were made see: http://www.support-collaboration.org/summaries/methods.htm

Additional information

Related literature

Jamtvedt G, Young JM, Kristoffersen DT, O'Brien MA, Oxman AD. Does telling people what they have been doing change what they do? A systematic review of the effects of audit and feedback. Qual Saf Health Care 2006; 15: 433-6.

 

Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, Bero L, Grilli R, Harvey E, Oxman AD, O'Brien M. Changing provider behavior: An overview of systematic reviews of interventions. Medical Care 2001; 39: Supplement 2, II-2 - II-45.

 

Getting evidence into practice. Effective Health Care 1999; 5: (1). http://www.york.ac.uk/inst/crd/pdf/ehc51.pdf

 

Grimshaw JM, Thomas RE, MacLennan G, Fraser C, Ramsay C, Vale L et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess 2004; 8: (6). http://www.hta.nhs.uk/fullmono/mon806.pdf

 

Pommerenke FA, Dietrich A. Improving and maintaining preventive services. Part 1: Applying the patient path model. Journal of Family Practice 1992; 34: 86-91.

 

NorthStar is a tool that provides a range of information, checklists, examples and resources, based on current research, on how best to design and evaluate quality improvement interventions. http://www.rebeqi.org/?pageID=36&ItemID=18

 

This summary was prepared by

Signe Flottorp, Norwegian Knowledge Centre for the Health Services, Oslo, Norway

 

Conflict of interest

None declared. For details, see: Conflicts of interest

 

Acknowledgements

This summary has been peer reviewed by: Gro Jamtvedt, Norway; Elizeus Rutebemberwa, Uganda; Godfrey Woelk, Zimbabwe; Blanca Peñaloza, Chile.

 

This summary should be cited as

Flottorp S. Does providing healthcare professionals with data about their performance improve their practice? A SUPPORT Summary of a systematic review. August 2008.


