Audit and Feedback Meeting
Day 1 - December 6th, 2012


Large group presentations: Results from Cochrane review of AF: what do we know and where do we go from here?
Noah’s presentation “Using AF to improve quality of care”
  • Findings from the 2006 Cochrane review
  • 2012 update
    • Meta-regression
    • Format, source, frequency, recipient, baseline performance, risk of bias
    • Plus exploratory analyses
    • 140 trials – AF as a primary component – most were multifaceted
    • Results:
      • Similar to the previous results – AF seems to work
      • Both verbal + written seems to be more effective
      • Given by a senior colleague more effective than from an employer representative
      • Moderately frequent vs. only once
      • Goal plus action plan
      • More effective when the feedback attempted to decrease the behaviour
      • AF more effective for prescribing than other behaviours – why? Need to list exactly what is targeted in context – both of situation (material and social) and of other behaviours.
      • More effective when baseline performance is lower
      • What you were targeting seems to be important – e.g., getting doctors to change their prescribing behaviour – needs more work.
Comments from the group members:
  • Should focus on the middle bullets first – as this is what many AF researchers tick off first.
  • How many behaviours can we look at once?
  • Important to look at the context
  • Try to think about why each may or may not be a modifier
  • Personal vs. interpersonal characteristics of the feedback
  • Issue of sustainability


Susan Michie’s Presentation: “Applying theory to designing AF interventions and evaluations in head to head trials”
Why Theory?
  • More effective, provides a framework to facilitate (accumulation of evidence, communication across research groups), identifies mechanisms of action

MRC Guidance for developing and evaluating complex interventions (Craig et al 2009, BMJ)

What Theories?
  • MRC guidance silent on this
  • NICE’s Behaviour Change evidence review (2008)
    • Identified evidence-based principles of behaviour change (Abraham, Kelly, West, Michie, 2008)
    • No guidance on which theories to use
    • Starting point for selecting theory
      • Understand intervention content
      • Need a method for specifying content (BCTs)

Types of A&F
  • Intensive (individual recipients) AND (verbal format) OR (a supervisor or senior colleague as the source) AND (moderate or prolonged feedback)
  • Non-Intensive (group feedback) NOT (from supervisor) OR (individual feedback) AND (written) AND (containing info about cost or numbers of tests without personal incentives)
  • Moderate (any other combination of characteristics than described in intensive or non-intensive)
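A minimal sketch of the three categories above, assuming one reading of the ambiguous AND/OR grouping in the notes; all field names and values are illustrative, not taken from the review itself:

```python
# Hypothetical encoding of the intensity categories; the AND/OR grouping in the
# notes is ambiguous, so this is one possible reading, not the review's rule.

def classify_intensity(fb: dict) -> str:
    """Return 'intensive', 'non-intensive', or 'moderate' for an A&F intervention."""
    intensive = (
        fb["recipient"] == "individual"
        and (fb["format"] == "verbal"
             or fb["source"] in ("supervisor", "senior colleague"))
        and fb["duration"] in ("moderate", "prolonged")
    )
    non_intensive = (
        (fb["recipient"] == "group" and fb["source"] != "supervisor")
        or (fb["recipient"] == "individual"
            and fb["format"] == "written"
            and fb["content"] in ("cost", "test counts")
            and not fb["personal_incentives"])
    )
    if intensive:
        return "intensive"
    if non_intensive:
        return "non-intensive"
    return "moderate"  # any other combination of characteristics

example = {"recipient": "individual", "format": "verbal",
           "source": "senior colleague", "duration": "moderate",
           "content": "behaviour", "personal_incentives": False}
print(classify_intensity(example))  # -> intensive
```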

  • Problems of categorising by intensity (no theoretical rationale, few recommendations for practice offered, mixture of modes of delivery and content)
  • A theory-based approach: Specify content as BCTs to allow theoretically based categorisation & analysis; generate theory-based hypotheses concerning effectiveness
  • Specifying content: 13 papers from AF review; 28 distinct BCTs grouped into goal/standard setting, feedback & action planning
  • What kind of theory would help? Self-regulation theory (Carver & Scheier, 1982)
  • Feedback more effective when goal/target is set
  • Most effective where goal/target & action plan

Head to head trials: On what basis does one select intervention components?
  • Need to have a theory about how AF is working
  • What functions is AF playing?
    • Structure for noticing and reducing discrepancy (target, fb, action plan)
    • Cue to action
    • Reinforcement
    • Social support (how it’s given)
    • Others?
Ensure all behaviour change techniques are identified (within and beyond AF; both intervention and control group)

Summary:
  • Detailed description of intervention is a starting point for identifying mechanisms of action (i.e., theory)
  • Might need to draw on more than one formal theory to generate hypotheses about mechanisms.
  • These hypotheses should guide intervention design, optimisation, evidence synthesis and trial design
Group Discussion:
  • Anne Sales: Does feedback have a role in promoting motivation? Susan Michie: yes, absolutely
  • Feedback more than once is more effective – how should we apply the theory in terms of multiple cycles of feedback? Susan: you want the timing of the feedback to be unpredictable.
  • Survey response rates are low – survey physicians to see if they are reducing discrepancy first
  • Theories of feedback; theories of goals – should we be going for a theory of AF? Susan: yes – that’s what we are trying to do – a program theory. Can we think about generating a theory of AF from more general theories?




Heather Colquhoun: “A SR of AF interventions: the good, the bad, and the ugly”
  • To whom: 51% given to individuals; rest group (18%), both (16%), unclear (14%); 92% FB given to target person
  • What was given? 79% given about behaviour; 14% on outcomes; 32% other (cost, accuracy of diagnosis, risk data, education, barriers to change); 58% on an individual’s performance; 64% on group performance, mostly aggregate (81%)
  • *Comment: Why is cost not part of behaviour, given that it is actionable? Susan – these are outcomes of behaviour – feedback on behaviour is the most effective.
  • Another participant comment: It’s a construct that can be pulled out. Like what we did.
  • Comment: How much were classified as education – correct solution information?
  • 36% was graphical; 14% was unclear.
  • 89% addressed the behaviour to be changed; 6% no; 5% unclear.
  • Clear comparison in the fb? 74% yes. (mostly others previous performance 68% – very few multiple comparisons)
  • How delivered? Face to face (44%)
  • How much? Total fb: 24% unclear; 76% clear (33/107 given only once; only 27 given more than 4x)
  • Comment: self activated/real time – have it in front of you? Were there any of these?
  • Comment: giving it the same way each time (e.g. 3 X the same way…would there be a difference? )
  • Comment: Anne Sales – new information to people – highly variable; background issue that is important.
  • Lag time between collection of the fb and the provision of the fb not clearly stated (37%); where stated, mostly months.
  • Rationale for AF in the trial? 36% empirical; 28% intuitive (AF has never been used, but thought it would be a good idea); 26% no rationale (AF just appears in the intervention); 9% used theory to design the intervention.
  • A running list was kept of the articles that included the actual AF reports – only 7 did this.
  • Summary – some consistency, but also wide variation, reporting is poor, rationale for the intervention is lacking



AFTER LUNCH....
Large Group Discussion – Key Thoughts to Consider for the Meeting:
  • Susan Michie: What might be in a good intervention (includes theory)? The criteria that you would use to evaluate? Nature of the study design for the evaluation?
  • Sylvia Hysong: Better idea of what the priority criteria are (e.g., the 5 things you would want to test because....)
  • Cumulative meta-analysis – shouldn’t discount 140 trials – clearly a lot of theory has gone into it. Don’t want to get into “paralysis by analysis” by spending a lot of time working through this and not doing new trials.
  • Merrick: Not confident about the prospects of a detailed predictive theory. But explicit ideas of how something will work seem like a good idea.
  • Not really sure what the “best bet” looks like
  • Jeremy: Goal of the meeting: same mistakes have been made over the years. If we can start to think about some of the basics of what we should do – this would be helpful – so how to take it forward in the field. This is a “call to action” – to be more effective, rather than just repeating trials.
  • How do we best do it is the real question
  • Jamie Brehaut: the cumulative meta-analysis tells us that we are not getting better – most likely because we don’t have any theory. Very little theory went into the development of these trials – need to develop more thoughtful AF.
  • Susan Michie: 3 research questions that we talked about: 1) Can we design a better AF intervention as a whole? 2) What is the relative contribution of components within an AF intervention? 3) What are the mechanisms by which the effects are happening?
  • Anne Sales: We don’t want to necessarily come up with a top 5 today, but to leave tomorrow with some possible studies?
  • Even if you don’t think you have theory – you have theory! Need a program theory – some things you deprioritize and some things become more vivid. Can trade off one program theory for another.
  • Plan: To suggest where things could move forward.



Group exercise 1: Susan’s group presentation
  • Goal standard audit
    • Visibility of required behaviour
    • Modifiability of behaviour (co-morbidity of pts, extent to which it is automated, habit strength)
    • Engagement with setting the goal
    • Number of goals
    • Quality of goal
    • SMART goals (specific, measurable, achievable, relevant, time-bound).
Other group’s additions: Acceptability (fits into above list), actionability, strength of evidence in relation to the guideline (fit with SMART category)
  • Feedback:
    • Competence of deliverer (e.g., how to give fb)
    • Seriousness of outcome
    • Profile/importance of outcome in society
    • Salience/vividness/emotional connotations of fb
    • Recipient seeks fb rather than passively receiving it
    • Type/aggregation of behaviours
    • Multiple behaviours
    • Framing (positive vs. negative/gain vs. loss)
    • Complexity of target (different types, draw on theory)
    • Credibility of source
    • Goal orientation (motivation, mastery vs. learning) of recipients
    • Intrusiveness of intervention vs. part of daily routine
    • Visibility of the feedback
    • Timeliness
    • Interpretation, time to review
    • Opportunity to reflect and discuss (e.g. on causes of behaviour, instances of successful performance)
    • Support from management/leaders for feedback given
    • Graded feedback (starting positively)
    • Multimodality
    • Format re new media
    • Linked to key messages
    • Feedback to teams
    • Longitudinal feedback - to tell a story

Other group’s additions:
  • How feedback is collected (e.g., self-reported), counts vs. priorities (% achieved or appropriateness),
  • Jeremy: cognitive understanding of the AF (comprehensibility of the feedback)
  • Timing, frequency – hard to tease out
  • Context – picking apart things piece by piece isn’t going to work; interventions need to consider if they are patient level, organizational level etc, what are the elements that are modifiable?

  • Action Plan – the group had little to say on this.
    • Involvement with creating action plans
    • Nature of action plans (e.g., “if–then” plans)
    • Other groups’ additions:
      • Availability of tools for action
      • Who controls the action? (Anne Sales)
      • Who’s developing the action plan? (Jeremy)

  • Characteristics of the recipients
    • Past experience of AF as more or less useful
    • Alignment with goals, values etc of recipient re. receiving feedback and changing behaviour
    • Knowledge of the goals, and why they are there
    • Allegiance to goals
    • Perceived social comparison with groups/individuals/social norm
    • Trust of the organization
    • Autonomy of recipients
    • Intuitive vs. reflective orientation
    • Learning style
    • Authority of recipient
    • Degree to which recipients are organized/self-regulating
    • Education level/reflectiveness
    • Responsibility of practice
    • Union membership
    • Other group’s additions:
      • Was the recipient a volunteer?
      • Personality traits of the recipients
      • Is feedback given to individual or a group?
      • Time that AF is being delivered
      • Considerations of equity (fairness of comparison)
      • Motivation for change
      • Perceived consequences of achieving the goal
      • Self-efficacy (confidence in being able to do it)

  • General/External aspects
    • Combined with other interventions/elements (how to define these?)
      • Financial, or other consequences of AF
      • Whether AF is evaluated as a process
    • Automation of practice
    • Preparation/anticipation for intervention in advance (e.g., present audit details, and that they will be assessed)
    • Time spent on intervention
    • Engagement in designing AF
    • Alignment with goals/values, etc. of recipient
    • Comment on automation – control behaviour through system design – via technologies, financial incentives, public reporting – is it different from AF?
    • Patient activation (patient feedback for a physician)
    • Action planning as a co-intervention
    • Prevalence of the condition being audited (e.g., if it’s rare – doesn’t matter how much you get) ---AF may be much better suited for common conditions, as opposed to rare/expensive interventions. Note: Heather added that based on the extraction – only 3 instances where this was reported in the literature.
    • Mechanism of Action:
      • Relationship between components and outcome (e.g., curvilinear)
      • Is this point really a mechanism? (Anne) Association between the type of AF and the type of outcome. Basically, more is not always better – not necessarily linear.
    • Other group’s additions:
        • R. Foy’s comment: We’ve been discussing elements of an intervention and other things that are a part of context. It’s hard to generalize from one study to another if we don’t describe these in our studies. What’s the risk of repeating this work? There are a number of existing taxonomies describing context.
        • *Susan: what is AF most relevant for in terms of type of behaviour, situation etc., nuts and bolts of the actual intervention, context in which it happens, mechanisms of action...


FOLLOWING SMALL GROUP SESSIONS –
RESEARCH AREAS THAT SHOULD BE ADDRESSED TO MOVE THE AF FIELD FORWARD
  • Jill Francis’s Group
    • Discussed the need to have a publication on the design principles of AF
    • Need for reporting guidelines – but then steered away from this idea – they don’t always work that well
    • Discussion about how clear those principles could be (e.g., could we say “don’t do AF once” – do we have the evidence to support that?)
    • Leaders of AF – many areas and pieces that can be manoeuvred
    • Need to define AF in terms of its prototypical and discretionary elements – what needs to be there in order to be minimally defined?
    • Need for a common language
    • Need to look at the variation in effects of AF trials that only used 1 time AF (variation of effects or case studies)
    • Movement towards enrolling everyone in the study – every study for the control arm would have the AF---worked effectively in child health.

  • Anne Sales Group
    • Taxonomy would be great; so start with a Delphi process?
    • Key questions (“meta questions”):
      1. When is AF the preferred intervention, and what is that conditioned on? Likely the characteristics of the topic/target group.
      2. What is it that needs to be tailored about an AF intervention that is sensitive to context/topic/participants?
      3. Meta-synthesis of non-RCT evidence – there is another literature that doesn’t get included in the Cochrane reviews

  • Merrick’s group
    • 5 research questions:
      1. What is the impact of engaging the recipient in (design, implementation) vs. not engaging them?
      2. Impact of combining AF with one of the following:
        • Incentives/penalties (financial, CME, licensing)
        • Tools/practice aids (clinical decision tool)
        • Educational or academic detailing, group learning, supplementary
        • Practice redesign (coaches, facilitation, mentorship)
      3. Presence of face-to-face feedback vs. non-personal feedback (degree of skill of the person providing the feedback)
      4. Take the top quintile of AF studies and replicate their interventions
      5. The 5 most important aspects of AF to study are:
        • Frequency; individual vs. group; benchmarks; in-person vs. electronic delivery; number of targets
        • Developing strategies for replicating and implementing successful interventions in other settings, looking at sustainability
        • What is the yield of AF focussed on low-performing vs. average-performing practitioners (where the target is important, of course!)
        • Is there a different impact if feedback is given to different professions providing the same kind of service (e.g., physician vs. midwife)?
        • Does it have a different effect if given to office staff + clinician vs. clinician only?

  • Susan’s group – types of research questions
    1. What tasks are most amenable to effective AF?
    2. How to optimise:
      • Don’t need placebo, but have standard feedback as the control
      • Individualised data, comparison data, feedback of recent performance, include written/visual feedback
      • Is audit on its own an intervention – if people are aware that their performance is being measured without feedback?
      • Are the Cochrane findings a starting point from which to generate more effective interventions?
    3. Relative effectiveness of, and interactions between, components
    4. What components are most promising? Hypotheses:
      1. More time to reflect on the feedback
      2. Level of aggregation of feedback data should be aligned to the performance to be changed
      3. Tie AF targets to team-based or organisational goals
      4. Engage recipients in iteratively designing and implementing AF
    5. What are the mechanisms of action of AF interventions? E.g., time may work, identify goals for improvement, plans to improve performance, greater depth of processing
    6. How to standardise language for describing AF interventions?
    7. What paradigm are we in regarding user-centred design; adaptive interventions – methodological innovation + qualitative studies of recipients to look at user experiences



Jeremy’s summary comments from Day 1:
  • AF seems to be effective, but it’s a disorganized field.
  • Want to develop a position/guidance paper to improve the value of the science.
  • Agreement about foundational work that needs to get done:
    • Need to have an agreement about language
    • Need better reporting of existing studies
  • Broad meta questions (e.g., when it might be useful).
  • Sometimes AF can be done, but it might not be the most optimal thing to do
  • How can we optimise the effects of AF?
    • Can think about it in terms of intrinsic features (how can we do it better), absence or presence of co-interventions, comparing it to other interventions, and extrinsic features – how does the context modify AF?
    • Sustainability (research to service world)
    • Understanding the mechanism better
  • Head to head trials – engaging healthcare systems/integrated approach
  • Can we provide a menu of priorities that would be useful?
General comments from the group:
  • Susan – still quite a lot of overlap in the ideas. Reflect on what the “buckets” are – members can go to whichever group fits their expertise/interests.
  • Merrick – reducing the list is needed – we need a fairly formal approach to doing this. Which attributes/co-interventions go well with AF is not very different from when we should use AF.
  • Susan – methodologies – thinking about engineering design principles




Day 2: December 7th, 2012
Developing an active, international AF research collaboration


Review of Day 1
  • Intrinsic and extrinsic factors (topic, recipient) and mechanisms of action
  • Position paper – what is the state of the science/best practice in general
    • Key questions to ask for people developing AF interventions/elements to include
    • Key questions to move the field forward
Real life instances where AF is occurring – Laboratories
Robbie Foy‘s Presentation
  • Blood Transfusions study (transfusions costly/given to people who don’t need them)
  • 4 streams/stages (developing/piloting/code/enhanced fb)
  • Clustered randomised trial on user fb incorporating economic modeling and process evaluation
  • Enhanced fb compared with usual fb
  • Enhanced content (approximating goal setting?) – Does the fb identify the behaviours of interest – is it related back to a goal?
  • Enhanced delivery (approximating action planning?) – Who gets the fb; do they make an action plan; does the team respond to/use the fb?
  • 2x2 factorial cluster randomised trial (the four arms are sketched after this list); targeted behaviour: red cells and platelets
  • 152 hospitals
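A minimal sketch of the four arms implied by crossing the two enhanced-vs-usual factors above; the factor labels paraphrase these notes and are not the trial’s actual arm names:

```python
# Enumerate the arms of a 2x2 factorial design: enhanced vs. usual feedback
# content, crossed with enhanced vs. usual feedback delivery.
from itertools import product

content = ["usual content", "enhanced content"]     # ~ relating fb back to a goal
delivery = ["usual delivery", "enhanced delivery"]  # ~ action planning / who gets fb

for arm, (c, d) in enumerate(product(content, delivery), start=1):
    print(f"Arm {arm}: {c} + {d}")
# Each of the 152 hospitals would be cluster-randomised to one of these four arms.
```
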
Steve Ornstein’s Presentation
  • Practice Partner Research Network (PPRNet) – over 200 practices in the US
  • Practice based learning and research organization to improve health care in its member practices and elsewhere in US
  • Aim to turn clinical data (EMR) into actionable info; empirically test theoretically sound interventions using the EMRs; disseminate successful interventions
  • In all PPRNET studies - they use AF as the control condition.
  • Do summary measures of care; number of measures that practices are above the PPR benchmark
  • This info isn’t really actionable – what about the pts that need care? They have a patient level report for this.
    • A list of all measures that they report on – # of patients, number that meet the criteria, and number that don’t, each pt level report has context sensitive information
Sue Wells’s Presentation – Audit and Feedback NZ style
  • Primary care organisations
  • Each person gets a unique NHI (linked with age, gender, ethnicity, NZDep score)
  • 2002 – Cardiovascular risk assessment – guidelines put into computerized decision support (~ 400 practices)
  • PREDICT – at first ~ 3% had a cardiovascular risk assessment; now about 50%
  • Report provided to GPs and nurses – could see practice/GP level data – could see which pts they needed to see. Found that it wasn’t helpful – denominator = eligible population assessed
  • Widespread implementation, AF query tools for enrolled population, for multiple conditions, variable PHO use – benchmarking, CME, MOH reporting
  • Anonymised linkage of CVD risk profiles to outcomes; NHI is linked to national and regional data
  • ProCare radiology point of care commissioning – gives them decision support about the appropriateness of their order, cost info, waiting time for routine/urgent, budget, balance.
    • Before: overspending budget by 30%; after: remained within budget every year for the past 4 years
    • Before: 25% of ultrasound referrals clinically inappropriate; currently 0.1% inappropriate
    • Before: waiting time for ultrasound 4 weeks to 6 months; now waiting times are less than 5 days
David Price’s Presentation
  • Kaiser (consists of medical groups & health plan; 9 different States in the US)
  • Regional health plan that partners with independent Permanente medical group (Kaiser Permanente) – independent entities that work together
  • All use the same EMR
  • HMO research network
  • Beginning to introduce AF into some of their research projects – not an AF trial per se
  • What does AF add to CME?
Stakeholder point of view:
  • More attention needs to be paid to getting best evidence to networks
  • Targeting funding for something like this…
  • Chronic pain – looking for a pre/post project around practice guidelines
  • Inclusion of decision makers/policy makers in the work
Discussion/questions about the “laboratories” presented:
  • Anne Sales: If doing feedback directly - how is it delivered? How do you measure uptake?
Response: Robbie Foy – written feedback that may or may not be shared with staff/department. [Action plans – given suggestions and support materials, meetings]. David Price ---“Audit and dissemination” – they get a score card that is posted, sometimes do quality improvement projects. Steve – “passive dissemination” on a website, reviewing with providers and staff.
  • Jeremy (to Steve): When doing these “tweaks”, have you been reporting them?
Response from Steve – Could systematically evaluate them at a low cost.
  • Mary – why were people not complying with guidelines? Why give blood when there is no indication to do so?
No clear reason – social influences, risk aversion, “my patients are different”, believing that the guidelines aren’t based on gold-standard recommendations, etc.
Sylvia Hysong: responding to the question about how uptake was measured – could track and monitor whether people were logging into and checking the report – via the website. They could see that uptake dropped after the financial incentive was removed. Another study – survey of recipient’s reactions to feedback.
Noah: how clear is it what people are supposed to do with the feedback? Is there an opportunity to automate their responses to the feedback?
  • Not easy – reporting mechanism exists outside of electronic health record – any action taken would be within the electronic health record.
  • Action planning is often not individualized
  • Components might be automated…
Anne Sales: Suggestion to start with places that have less audit and feedback experience (patient settings/long-term care, etc.). Other settings should be considered…
Jeremy:
  • These presentations show that there are many different ways of doing this.
  • Embedding research into routine settings – research ethics, consent etc.

Merrick: framework of pragmatic randomized trial design – must be careful not to confuse the number of subjects with the rigor of the design.

Susan: patterns in past data would be useful, but it would be nice to have collaboration around the issues that we want to study – informal international collaborations are possible.



AFTER LUNCH
Mark Chignell’s presentation “Audit and Feedback Science: Lessons from Cathedral Building”
  • Science vs. Iterative design
    • Aerodynamics – Science
    • Cockpit Design – Iterative Design
    • E.g., early altimeters led to “controlled flight into terrain”
    • Human Factors: tools of the trade
      • Guidelines and Heuristics (E.g., use a knob that is this big and requires that much torque)
      • Requirements Analysis (Find out what people (and the situations) require)
      • Conceptual Design (Based on general principles, relevant science, requirements)
      • Iterative Design (Prototype, test (user testing), refine)
      • AF guidelines and requirements – good
      • Iterative design of AF tools – Good
      • High level theory of feedback intervention – Good
      • Detailed theory in specific, but important areas – Good
      • Theories of everything relating to AF – may be impossible
      • RCTs – not good for complex multivariate situations
      • Controlled experiments need to use fractional factorial designs that sacrifice higher-order interactions to allow more factors to be considered (see the sketch after this list)
      • Response Surface Analysis is a similar approach (regression) using continuous variables.
      • Yes, we can do science as well as iterative design. But there will probably never be a science of Smartphone design. Maybe we need a bit more of a design focus at this time.
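To illustrate the fractional-factorial point above: a hedged sketch that studies four hypothetical A&F design factors in 8 runs rather than the 2^4 = 16 a full factorial would need, at the cost of aliasing some interactions. The factor names are invented for illustration.

```python
# Half-fraction 2^(4-1) design: run a full 2^3 design on the first three factors
# and generate the fourth as D = A*B*C (defining relation I = ABCD). All main
# effects stay estimable; D is aliased with the ABC interaction.
from itertools import product

factors = ["goal_set", "action_plan", "verbal", "frequent"]  # coded -1 / +1

runs = [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]

print("\t".join(factors))
for run in runs:
    print("\t".join(f"{v:+d}" for v in run))
# 8 runs instead of 16: higher-order interactions are sacrificed so that more
# factors can be considered, as described in the bullet above.
```
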
Susan Michie on behalf of Linda Collins: “Unpacking the black box: Engineering more potent behavioural interventions to improve public health”
  • What is the black box? The Treatment Package Approach encourages people to put a lot of stuff into it in order to get an effect; it doesn’t take into account efficiency, cost-effectiveness, time-effectiveness, etc.
  • Can borrow ideas from engineers and build optimized behavioural interventions
  • Resource Management Principle – how engineers think: what do I need to find out; what resources do I have?
  • Objective is to identify one of the 2–3 best approaches and choose an efficient experimental design
  • Start with the resources you already have
  • Multiphase Optimization Strategy (MOST) – a framework and not a procedure
  • Optimized – “best possible solution given constraints”– this is the goal that you want to achieve.
  • Many components that we can study – the idea is to have a factorial design with combinations and come up with a fractional factorial design. Draw on a theory…
  • Is the intervention the best possible given the constraints vs. evaluation (statistically significant)?
  • Four cases, crossing “best possible given constraints?” with “statistically significant?” (see the decision-function sketch below):
    • Not best possible + not significant → optimize, using effect size as the criterion
    • Not best possible + significant → the intervention can probably be improved
    • Best possible + not significant → a different intervention strategy is needed
    • Best possible + significant → what to aim for
  • This approach puts more emphasis and strategic approach into the early stages of intervention design.
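The four cases above, restated as a tiny decision function; this is a paraphrase of the MOST framing as captured in these notes, not Linda Collins’s exact formulation:

```python
# Decision logic: cross "best possible given constraints?" with
# "statistically significant?" to decide what to do next.

def next_step(best_possible: bool, significant: bool) -> str:
    if not best_possible and not significant:
        return "optimize, using effect size as the criterion"
    if not best_possible and significant:
        return "intervention can probably be improved"
    if best_possible and not significant:
        return "a different intervention strategy is needed"
    return "what to aim for"

print(next_step(best_possible=True, significant=True))  # -> what to aim for
```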

Discussion:
  • How do you know if the intervention is the ‘best possible’? You start off with your constraints (time, resources, etc.) and make a judgement about what can be achieved within those constraints (objective function data).

  • Anne Sales: we don’t know the effect size of different variations of the component parts; how do you get those effect sizes?

  • Susan Michie: there are 3 approaches: 1) the kitchen-sink approach – don’t do that; 2) do many studies – takes a lot of time, and misses the point of complexity, the synergistic interactions between components; 3) this approach – accepting the idea of complexity and using empirical knowledge, theoretical principles, etc.

  • Lets you rule out certain pairs of components that are of no value, not feasible etc.

What sorts of extrinsic factors need to be considered in the design and evaluation of AF?
Robbie Foy & Anne Sales Group
In thinking about how to prioritise these, we might use a basic framework/set of criteria, i.e.,
  • Is there a good evidence base or distinct lack of an evidence base for considering or measuring this factor in a study?
  • Is there a good theoretical base or underpinning rationale for considering or measuring this factor in the study?
  • What practical factors or feasibility issues need to be considered?
  • To what degree can these factors be sufficiently well defined, understood and operationalized in practice, studies and reviews?
  • How costly is it to implement/measure; how parsimonious is the intervention with this included or factored in?
Other general questions to consider: (for “what should happen next” paper)
  • What external conditions are likely to optimise our ability to design and evaluate AF or enable us to make an optimal decision about whether or not to use AF?
  • Can we decide which sorts of external factors should ideally be reported to inform judgments about generalizability or feasibility of studies?
  • What do evidence users need to know?
By extrinsic factors, we refer to:
  • Context (probably includes several of the following…)
    • Characteristics of recipients
    • Characteristics of deliverers
    • Characteristics of targeted behaviour/topic characteristics
    • Characteristics of potentially available audit measures
    • Co-interventions (always being mindful of cost)
    • Organizational and/or setting characteristics
Mechanisms of Action – Jill Francis’s Group
Identified themes to explore:
  • How theory is ranked or grouped in a way that it can be used as a resource – how do you build those theories?
  • Level of interventions and what their targets are
  • A need to explain variability and impact of interventions
  • Are there generalizable outcomes of AF – not so convinced of this
  • * A need to create a directory of resources to inform design of interventions to structure problem definition and solution design
  • There may be a role for a design and evaluation manual – so how do you design them and how do you evaluate them?
  • Theory itself is a problem – There are career incentives for theory fiddling
  • Are audit and feedback two separate or linked processes?
Intrinsic Factors – Susan Michie’s Group
  1. What is an intrinsic factor?
    • Prototypical: components of AF
    • Discretionary: action plans, modes of delivery
  2. Assumptions:
    • Should only be used for an issue that is regarded as important
    • Interactions between co-interventions and how intrinsic factors work
    • Minimal standard for feedback (comparison data, feedback of recent performance, include written/visual feedback)
  3. Criteria for selecting components:
    • Empirical findings
    • Theory/principles from relevant literature/disciplines
    • Dominant beliefs/frequent current practice
    • Practitioner experience
    • Cost
    • Current priorities and policy
    • Current practice and possibilities (e.g., quality of data, measures)
    • Feasibility and screening work
    • Designs to:
  4. Testable hypotheses – areas that may generate successful hypotheses:
    1. Level of aggregation of feedback data should be aligned to the performance to be changed – individual vs. practice/team
    2. Complexity/nature of feedback (e.g., number of indicators reported on; number of behaviours; bundles; order/priority of indicators; tailoring)
    3. Format of feedback
    4. Mode of delivery of feedback – e.g., paper, electronic, social
    5. Frequency of feedback
    6. Time to reflect/act on feedback
    7. Engagement in design and delivery of feedback vs. top-down imposition
    8. Role for intermediate outcomes/process measures
    9. Providing information/commentary alongside the feedback to add to/interpret the feedback
    10. Including action plans – type, e.g., if–then
    11. Training of recipients to understand and act on feedback
  5. Constraints/considerations – criteria to use to select what to study:
    • Translate activity into evidence
    • Maximise the number of components to test within resources
    • Sample size – collaborations across groups/sharing data, etc.



Jeremy’s closing summary:
  • Current state of the science about how we use AF is problematic
  • We repeat studies that don’t let us advance our knowledge base
  • New from today - examples of how AF is used; a lot of variability
  • Need to keep the collaboration going
  • Starting points of a definition of AF; prototypical elements of AF; common co-interventions
  • When is AF useful? How do we optimise it under those circumstances?
  • Recognition that there are modifiers – need to understand them
  • Looking in terms of extrinsic/intrinsic was useful; how to prioritise – using current empirical knowledge, using theories from a broad range of disciplines, practical issues
  • Framework about how to prioritise
  • “Intrinsic group” – hypotheses
  • “Mechanisms group” – develop and test both program and mid-level theory
  • How do we advance the field? Collaborative thinking, embracing the different disciplines