Many practices think of risk management as belonging firmly in their health and safety policy or disaster recovery plan. But despite an awareness of risk registers and risk assessment, many practices miss the opportunity to apply these principles to other high risk systems in the practice.
In a GP practice, some systems carry a high risk of harm to many patients. Medical defence organisations often highlight results handling, repeat prescribing and passing on messages as typical examples. When such a system fails, the impact is harm to one or more patients.
What causes systems to fail?
Professor James Reason, author of several books on managing risk, calls this an ‘organisational accident’: a hazard arises that threatens a system, the system’s defences fail to contain it, and "losses" (accidental injury, loss of money and so on) occur.
Professor Reason illustrates this with a ‘Swiss cheese’ diagram. Each slice of cheese represents a defensive step in a system, protecting the system against failure.
Checks and safeguards are in place which should allow corrective action to be taken if something has gone wrong at an earlier step in the system. In general practice these defences might include:
- Pop-up warnings
- A final checklist to use at the end of a series of actions
- Checking that the number of items processed matches the number of items originally put forward for processing (eg scanning)
- An on-screen warning that a step in a procedure has been missed (eg information has been missed out)
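The reconciliation check in the list above (matching items processed against items put forward) can be sketched in code. This is a minimal illustration of the idea, not practice software; the function name and item identifiers are hypothetical.

```python
def reconcile(submitted_ids, processed_ids):
    """Check that every item put forward for processing (eg documents
    sent for scanning) comes back processed; report any that are missing."""
    missing = set(submitted_ids) - set(processed_ids)
    if missing:
        # A defence has been breached: flag the gap for corrective action.
        return f"WARNING: {len(missing)} item(s) not processed: {sorted(missing)}"
    return "OK: all items accounted for"

# Three documents sent for scanning, only two confirmed back:
print(reconcile(["doc1", "doc2", "doc3"], ["doc1", "doc3"]))
```

The point of the defence is that the mismatch is surfaced automatically, rather than relying on someone noticing that a document never reappeared.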
Ideally, the hazard is stopped by one of these defences. But each slice has holes: a system with many holes, or one where the holes line up so that no defence works, carries a higher risk of allowing an accident. The system is weakened and a patient is harmed.
Holes in the system
Holes in the system are caused by the people, usually under pressure, who use the system in a way that challenges the integrity of its defences. For example:
- You miss a check because you are running late.
- A staff member overrides a warning because it was OK last time they did that.
- A new receptionist was not properly trained because you were short staffed when she arrived.
- You designed a system with an inherent flaw which has not arisen until today’s combination of circumstances.
- The system you designed was so watertight that it’s time-consuming and clumsy and people cannot follow it.
- The computer software and/or hardware are inadequate or unreliable.
This does not mean that it is the individual’s fault that something went wrong. It is a system fault.
False sense of security
It is easy for a practice to be lulled into a false sense of security about the safety of its systems. No accidents or near misses may have happened for a very long time (at least, that you are aware of). However, this may just be down to luck.
Over time, a lack of accidents reassures system users that the system must be working correctly. This is just the point at which an accident is most likely to happen.
Safeguards and checks in the system will have become weakened over time. Warnings may be habitually ignored and steps or checks habitually omitted. Training may have become lax and diluted.
What’s the answer?
Practices need to design and monitor systems so that they can both anticipate where things may go wrong and put in measures to reduce the risk of that happening. Build into your system-design process the following steps:
- Identify the hazard: what could go wrong at this stage in the system?
- Assess the risk that the hazard will occur: how likely is this to happen? Rate it as a low, medium or high risk of occurrence.
- Quantify the risk of harm to patients: if this happened, whom could it affect, and how serious would that be?
- Minimise the risk of the hazard occurring: what should we do to stop that happening? Introduce defences in the system to make sure the chances of this happening are as low as possible.
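The steps above (rate likelihood, rate severity, combine them into a priority) can be sketched as a simple scoring routine. Everything here is a hypothetical illustration: the hazard descriptions, the 1-3 scale and the score thresholds are assumptions, not a standard scheme.

```python
# Map the low/medium/high ratings used in the text onto numbers (assumed scale).
LEVELS = {"low": 1, "medium": 2, "high": 3}

def assess(hazard, likelihood, severity):
    """Combine likelihood of occurrence and severity of harm
    into a simple priority score for the risk register."""
    score = LEVELS[likelihood] * LEVELS[severity]
    priority = "high" if score >= 6 else "medium" if score >= 3 else "low"
    return {"hazard": hazard, "score": score, "priority": priority}

# A two-entry register, sorted so the biggest risks surface first:
register = [
    assess("Abnormal result filed without GP review", "medium", "high"),
    assess("Message not passed on to duty doctor", "high", "medium"),
]
for entry in sorted(register, key=lambda e: e["score"], reverse=True):
    print(entry["hazard"], "->", entry["priority"], "priority")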
Things to remember
- Involve as many people as possible in the design process
- Encourage them to challenge each step for risk
- Defences need to be proportionate to the risk
- The cost of the defences (time and effort) should be balanced against the cost of something going wrong
- Train everyone in the new system
- Ask for feedback about how the system is working
- Review and amend high risk systems regularly
Fiona Dalziel is a practice management consultant. www.dlpracticemanagement.co.uk
James Reason, Managing the Risks of Organizational Accidents, Ashgate Publishing Ltd, 1997