If you only have a hammer, everything looks like a nail. All too often in AI projects, the rush to implement overtakes a fuller appreciation of the problem at hand, spawning underwhelming solutions. Here, with a worked example, we explain how to take a more structured approach.
It’s part of human nature to see everything through the prism of our own experience. Whether you’re a writer or a physicist, this trait has its uses – but it can also lead to incomplete, siloed ideas, rather than thought-through, workable solutions.
AI projects are particularly prone to tunnel vision; people focus on the first thing they see that falls within the familiar category of problems that can be addressed by algorithmic machinery. But is that just one part of the puzzle? And might it not even be the most important part?
Failure to grasp this is one of the reasons AI companies can get caught up in the proof-of-concept trap, pushing promising ideas through to expensive testing stages, when taking a quick step back to consider that idea in its context would have revealed its shortcomings.
A worked example: when alarms cry wolf
Imagine you are working on a project that aims to optimize alarm management in hospital ICUs.
The overwhelming majority of alarms – between 72% and 99%, according to Sue Sendelbach and Marjorie Funk [1] – are actually false positives, which means the only action required is that they be manually stopped by hospital staff.
As the multitude of sensors connected to patients in a modern-day ICU grows, this high incidence of false positives becomes an ever more pressing problem. Patient safety is undoubtedly enhanced, and the sensors enable better therapy – but the daily flood of alarms can desensitize hospital staff, eventually causing alarm fatigue (sensory overload caused by excessive exposure to alarms) [1].
How might AI mitigate the problem?
Well, surely that’s pretty clear? Deciding alarm vs no-alarm, based on sensor data, is a good old classification problem, which is what machine learning is all about (kind of). So let’s dust off the support-vector machines and the neural networks and get cracking, right?
Maybe not. You can see how looking at sensor data in a multivariate fashion, analysing the time series, or examining patients’ EHR data to start identifying patterns within the false alarms, feels like an open goal. Algorithm training could begin relatively swiftly.
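To make that naive framing concrete, here is a minimal sketch of the classification view – a nearest-centroid classifier standing in for the SVMs and neural networks mentioned above, trained on entirely synthetic sensor snapshots (the features, values and labels are invented for illustration):

```python
# The "obvious" framing: alarm vs. no-alarm as binary classification.
# All data here is synthetic; a nearest-centroid model stands in for
# heavier machinery like SVMs or neural networks.

def centroid(rows):
    """Mean of each feature column."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(true_alarms, false_alarms):
    """'Training' = computing one centroid per class."""
    return centroid(true_alarms), centroid(false_alarms)

def classify(model, sample):
    """Label a sample by its nearer class centroid."""
    c_true, c_false = model
    return "true" if dist2(sample, c_true) < dist2(sample, c_false) else "false"

# Synthetic sensor snapshots: [heart rate, SpO2]
true_alarms = [[140, 82], [135, 80], [150, 78]]   # genuine emergencies
false_alarms = [[90, 97], [85, 98], [95, 96]]     # artefact-triggered

model = train(true_alarms, false_alarms)
print(classify(model, [145, 81]))  # near the emergency cluster → "true"
print(classify(model, [88, 97]))   # near the benign cluster → "false"
```

On tidy synthetic data this looks like an open goal – which is exactly the trap the next section is about.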
But now is the time to take the blinkers off and get a broader view. Question your fundamental assumptions. Is it naive to think optimization of alarm management is only – or best – achieved by filtering out false-positive alarms? And where is the bar set for alarm filtering: would a 5 percentage point reduction in false positives warrant investment? Could it be that the true value of this project lies in discovering solutions beyond the most obvious?
To find those solutions, you’ll need to think like a designer rather than an implementer: go broad before you go deep and embrace discovery first. Embrace the ambiguity, embrace the complexity. Familiar frames of thinking may not always be your friend when you’re approaching a new problem.
If you stop at the first sight of something that looks like it can be addressed by an algorithm, you’ll have missed a chance to fully understand the situation. And once you start focusing on one angle of attack and throwing in algorithms, changing tack will be nigh-on impossible.
How to design for AI
What does it even mean to develop a concept in the realm of AI? AI-enabled solutions are hard to envision, and the familiar tactics for rapid prototyping to validate, invalidate and iterate a potential solution are not so effective in an area where the core is so abstract.
However, what can be taken from design is to take the research phase seriously and approach it in a structured way. Don’t even think about data and algorithms before you’ve invested a good chunk of time – at least two weeks of proper research – into getting to know, inside out, the broader environment your AI project will sit within. That means speaking to the people who experience it every day. If, as in our example, you’re working on a project that will sit within existing ICU operations, try to grab an hour or two of some ICU staff’s time; it will pay huge dividends down the line.
Research done, it’s time to structure the problem space; here’s how we do it …
Step 1: Getting everything into view
Your first task is to generate a mind map. Pop your topic – in this case ‘Alarm management in the ICU’ – at the centre and record whatever associations spring to mind around it. These can be pretty much anything, from adjectives to objects, people and processes. Be sure to focus on the current state of play and go for quantity over quality (as you can sift through the ideas later).
It can help to begin with a short brainstorm of all the associations that instantly pop into your mind. After that, when they have stopped effortlessly appearing, you can focus more precisely on the different processes related to the topic. In this example, starting with free associations might yield words such as loud, confusing or patient information. Thinking precisely about processes might generate terms such as alarm confirmation, prioritization or conclusion (for therapy).
Step 2: Put some structure on it
The next step is to review your mind map and start grouping associations into clusters. These will help you structure your thoughts and think more efficiently about the project’s topic and its underlying connections. Feel free to merge branches at this stage, or delete them if they start to seem irrelevant.
Now name your clusters. This will help you identify other fields that may be relevant to those clusters.
Step 3: From clusters to levers
Next up, you’ll want to identify some levers – elements that can be directly influenced by the potential AI project. To do this, look at each individual cluster in turn. Is it something that could directly be improved through your work? If not, is there a related aspect the project could influence? Again, free your mind from the tyranny of the familiar machine learning toolbox! More often than not, there are indirect routes for algorithms to have an impact.
In our example, the Shortage of staff cluster is not something you can directly influence using data science. Workload Management, however, is; algorithms could “intelligently” divide and assign alarms only to certain staff, for example, lightening others’ workloads.
Therefore, you would derive the Workload Management lever from the Shortage of staff cluster.
Another example could be the layout of the rooms in an ICU – relevant because inconvenient room layouts can lead to staff taking circuitous routes and alarms going off for longer, putting patients at greater risk.
Since you are not an architect, you cannot influence the layout of rooms – but what you can influence is which routes are taken by whom (by, for example, using algorithms to suggest an order in which alarms should be processed to optimize the routes taken by the staff).
Therefore, you derive the lever Workflow Optimization from the Room Layout cluster.
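As a sketch of what such a suggestion could look like, here is a greedy nearest-neighbour ordering of open alarms. The room coordinates and the heuristic itself are illustrative assumptions, not a production routing algorithm:

```python
# Hypothetical sketch of the Workflow Optimization lever: order open
# alarms so that staff walk a short route between them. Coordinates and
# room names are invented; greedy nearest-neighbour is a toy heuristic.

def dist(a, b):
    """Straight-line distance between two (x, y) positions."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def suggest_order(start, alarm_positions):
    """Greedy tour: always walk to the nearest unhandled alarm next."""
    order, here = [], start
    remaining = dict(alarm_positions)  # room -> (x, y)
    while remaining:
        room = min(remaining, key=lambda r: dist(here, remaining[r]))
        order.append(room)
        here = remaining.pop(room)
    return order

alarms = {"ICU-3": (10, 2), "ICU-7": (1, 1), "ICU-5": (9, 8)}
print(suggest_order((0, 0), alarms))  # → ['ICU-7', 'ICU-3', 'ICU-5']
```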
Step 4: Pulling the levers
Now you’ve identified all the levers you can influence, you should be able to devise a range of more complete solutions. And you can always add to your map later on, by repeating steps 1–3 or by adding relevant subjects as they crop up.
Just make sure everything you add to your map brings more clarity (not more complexity) and is relevant to your work. By the end of the exercise you’ll have defined a broad range of situations that are ripe for improvement. In the ICU example, we identified five key levers.
Alarm prioritization

Instead of simply seeking to reduce the frequency of alarms, we could use algorithms to prioritize them. Medical staff might then only need to interact with top-priority cases. This approach would still carry a risk – of a genuine emergency being downgraded. And yet the repercussions would be less severe than in the naive binary approach, where that alarm would never even have been triggered.
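A minimal sketch of what prioritization might look like, assuming an invented scoring rule that weights severity and how long an alarm has gone unacknowledged (the weights are purely illustrative):

```python
# Toy prioritization sketch: rank alarms instead of suppressing them.
# The scoring rule and its weights are invented for illustration only.

def priority(alarm):
    """Higher score = more urgent. Weights are purely illustrative."""
    severity = {"info": 1, "warning": 2, "critical": 3}[alarm["level"]]
    return severity * 10 + alarm["minutes_unacknowledged"]

def top_priority(alarms, k=2):
    """Return the k most urgent alarms for staff to act on first."""
    return sorted(alarms, key=priority, reverse=True)[:k]

alarms = [
    {"id": "A1", "level": "info", "minutes_unacknowledged": 4},
    {"id": "A2", "level": "critical", "minutes_unacknowledged": 1},
    {"id": "A3", "level": "warning", "minutes_unacknowledged": 9},
]
print([a["id"] for a in top_priority(alarms)])  # → ['A2', 'A3']
```

Note that even a mis-scored emergency stays in the queue here – it is downgraded, not silenced, which is the point made above.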
Informative alarm displays
If the aim of an alarm in the ICU is to inform medical staff an individual’s therapy is not going as planned, those alarms should tell you everything you need to know. Simple sirens tell you very little. Variations in sound (frequency and tone) and light (colour and intensity) could indicate different types or severities of alarm, helping staff interpret the situation and draw swifter conclusions.
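Such a mapping could be as simple as a severity-to-presentation table. The tones, intervals and colours below are hypothetical, not a clinical standard:

```python
# Hypothetical severity-aware alarm presentation. The specific tones,
# repeat intervals and light colours are invented for illustration.

DISPLAY = {
    "critical": {"tone_hz": 880, "repeat_s": 1, "light": "red, pulsing"},
    "warning":  {"tone_hz": 440, "repeat_s": 5, "light": "amber, steady"},
    "info":     {"tone_hz": 220, "repeat_s": 30, "light": "white, steady"},
}

def render(alarm):
    """Describe how an alarm would be presented to staff."""
    d = DISPLAY[alarm["level"]]
    return f'{alarm["source"]}: {d["light"]} light, {d["tone_hz"]} Hz every {d["repeat_s"]} s'

print(render({"source": "SpO2 sensor, bed 4", "level": "warning"}))
```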
Workload management

Staffing levels vary between hospitals, and even between shifts in the same hospital, which can make the even distribution of duties difficult. If staff could be assigned responsibility for these alarms according to their current workload, and provided with a route that helps them move from A to B (and C and D perhaps) to deal with them more efficiently, this capacity problem would be mitigated.
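A toy sketch of the assignment idea – each new alarm goes to the least-loaded staff member. The names and workload units are invented for illustration:

```python
# Illustrative workload balancing: route each incoming alarm to the
# least-loaded staff member. Names and load units are invented.

def assign(alarms, workload):
    """Greedy balancing: each alarm goes to whoever has the lightest load."""
    assignment = {}
    for alarm in alarms:
        nurse = min(workload, key=workload.get)
        assignment[alarm] = nurse
        workload[nurse] += 1  # one open alarm ≈ one unit of work
    return assignment

workload = {"nurse_a": 2, "nurse_b": 0, "nurse_c": 1}
print(assign(["A1", "A2", "A3"], workload))
```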
Remote alarm confirmation

And what if staff didn’t need to physically move to where that alarm is going off to confirm it? Given the vastness of modern hospitals, the remote confirmation of alarms promises sizeable reductions in time and effort – plus the technology is relatively simple and easy to integrate into medical staff’s working lives.
Richer alarm information

Finally, what if an alarm did more than alert medical staff to the situation? Typically, staff responding to an alarm see just a headline and a short description stating the circumstance that set it off. They then have to analyze what exactly caused the alarm, as the culprit can be a combination of different patient parameter values, fed via different machines, plus sensor or machine failure. Only after this often time-consuming and confusing task can they suggest a suitable change of response. Providing more information directly with every alarm – potentially also recommending a course of action based on that information – would speed up how alarms are acted upon.
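As a rough illustration, an enriched alarm might bundle the raw event with a likely cause and a suggested next step. The rules below are invented for the sketch; any real recommendation logic would need clinical validation:

```python
# Illustrative enrichment of a raw alarm with context and a suggested
# action. The rules here are invented examples, not clinical guidance.

def enrich(alarm, vitals):
    """Attach a likely cause and a suggested next step to an alarm."""
    detail = {"alarm": alarm, "vitals": vitals}
    if alarm == "low SpO2" and vitals.get("sensor_contact") == "poor":
        detail["likely_cause"] = "sensor artefact"
        detail["suggestion"] = "re-attach SpO2 probe, then re-check"
    elif alarm == "low SpO2":
        detail["likely_cause"] = "possible desaturation"
        detail["suggestion"] = "assess patient immediately"
    return detail

info = enrich("low SpO2", {"spo2": 85, "sensor_contact": "poor"})
print(info["likely_cause"])  # → sensor artefact
```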
Step 5: Comparing your options
These are all good options – the question is: which one should we tackle first? Again, think breadth. Before diving deep into any one option – or all of them (which would be a huge amount of work) – let’s assess each one along three dimensions:
- Barriers to acceptance: how easy will it be to fit the solution into the existing workflows – and minds – of hospital staff?
- Mode of intervention: will the solution provide staff with extra information, or recommend further action (or both)?
- Technical complexity: how difficult will it be to implement and integrate?
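One simple way to compare the options is to score them along the quantifiable dimensions (mode of intervention is categorical, so it is left out of this sketch). The scores below are invented for demonstration; in practice they would come out of the research phase, not thin air:

```python
# Illustrative comparison of the levers. Scores are invented for the
# sketch (lower is better); real scores come from the research phase.

options = {
    # (barriers to acceptance, technical complexity)
    "alarm visualization":   (1, 1),
    "remote confirmation":   (3, 2),
    "alarm prioritization":  (2, 4),
    "workflow optimization": (3, 4),
}

def rank(options):
    """Order options by total 'cost'; cheapest quick wins come first."""
    return sorted(options, key=lambda o: sum(options[o]))

print(rank(options))
# → ['alarm visualization', 'remote confirmation',
#    'alarm prioritization', 'workflow optimization']
```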
The obvious quick win is providing better alarm visualization. Low technical complexity and straightforward integration make it an obvious first step that would quickly improve the situation. Remote alarm confirmation would also be technically quite simple – although hospitals and their staff may be uneasy about introducing a system that does not require a patient visit for each alarm.
Next up in terms of feasibility are the interventions that recommend actions, such as assigning responsibility for alarms to staff according to their workload and optimizing the routes they take to deal with them most efficiently.
Ultimately, it all depends on the exact nature of the challenge you’re facing. If the root cause seems to be a very imprecise alarm system, then running a PoC to see whether you can get more relevant alarms, either by suppression or prioritization, makes absolute sense. Once that option is exhausted, however – or if it’s not the key issue – workload management assistance is the way to go.
What have we learned?
The structured design approach makes you more aware of surrounding processes and aspects, making it easier to envisage obstacles before they actually appear. It also frees you to explore how different solutions might be combined to create an even bigger, holistic benefit, without the need to invest much more time or work.
You’ll have more clarity (and therefore solutions) too, as well as the security of knowing your thoughts have been sorted and interrogated – and it will spare you the pursuit of dead ends.
Next time you’re itching to jump in and get developing, think twice: take a deep breath, and a step back, and absorb the whole picture.