Beyond post-its: how to identify worthwhile applications for AI

Sorting realistic applications for artificial intelligence from delusional fantasies is crucial – and yet, in a field where everyone thinks they’re an expert, it’s no easy task. Here’s how to ask the right questions, and steer clear of pie in the sky

The seemingly limitless potential of AI has many organisations encouraging their people to let their imaginations run free. In open and unconstrained blue-sky sessions, ideas – however tangential or fanciful – are scribbled down on Post-its and plastered on whiteboards the world over.

It’s a potentially exhilarating exercise that can generate all manner of creative concepts. But in that exhilaration lies danger – because asking “What could we do with this exciting technology?” tends to prompt a torrent of ideas that are either prohibitively difficult, or just fancy, nice-to-have add-ons that don’t address any strategic issue. A swirl of activity with no eventual impact depletes the most precious resources of any organisation: attention, trust and enthusiasm. It may also strengthen the perception that the possibilities of AI are overhyped.

Does this mean ideation sessions should be abandoned? Not at all. But forward thinking needs a framework. Otherwise it’s inevitable that at some point, when someone asks “Do we need this?” or “Is it worth it?”, the answer won’t be a resounding yes. 

The question we need to be asking – at the very outset – is: “What do we need?”

True innovation starts with needs, not possibilities

Consider the example of a forestry investment fund that wants to become the largest owner of woodlands in Central Europe. Its plan is to develop and roll out an efficient, scalable approach to buying up sub-100 ha forests – of which there are about 50,000 – currently under the radar of larger funds.

The key here is to frame your approach with a single question: “How can AI help us do what we want to do?” In other words: start with the overall strategy, then narrow the field by looking at constraints and unique assets. Make the mistake of asking the wide-open question “What can AI do for us?” and you’ll be distracted – and possibly seduced – by non-essential possibilities.

Second, don’t use current data as your starting point. One stubbornly misleading metaphor in the context of AI is that “data is the new oil”. If that were true, it might make sense to use data as the launching point for any AI-related ideation. But data is not universally valuable and fungible like Brent crude; the data you currently have is not necessarily an asset at all; and past data may be even less strategically relevant.

For the forestry investment fund, several worthwhile directions can be derived directly from its current strategic intent:

  • It could automatically identify acquisition targets from satellite imagery (sketched in code below)

  • It could intelligently prioritise, by analysing unstructured information in land registries

  • It might even predict future areas for reforestation based on climate change patterns

Would the fund have identified any of these possibilities had it started from its current data? Probably not! 
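
To make the first of these directions concrete, here is a minimal sketch of what “identify acquisition targets” might look like once satellite imagery has been segmented into candidate parcels. Everything in it – the Parcel fields, the thresholds, the toy data – is a hypothetical placeholder, not the fund’s actual pipeline; in practice, an attribute like woodland cover would come from a trained segmentation model.

```python
from dataclasses import dataclass

# Hypothetical output of an upstream imagery-segmentation step: each
# candidate parcel, with attributes estimated from satellite data.
@dataclass
class Parcel:
    parcel_id: str
    area_ha: float          # estimated area in hectares
    woodland_share: float   # estimated fraction covered by forest (0-1)

def acquisition_targets(parcels, max_area_ha=100.0, min_woodland=0.8):
    """Flag sub-100 ha parcels that are predominantly woodland.

    The thresholds are illustrative strategy parameters, not technical
    constants: they encode what the fund is looking for.
    """
    return [
        p for p in parcels
        if p.area_ha < max_area_ha and p.woodland_share >= min_woodland
    ]

# Toy data standing in for tens of thousands of segmented parcels.
candidates = [
    Parcel("DE-001", area_ha=42.0, woodland_share=0.95),
    Parcel("DE-002", area_ha=310.0, woodland_share=0.90),  # too large
    Parcel("DE-003", area_ha=65.0, woodland_share=0.40),   # mostly open land
]

for p in acquisition_targets(candidates):
    print(p.parcel_id, p.area_ha)  # -> DE-001 42.0
```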

To innovate with AI, think like a designer

Despite what many vendors of database systems and other business software – typically looking to extend their existing product lines – would have you believe, AI is much more than a new form of IT. Yes, it’s ultimately embedded in software – but the key difference between the average IT system and AI-enabled applications lies in the complexity of the cognitive tasks being performed, and this has implications for how AI projects are best approached.

If you boil it down, most software systems are largely concerned with data logistics: storing, transporting, filtering, displaying, aggregating and disaggregating pieces of information. The classic input/output (I/O) schema is emblematic of this. As in a logistics centre, it’s all about input and output – with not much value creation happening in the middle. AI systems, on the other hand, perform much more complex cognitive tasks between the I and the O. This makes them qualitatively different to the vast majority of IT systems.

The cognitive tasks automated or augmented by AI systems are much closer to the unique way an organisation creates value. Take the forestry fund case: the way an AI system would prioritise new investment cases (forests) critically determines the shape of the fund. Determining the right parameters, and weighting them, is thus not a job for IT – it falls firmly within the remit of the Head of Portfolio, if not that of the CEO. That is why AI projects are strategic innovation projects first, and software projects second.
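
To see why, consider a minimal sketch of such a prioritisation – a simple weighted score over parcel features. The factors and weights below are invented purely for illustration; the point is that choosing them is a strategic act, even though the code itself is trivial.

```python
# A hypothetical prioritisation score for candidate forests. Which
# factors appear here, and how heavily each one weighs, determines
# the shape of the portfolio - a call for the Head of Portfolio or
# the CEO, not for IT.
STRATEGY_WEIGHTS = {
    "timber_value": 0.5,    # estimated standing timber value
    "accessibility": 0.2,   # road access, terrain
    "consolidation": 0.3,   # proximity to forests the fund already owns
}

def priority_score(features: dict) -> float:
    """Weighted sum of normalised (0-1) parcel features."""
    return sum(
        weight * features.get(factor, 0.0)
        for factor, weight in STRATEGY_WEIGHTS.items()
    )

score = priority_score(
    {"timber_value": 0.9, "accessibility": 0.3, "consolidation": 0.8}
)
print(round(score, 2))  # -> 0.75
```

Swap the weights and you get a different fund – which is precisely why this is not a decision to delegate to the software team.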

Anyone wanting to tap into AI’s huge and sweeping potential needs to think broadly. Consider scope and context, as a designer would. Using a structured canvas during ideation will encourage expansive thinking, as will involving a broad spectrum of people, including executive decision makers. The more viewpoints the better; your ideas will have more variety – and be interrogated from more angles. 

Technical details are still important in AI projects, of course – not just in the traditional sense of information technology, but in new ways that extend beyond the scope of conventional IT. AI carries an inherent technical feasibility risk rooted in mathematics and algorithms; these additional devils in the detail can only be tamed by a specific skill set – call it data science, AI engineering or, more traditionally, the work of quants.

Play to human weaknesses and AI’s strengths

It’s tempting and somehow natural to think we can all come up with practical and worthy applications of AI. We all have first-hand knowledge of how “intelligence” works, after all. Surely we can bring our expertise to bear on artificial intelligence?

But to conflate human and artificial intelligence is to misunderstand the fundamental opportunity, which lies in the differences between the two. In fact, a pragmatic and surprisingly effective way of steering your ideation towards the more valuable suggestions is to ask two simple questions: what are humans bad at? And what is AI good at?

Human weaknesses. We’re slow, unreliable, dislike repetitive work, and can only combine a limited amount of information at once. As a result, we can’t be “scaled up” – or, if we are, we make tasks prohibitively expensive.

AI’s strengths. This is not just an inverted image of what humans are bad at. AI excels in standardised, highly repetitive, large-scale tasks where the relevant information is well-captured in data. Because it’s based on statistics, AI is ideally suited to situations where “being right most of the time” is what you are looking for. 

Using AI to automate the later stages of the forestry fund’s deal process, for example, would be a bad idea. Every forest is unique, and so is every owner. A large number of factors need to be taken into account – many of them emotional – and by that stage, too much is at stake, and too much time has been invested, for “mostly correct” to be good enough.

When it comes to initially identifying and ranking those investment possibilities, however, total accuracy is not essential: a few missed trees won’t be the end of the world. And let’s not forget that the non-AI approach – to give a human the job of manually and rigorously assessing a dataset of that size – wouldn’t be economically viable (always assuming anyone applied for the job). 

But an AI capable of pinpointing the top 1,000 targets from 50,000, with 99% accuracy, will add huge value. 
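
Some back-of-envelope arithmetic shows why (reading “99% accuracy” loosely as the precision of the shortlist; the per-parcel review effort below is an assumption, purely for illustration):

```python
# Illustrative numbers only: the 50,000 parcels and the 99% figure come
# from the text above; the per-parcel review effort is an assumption.
total_parcels = 50_000
shortlist = 1_000
precision = 0.99                 # share of shortlisted parcels that are genuine

genuine_targets = int(shortlist * precision)   # -> 990 real opportunities
minutes_per_review = 30                        # assumed analyst effort per parcel

hours_full_manual = total_parcels * minutes_per_review / 60   # -> 25000.0
hours_shortlist = shortlist * minutes_per_review / 60         # -> 500.0

print(genuine_targets, hours_full_manual, hours_shortlist)
```

Even with generous error margins, a roughly fifty-fold reduction in review effort – while still surfacing some 990 genuine opportunities – is where the value lies.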

If you are looking for a systematic, de-risked approach to identifying where artificial intelligence and machine learning can have a strategic impact on your organisation, get in touch. We are happy to walk you through our process and share lessons learned.

Further reading 

The Economist’s Technology Editor, Tim Cross, covers how people have perhaps been a little naive in their assessment of AI’s abilities. The author questions whether the hype many subscribed to has left some disappointed with real-world results.

This detailed article by Google focuses on how to design for AI, suggesting methods of bridging the gap between human expectations and AI’s abilities. The chapter on “Mental Models” gives advice on how we can re-shape our expectations of AI to be more realistic.

The Apollo 13 Mission Control team faced a huge number of seemingly insurmountable obstacles after an oxygen tank exploded on board the 1970 mission to the moon. How did they solve it? Ideation. Design Better’s article about brainstorming illustrates why ideating openly with others is crucial for fast and productive problem solving.
