Beyond Post-its: how to find worthwhile opportunities for AI

Sorting realistic AI possibilities from delusional fantasies is crucial – and yet, in a field where everyone thinks they’re an expert, it’s no easy task. Here’s how to ask the right questions and steer clear of pie-in-the-sky thinking

The seemingly limitless potential of AI has many organisations encouraging their people to let their imaginations run free. In open and unconstrained blue-sky sessions, ideas – any and all ideas – are scribbled down on Post-its and plastered on whiteboards the world over. 

It’s a potentially exhilarating exercise that can generate all manner of creative concepts. But in that exhilaration lies danger – because asking “What could we do with this exciting technology?” tends to prompt a torrent of ideas that are either prohibitively difficult, or just fancy, nice-to-have add-ons that don’t address any strategic issue. A swirl of activity with no eventual impact depletes the most precious resources of any organisation: attention, trust and enthusiasm. It may also strengthen the perception that the possibilities of AI are overhyped. 

Does this mean ideation sessions should be abandoned? Not at all. But forward thinking needs a framework. Otherwise it’s inevitable that at some point, when someone asks “Do we need this?” or “Is it worth it?”, the answer won’t be a resounding yes. 

The question we need to be asking – at the very outset – is: “What do we need?” 

Start with needs, not possibilities

Consider the example of a forestry investment fund that wants to become the largest owner of woodlands in Central Europe. Its plan is to develop and roll out an efficient, scalable approach to buying up sub-100 ha forests – of which there are about 50,000 – currently under the radar of larger funds.

First, frame your approach with a single question: “How can AI help us do what we want to do?” Let your constraints and unique assets shape the journey. Make the mistake of asking “What can AI do for us?” and you’ll be distracted – and possibly seduced – by non-essential possibilities.

Second, don’t use current data as your starting point. One stubborn fallacy in the context of AI is that “data is the new oil”. If that were true, it might make sense to use data as the launching point for any AI-related ideation. But data is not universally valuable and fungible like Brent crude, nor is the data you currently have necessarily an asset at all – and past data may be even less strategically relevant.

For the investment fund, several worthwhile directions can be derived directly from its current strategic intent:

  • It could automatically identify acquisition targets from satellite imagery
  • It could intelligently prioritise, by analysing unstructured information in land registries
  • It might even predict future areas for reforestation based on climate change

Would the fund have identified any of these possibilities had it started from its current data? Probably not! 

Think like a designer, not a software engineer (at first)

Despite what many vendors of database systems or other business software would have you believe, AI is much more than a new form of IT. Yes, it’s ultimately embedded in software – but the key difference between the average IT system and AI-enabled applications lies in the complexity of cognitive tasks being performed, and this has implications for how AI projects are best approached.

If you boil it down, most software systems are largely concerned with data logistics: storing, transporting, filtering, displaying, aggregating and disaggregating pieces of information. The classic input/output (I/O) schema is emblematic of this. As in a logistics centre, it’s all about input and output – with not much value creation happening in the middle. AI systems, on the other hand, perform much more complex cognitive tasks between the I and the O. This makes them qualitatively different to IT.
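
To make the contrast concrete, here is a minimal sketch in Python – every name in it is hypothetical. The first function is pure data logistics: information goes in and comes out, merely filtered. The second stands in for an AI step, where a trained model forms a judgement between the I and the O.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Parcel:
    parcel_id: str
    area_ha: float
    region: str

# Data logistics: store, transport, filter. No value is created in the middle.
def parcels_in_region(parcels: list[Parcel], region: str) -> list[Parcel]:
    return [p for p in parcels if p.region == region]

# A cognitive task: a trained model (here just a stand-in callable) turns
# raw evidence, say features derived from satellite imagery, into a judgement.
def attractiveness(parcel: Parcel, model: Callable[[Parcel], float]) -> float:
    return model(parcel)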

The cognitive tasks automated or augmented by AI systems are much closer to the unique way an organisation creates value. Take the forestry fund case: the way an AI system would prioritise new investment cases (forests) critically determines the shape of the fund. Determining the right parameters, and weighting them, is thus not a job for IT – it falls within the remit of the Head of Portfolio, if not the CEO.
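
A hedged sketch of what that means in practice – the criteria and weights below are invented for illustration, not taken from any real fund. The point is that shifting weight between criteria produces a different shortlist and, over time, a differently shaped portfolio; that choice is strategy, not IT.

# Hypothetical criteria, each normalised to [0, 1]. The weights encode strategy.
STRATEGY_WEIGHTS = {
    "timber_value": 0.5,      # expected yield of the stand
    "consolidation": 0.3,     # proximity to forests the fund already owns
    "acquisition_ease": 0.2,  # ownership fragmentation, registry clarity
}

def priority_score(features: dict[str, float]) -> float:
    # Combine the normalised criteria into a single ranking score.
    return sum(STRATEGY_WEIGHTS[k] * features[k] for k in STRATEGY_WEIGHTS)

# Moving weight from timber_value to consolidation re-ranks every target --
# which is exactly why these numbers belong to the Head of Portfolio.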

That’s why anyone wanting to tap into AI’s sweeping potential needs to think broadly. Consider scope and context, like a designer would. Using a canvas like this one during ideation will encourage expansive thinking, as will involving a broad spectrum of people, including executive decision makers. The more viewpoints the better: your ideas will have more variety – and be interrogated from more angles.

Technical details are still important in AI projects, of course – not just in the traditional sense of information technology but in new ways that extend beyond conventional IT’s scope. AI often carries an inherent technical feasibility risk rooted in mathematics and algorithms; these additional devils in the detail can only be tamed by a specific skill set – call it data science, AI engineering or, more traditionally, the work of quants.

Play to human weaknesses – and AI’s strengths

Believing we know what ‘intelligence’ is, we tend to underestimate the complexity of artificial intelligence projects

Can’t we all imagine new applications for AI, given that we know how “intelligence” works? Aren’t we all experts in this field? It’s tempting and somehow natural to think of artificial intelligence as being closely akin to our own. 

In fact, the complexity at play in AI is better thought of as being on a par with blockchain technology – something few of us would claim to understand, much less think of new applications for. 

Bearing this in mind, a pragmatic and surprisingly effective way of steering your ideation towards the more valuable suggestions is to ask two simple questions: what are humans bad at, and what is AI good at?

  • Human weaknesses: we’re slow, unreliable, dislike repetitive work, and can only combine a limited amount of information. As a result, we can’t be “scaled up” – or, if we are, we make tasks prohibitively expensive.

  • AI’s strengths: these are not just an inverted image of what humans are bad at. AI excels in standardised, highly repetitive, large-scale tasks where the relevant information is well captured in data. And because it’s based on statistics, AI is ideally suited to situations where “being right most of the time” is what you’re looking for.

To try using AI to automate the later stages of the forestry fund’s deal process, for example, would be a bad idea. Every forest is unique, and so is every owner. A large number of factors need to be taken into account – many of them emotional – and by that stage, too much is at stake, and too much time has been invested, for “mostly correct” to be good enough.

When it comes to initially identifying and ranking those investment possibilities, however, total accuracy is not essential: a few missed trees won’t be the end of the world. And let’s not forget that the non-AI approach – to give a human the job of manually and rigorously assessing a dataset of that size – wouldn’t be economically viable (always assuming anyone applied for the job). 

But an AI capable of pinpointing the top 1,000 targets from 50,000, with 99% accuracy, will add huge value.
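
Some back-of-the-envelope arithmetic shows why. The parcel counts below are the article’s own; reading “99% accuracy” as the precision of the shortlist, and assuming (purely for illustration) two hours of manual review per parcel:

total_parcels = 50_000
shortlist = 1_000
precision = 0.99       # assumed: share of shortlisted parcels that are genuine targets
hours_per_review = 2   # assumed: manual effort to assess one parcel

genuine_targets = int(shortlist * precision)                  # ~990 real candidates
hours_saved = (total_parcels - shortlist) * hours_per_review  # 98,000 screening hours avoided

print(f"Genuine targets in the shortlist: {genuine_targets}")
print(f"Manual review hours avoided: {hours_saved:,}")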

Further reading 

1. The Economist’s Technology Editor, Tim Cross, covers how people have perhaps been a little naive about AI’s limitations, and questions whether the hype many subscribed to has left some disappointed with real-world results.

2. Looking beyond the lockdowns, face masks and hand sanitiser, the pandemic made many people turn to technology as a defence against Covid-19. Here, the Financial Times writes about how AI disappointed those hopes. Could better ideation have helped make progress? You decide.

3. This detailed article by Google focuses on how to design for AI – how to bridge the gap between human expectations and AI’s abilities. The chapter on ‘Mental Models’ gives advice on how to reshape people’s expectations of AI to be more realistic.

4. The Apollo 13 Mission Control team faced a huge number of seemingly insurmountable obstacles after an oxygen tank exploded on board the 1970 mission to the moon. How did they solve it? Ideation. Design Better’s article about brainstorming illustrates why ideating openly with others is crucial for fast and productive problem solving.

5. How can our biases negatively influence brainstorms? This article suggests ideation sessions should have rules, but that the rules should change dramatically once AI gets involved. It covers humanity’s complicated relationship with AI and touches on many of the themes in our piece.

Contact

Paul von Bünau

Managing Director

Mobile
+49 (0) 173 24 16 000

E-Mail
paul.buenau@idalab.de

Address
Potsdamer Straße 68
10785 Berlin