How misleading metaphors for ‘artificial intelligence’ lead us astray – and what to do about it

Metaphors help us succinctly convey a lot of information, explaining new stuff in terms of the familiar. Taken too literally, however, even the good ones become dangerous. Are terms such as ‘machine learning’ and ‘neural networks’ doing more harm than good?

Airbnb for cars. Time is money. Shattered dreams and broken hearts. We use metaphors as shortcuts to quickly convey the essence of new concepts, to transfer intuition from the familiar to the new. They can be powerful and tenacious; long after the new has become familiar, that initial image remains – a subconscious mental frame, shaping our thinking.

Good metaphors facilitate swift and sure-footed decisions, circumventing the need to learn everything from the bottom up. Car-sharing app Turo helps people rent their cars out when they’re not using them, just as Airbnb does with homes. For those of us paid hourly, time truly is money. And while people who are broken-hearted don’t need urgent surgery for a blocked aorta, the pain they feel is no less intense. These are all metaphors that, by and large, work.

Ill-fitting metaphors, on the other hand, can lead us astray. But where do AI’s most commonly used metaphors stand on a scale from dangerously deceitful to essentially accurate?

First things first: even the term “artificial intelligence” – bandied around with such regularity that few would pause to think about its origins – is a metaphor. Its acceptance into common parlance has fuelled both heightened expectations of AI’s potential and paranoia surrounding its “intentions”. Human intelligence may have brought us Newton and Einstein, but it also brought us Pol Pot and Hitler. Hollywood wouldn’t want it any other way.

Delve deeper into AI – a world built on mathematics, statistics and algorithms – and metaphorical explanation just makes things easier. After all, how many people intuitively “get” probability distributions and matrix multiplication? Metaphors can help position abstract and elusive concepts in familiar realms.

But do they stand up to interrogation?

The ‘data is the new oil’ metaphor

Before artificial intelligence and machine learning elbowed their way into the mainstream, we had “big data” and its mantra, “Data is the new oil”, popularised in a 2017 piece in The Economist.

Suddenly, data was – quite correctly – perceived not as a mere byproduct of IT systems, but something that had value of its own, a key raw ingredient. Data in a CRM system, for example, wasn’t just there to make it function: it could be used to create something new, such as an algorithm to target a specific customer segment.

Questions of data were propelled upward from the mundane realm of the IT department to the strategic level. Just like oil – the humble hydrocarbon with a permanent place on the geopolitical agenda – data took a seat at the top table. Who would have predicted, 10 years ago, that boards would be employing Chief Data Officers? So far, so oily.

But then the comparison starts to wear thin. Unlike oil, data is not a homogeneous fungible commodity. Taken out of context, there is no universal value-generating mechanism – no combustion engine – for data. Without an application that tessellates precisely with this specific data, it is worthless.

The analogy also prompts needless, directionless accumulation. If data really were the new oil, amassing as much of the stuff as possible would be a sound strategy. But investing in generic, application-agnostic IT data infrastructure – a mistake many companies make – won’t yield you valuable data. There may be general patterns for data architectures, but there’s no such thing as a general “refinery pipeline”. Every application needs to tailor its own.

You can also fall into the trap of thinking the data you currently hold has intrinsic value – is a key strategic asset to be exploited. This is not the best starting point for developing a cohesive data and AI strategy. You can’t build a market for “diesel data” just because you have a surplus of the stuff. Instead, derive data and AI applications rigorously from customer value. The existing data landscape may help generate ideas but should neither limit nor direct our thinking.

The ‘neural networks’ metaphor

Like “artificial intelligence”, “neural networks” – as a metaphor for a family of algorithms – evokes a wide range of powerful associations. Thinking, consciousness, autonomous decision making, agency, intent, emotions … this phrase suggests all these very human traits.

For computer scientists, it’s an analogy that helps convey the topology of information processing through several interconnected layers; it also captures the flexible modular nature of this computing model, where complexity can be arbitrarily increased by adding further layers.

Moreover, from a systems engineering perspective, the term “neural” makes an important distinction clear: unlike traditional software, NN-based systems are not explicitly programmed but implicitly “trained” – by feeding in data which exemplifies the desired behaviour.
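
To make that distinction concrete, here is a minimal sketch – a toy illustration, not anything from a real project – using scikit-learn’s MLPClassifier on a synthetic dataset. The behaviour of the network comes entirely from the example data passed to fit(), and its “depth” is just a configuration parameter.

```python
# Toy illustration: behaviour is trained in from examples, not programmed.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# Synthetic data that exemplifies the desired behaviour: points labelled 0 or 1.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# Two hidden layers of 16 units each; more capacity simply means more layers
# or units – no explicit "logic" is written anywhere.
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)  # "training": the desired behaviour is inferred from the examples

print(model.predict(X[:5]))              # learned behaviour in action
print([w.shape for w in model.coefs_])   # what was "learned" is just weight matrices
```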

Problems arise largely because people forget that this is a metaphor – and so, by its nature, an imperfect and imprecise association. Artificial neural network algorithms are, in fact, a highly simplified approximation of certain aspects of how information is processed in the human brain; they don’t even come close to being a cohesive model of cognitive processes.

And so, despite excelling in certain narrow tasks (such as detecting objects in a picture), the actual capabilities of artificial neural networks lag far behind human cognition. A more useful way to think of neural networks is as powerful machines trained to recognise pre-defined patterns. This is true even for impressive systems such as ChatGPT. In essence, ChatGPT is an incredibly powerful “next word predictor”. Trained on massive amounts of data, it analyses the patterns of existing text to predict the likelihood of the next word. And it does so with an uncanny level of accuracy, on many subjects and in many voices. But this does not make it an earth-shattering breakthrough in cognitive science. In fact, it’s a bit embarrassing that so much of what we do as humans can be reduced to “next word prediction”.
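
To see how modest the underlying task is, here is a deliberately crude caricature of next-word prediction – a toy counting model, nothing like the transformer architecture behind ChatGPT, but the same prediction task at miniature scale.

```python
# Caricature of "next word prediction": count which word follows which,
# then always propose the most frequent successor.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat lay on the rug".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1        # tally observed next words

def predict_next(word: str) -> str:
    """Return the most frequently observed next word after `word`."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))   # -> 'cat' (seen twice, more often than 'mat' or 'rug')
print(predict_next("cat"))   # -> 'sat' (ties are broken by first occurrence)
```

Large language models replace the counting with billions of learned parameters and far longer contexts, but the output is still a probability distribution over the next word.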

Most of the big fears and horror scenarios, such as the singularity, can be pinned on our anthropomorphising of neural networks. But in fact, there is no plausible path by which current neural networks will come anywhere close to human-level capabilities. This is not to say that large-scale super-human pattern recognition machines do not pose real dangers and ethical issues. However, these threats are more subtle and nuanced than the fictions played out on our screens.

Similarly, expectations for neural networks’ potential become muddled when we assess situations in terms of human thinking. We typically get two outcomes. First, neural networks are expected to solve tasks that require more than pattern recognition, such as reasoning or knowledge representation. And second, neural networks are not employed where they could excel: spotting complex patterns at super-human scale and speed, and at zero marginal cost.

So, while it would be too much to expect a neural network to read a single scientific paper and capture all of its knowledge accurately, it could scan a million publications superficially for certain semantic patterns in less than 30 minutes – something no human could ever do. In failing to think about the relative strengths and weaknesses of humans and machines in this way, we let many opportunities such as this pass us by.
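
As a flavour of what such superficial scanning looks like, here is a toy sketch: a regular expression applied to a handful of made-up abstracts (the drug name and adverse-event terms are invented for illustration). A real pipeline would use proper NLP models rather than a regex, but the principle – shallow pattern matching applied at a scale no human could match – is the same.

```python
# Toy "superficial scan": flag abstracts mentioning drug X alongside a toxicity term.
import re

abstracts = [
    "We observed hepatotoxicity in patients treated with drug X.",
    "Drug Y showed no significant side effects in this cohort.",
    "A randomised trial of drug X for hypertension.",
]

pattern = re.compile(
    r"drug X.*(hepatotoxicity|toxicity)|(hepatotoxicity|toxicity).*drug X",
    re.IGNORECASE,
)

hits = [a for a in abstracts if pattern.search(a)]
print(hits)  # only the first abstract is flagged
```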

The ‘machine learning’ metaphor

Often thought of as AI’s more practical, down-to-earth sister, “machine learning” is another of those metaphors treated as quasi-literal.

What the term “learning” captures well, similar to the “training” analogy for neural networks, is the key distinction from traditional software development: the desired behaviour of an ML system is not explicitly and transparently programmed, but adjusted to mimic a set of training data.

And yet while the inputs may be similar to human learning, the result is quite different. Whereas human learning is “the process of acquiring new understanding, knowledge, behaviours, skills, values, attitudes and preferences”, according to Wikipedia, machines merely “learn” to recognise patterns in data, where the data needs to arrive in a specific standardised format. Yes, machines can “learn” to play Go, recognise our faces or scan texts. But any “understanding” they may have cannot accommodate the unexpected. Enlarge the Go board, tilt the camera, or use rare or nonsensical words and the algorithms soon stumble.
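
The “specific standardised format” point is easy to demonstrate. In the minimal sketch below (using scikit-learn purely as an example), a model fitted on exactly four numeric features per record will simply reject anything that deviates from that shape, rather than “understand” it.

```python
# A model fitted on a fixed input format rejects "the unexpected" outright.
import numpy as np
from sklearn.linear_model import LogisticRegression

np.random.seed(0)
X = np.random.rand(100, 4)                  # 100 records, 4 features each
y = (X[:, 0] + X[:, 1] > 1).astype(int)     # toy labels

model = LogisticRegression().fit(X, y)

print(model.predict(np.random.rand(1, 4)))  # fine: same format as the training data

try:
    model.predict(np.random.rand(1, 5))     # one extra column = "the unexpected"
except ValueError as err:
    print(err)                              # refused: the feature count no longer matches
```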

And is machine learning a continuous, or perhaps even self-directed process? Sadly – or perhaps fortunately – not. When humans learn, we build on prior experience and use our knowledge for subsequent tasks; in machine learning, training is a discrete and rather artisanal human-controlled process, probably better captured by the term “calibration”.

Where human learning can lead to understanding and knowledge, the result of machine learning is a set of technical parameters – often hard for humans to interpret.

Thinking of ML systems as a perpetual technical flywheel, trained cheaply on the job, is one reason the total cost of ownership for ML systems is often underestimated. In fact, it takes a small village simply to keep ML systems from deteriorating, let alone improve them.

On top of this, because machines gain no understanding from their “learning”, the pattern-recognition processes they are engaged in tend to produce precious little in the way of insight. Isolating significant correlations and deriving causal hypotheses is not machine learning’s strong suit; best to leave that empirical research to the economists, social scientists and psychologists.

Time for a rethink?


A lot of enthusiasm, trust and money has been wasted by thinking of data as oil, artificial neural networks as mini brains, and pattern recognition as human learning. But we need not necessarily throw the baby out with the bath water.

Certainly, a more sober, nuanced look at what AI is and can do would benefit everyone. But now is not the time to over-correct – because actual intelligent and learning systems will, one day, be built. It could be 100 years away, but small steps are taken every day that bring reality closer to the more problematic connotations of these metaphors.

Yes, big data is not as straightforward to exploit as oil. But global datafication and digitalisation is still in its infancy. An ecosystem of data producers and consumers has begun to emerge, and the standards it spawns will make data more fungible. If universally accepted data standards were established for thousands of niches, data could even become tradable. Patient data is already well on the way to having its “Brent crude moment”.

Yes, artificial neural networks cannot be scaled up to ape all the functions of the human brain. But super-human pattern recognition has vastly more applications than we have seen so far – because we have only just started looking in the right places, with the likes of ChatGPT and MidJourney, and because datafication is only now moving into less niche areas. We tend to both overestimate neural networks’ capabilities and underestimate their usefulness.

Yes, machine learning does not lead to knowledge. But the race is on to integrate pattern recognition, knowledge representation and probabilistic reasoning into a unified algorithm. The key question here is neither if nor when, but where the first narrow but practically useful applications will emerge. However narrow, this will be a huge stride forward – from pattern recognition towards a level of thinking.

Breakthrough progress will probably come first in domains such as biomedical knowledge. The vast corpora of scientific publications in this field, containing specifically defined types of information that lend themselves to algorithmic extraction, are already being scanned by ML systems. Add a “reasoning layer” further downstream to make connections within this data, and we may start to see valuable derived insights being generated and new paths suggested.

To unlock this potential, and ensure we don’t get caught up in delusional distractions, we need mental models that enable strategic decision making combined with deep technical know-how. Moreover, rather than approaching AI projects as software engineering, we should be thinking of them as strategic innovation projects. Get in touch if you need support, or just a quick first or second opinion.


Further reading

Talking about large language models (Murray Shanahan) unpacks – in a non-technical way – our tendency to anthropomorphise ChatGPT and other technologies, and how that can lead to wrong conclusions.

Daten sind nicht das neue Öl (Paul von Bünau, Sven Jungmann in Der Tagesspiegel) discusses how the “data is the new oil” metaphor shapes how organisations think about and strategically approach artificial intelligence.

What if we chose new metaphors for artificial intelligence? (European Parliament, Scientific Foresight) examines how metaphors shape the debate around AI regulation and how more considered frameworks can help reconcile differing positions.

 