Metaphors help us succinctly convey a lot of information, explaining new stuff in terms of the familiar. Taken too literally, however, even the good ones become dangerous. Are terms such as ‘machine learning’ and ‘neural networks’ doing more harm than good?
Airbnb for cars. Time is money. Shattered dreams and broken hearts. We use metaphors as shortcuts to quickly convey the essence of new concepts, to transfer intuition from the familiar to the new. They can be powerful and tenacious; long after the new has become familiar, that initial image remains – a subconscious mental frame, shaping our thinking.
Good metaphors facilitate swift and sure-footed decisions, circumventing the need to learn everything from the bottom up. Car-sharing app Turo helps people rent their cars out when they’re not using them, just as Airbnb does with homes. For those of us paid hourly, time truly is money. And while people who are broken-hearted don’t need urgent surgery for a blocked aorta, the pain they feel is no less intense. These are all metaphors that, by and large, work.
Ill-fitting metaphors, on the other hand, can lead us astray. But where do AI’s most commonly used metaphors stand on a scale from dangerously deceitful to essentially accurate?
First things first: even the term “artificial intelligence” – bandied around with such regularity that few would pause to think about its origins – is a metaphor. Its acceptance into common parlance has fuelled both heightened expectations of AI’s potential and paranoia surrounding its “intentions”. After all, human intelligence may have brought us Newton and Einstein, but it also brought us Pol Pot and Hitler. Hollywood wouldn’t want it any other way.
Delving deeper into AI, a world built on mathematics, statistics and algorithms, metaphorical explanation just makes things easier. After all, how many people intuitively “get” probability distributions and matrix multiplication? Metaphor can help position abstract and elusive concepts in familiar realms.
And that’s partly how we come to make statements such as …
‘Data is the new oil’
Before artificial intelligence and machine learning elbowed their way into the mainstream, we had “big data” and its mantra, “Data is the new oil”, popularised in a 2017 piece in The Economist.
Suddenly, data was – quite correctly – perceived not as a mere byproduct of IT systems, but something that had value of its own, a key raw ingredient. Data in a CRM system, for example, wasn’t just there to make it function: it could be used to create something new, such as an algorithm to target a specific customer segment.
Questions of data were propelled upward from the mundane realm of the IT department to the strategic level. Just like oil – the humble hydrocarbon with a permanent place on the geopolitical agenda – data took a seat at the top table. Who would have predicted, 10 years ago, that boards would be employing Chief Data Officers? So far, so oily.
But then the comparison starts to wear thin. Unlike oil, data is not a homogeneous, fungible commodity. There is no universal value-generating mechanism – no combustion engine – for data taken out of context. Without an application that tessellates precisely with this specific data, it is worthless.
The analogy also prompts needless, directionless accumulation. If data really were the new oil, amassing as much of the stuff as possible would be a sound strategy. But investing in generic, application-agnostic IT data infrastructure – a mistake many companies make – won’t yield you valuable data. There may be general patterns for data architectures, but there’s no such thing as a general “refinery pipeline”. Every application needs to tailor its own.
You can also fall into the trap of thinking the data you currently hold has intrinsic value – is a key strategic asset to be exploited. This is not the best starting point for developing a cohesive data and AI strategy. You can’t build a market for “diesel data” just because you have a surplus of the stuff. Instead, derive data and AI applications rigorously from customer value. The existing data landscape may help generate ideas but should neither limit nor direct our thinking.
‘Neural networks’

Like “artificial intelligence”, “neural networks” – as a metaphor for a family of algorithms – evokes a wide range of powerful associations. Thinking, consciousness, autonomous decision making, agency, intent, emotions … it suggests all these very human traits.
For computer scientists, it’s an analogy that helps convey the topology of information processing through several interconnected layers; it also captures the flexible modular nature of this computing model, where complexity can be arbitrarily increased by adding further layers.
Moreover, from a systems engineering perspective, the term “neural” makes an important distinction clear: unlike traditional software, NN-based systems are not explicitly programmed but implicitly “trained” – by feeding in data which exemplifies the desired behaviour.
Problems arise largely because people forget that this is a metaphor – and so, by its nature, an imperfect and imprecise association. Artificial neural network algorithms are, in fact, a highly simplified approximation of certain aspects of how information is processed in the human brain; they don’t even come close to being a cohesive model of cognitive processes.
And so, despite excelling in certain narrow tasks (such as detecting objects in a picture), the actual capabilities of artificial neural networks lag far behind human cognition. A more useful way to think of neural networks is as powerful machines trained to recognise pre-defined patterns.
Most of the big fears and horror scenarios, such as the singularity, can be pinned on our anthropomorphising of neural networks. But in fact, there is no plausible path by which current neural networks come anywhere close to human-level capabilities. This is not to say that large-scale super-human pattern recognition machines do not pose real dangers and ethical issues. However, these threats are more subtle and nuanced than the exciting, yet unrealistic, fictions played out on our screens.
Similarly, expectations for neural networks’ potential become muddled when we assess situations in terms of human thinking. We typically get two outcomes. First, neural networks are expected to solve tasks that require more than pattern recognition, such as reasoning or knowledge representation. And second, neural networks are not employed where they could excel: spotting complex patterns at super-human scale and speed, and at zero marginal cost.
So, while it would be too much to expect a neural network to read a single scientific paper and capture all of its knowledge accurately, it could scan a million publications superficially for certain semantic patterns in less than 30 minutes – something no human could ever do. In failing to think about the relative strengths and weaknesses of humans and machines in this way, we let many opportunities such as this pass us by.
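That kind of superficial, high-volume scan can be sketched in a few lines of Python. This is a toy illustration only – the corpus, the pattern and every name in it are hypothetical – but it shows the shape of the task: a narrow, pre-defined semantic pattern applied indiscriminately across a corpus far larger than any human could read.

```python
import re

# Hypothetical example: superficially scanning paper abstracts for one narrow
# semantic pattern ("<Agent> inhibits <Target>") -- shallow, high-volume
# matching that machines do well and humans cannot scale to.
PATTERN = re.compile(r"\b([A-Z][\w-]+) inhibits ([A-Z][\w-]+)\b")

def scan(abstracts):
    """Return every (agent, target) pair matched across a corpus of abstracts."""
    hits = []
    for text in abstracts:
        hits.extend(PATTERN.findall(text))
    return hits

corpus = [
    "We show that Aspirin inhibits COX-1 in vitro.",
    "No inhibitory effect was observed for the control compound.",
    "Our data suggest Imatinib inhibits BCR-ABL signalling.",
]
relations = scan(corpus)
```

The scan is fast and tireless, but it “understands” nothing: a sentence phrased as “inhibitory effect” slips straight past it – exactly the superficiality the metaphor tends to hide.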
‘Machine learning’

Often thought of as AI’s more practical, down-to-earth sister, “machine learning” is another of those metaphors treated as quasi-literal.
What the term “learning” captures well, similar to the “training” analogy for neural networks, is the key distinction from traditional software development: the desired behaviour of an ML system is not explicitly and transparently programmed, but adjusted to mimic a set of training data.
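The contrast can be made concrete with a toy sketch in Python (the task, names and data are all hypothetical): the same behaviour, first explicitly programmed, then “trained” by fitting a single parameter to labelled examples.

```python
# Toy sketch (all names hypothetical): one behaviour, two routes to it.

# Traditional software: a human writes the rule down and can read it back.
def classify_explicit(text):
    return len(text) > 20  # threshold chosen and understood by a person

# Machine "learning": the threshold is fitted to labelled training data.
def train_threshold(examples):
    """Pick the length threshold that best separates the labelled examples."""
    def accuracy(t):
        return sum((len(text) > t) == label for text, label in examples)
    return max(sorted({len(text) for text, _ in examples}), key=accuracy)

training_data = [
    ("hi", False),
    ("ok, see you", False),
    ("could you send the quarterly report?", True),
    ("meeting moved to 3pm, room B, bring slides", True),
]
threshold = train_threshold(training_data)  # a fitted number, not a legible rule

def classify_trained(text):
    return len(text) > threshold
```

Note what “training” delivers here: not a rule anyone wrote or can justify, just a parameter that happens to mimic the examples – which is all that most ML systems are, at vastly greater scale.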
And yet while the process may superficially resemble human learning, the result is quite different. Whereas human learning is “the process of acquiring new understanding, knowledge, behaviours, skills, values, attitudes and preferences”, according to Wikipedia, machines merely “learn” to recognise patterns in data – and that data must arrive in a specific, standardised format. Yes, machines can “learn” to play Go, recognise our faces or scan texts. But any “understanding” they may have cannot accommodate the unexpected. Enlarge the Go board, tilt the camera, or use rare or nonsensical words and the algorithms soon stumble.
And is machine learning a continuous, or perhaps even self-directed process? Sadly – or perhaps fortunately – not. When humans learn, we build on prior experience and use our knowledge for subsequent tasks; in machine learning, training is a discrete and rather artisanal human-controlled process, probably better captured by the term “calibration”.
Where human learning can lead to understanding and knowledge, the result of machine learning is a set of technical parameters – often hard for humans to interpret.
Thinking of ML systems as a perpetual technical flywheel, trained cheaply on the job, is one reason the total cost of ownership for ML systems is often underestimated. In fact, it takes a small village simply to keep ML systems from deteriorating, let alone improve them.
On top of this, because machines gain no understanding from their “learning”, the pattern recognition they perform yields precious little in the way of insight. Isolating significant correlations and deriving causal hypotheses is not machine learning’s strong suit; best to leave that empirical research to the economists, social scientists and psychologists.
Time for a rethink?
A lot of enthusiasm, trust and money has been wasted by thinking of data as oil, artificial neural networks as mini brains, and pattern recognition as human learning. But we need not throw the baby out with the bathwater.
Certainly, a more sober, nuanced look at what AI is and can do would benefit everyone. But now is not the time to over-correct – because actual intelligent and learning systems will, one day, be built. It could be 100 years away, but small steps are taken every day that bring reality closer to the more problematic connotations of these metaphors.
Yes, big data is not as straightforward to exploit as oil. But global datafication and digitalisation are still in their infancy. An ecosystem of data producers and consumers has begun to emerge, and if the standardisations it spawns were to establish universally accepted data formats for thousands of niches, data would become more fungible and, perhaps, tradable. Patient data is already well on the way to establishing its Brent crude.
Yes, artificial neural networks cannot be scaled up to ape all the functions of the human brain. But super-human pattern recognition has vastly more applications than we have seen so far – because we have not yet looked in all the right places, and because datafication is just beginning, restricted to niche areas. We tend to both overestimate neural networks’ capabilities and underestimate their usefulness.
Yes, machine learning does not lead to knowledge. But the race is on to integrate pattern recognition, knowledge representation and probabilistic reasoning into a unified algorithm. The key question here is neither if nor when, but where the first narrow but practically useful applications will emerge. However narrow, this will be a huge stride forward – from pattern recognition to a level of thinking.
My guess is that initial progress will come in domains such as legal tech or biomedical knowledge. The vast corpora of documents in these fields, containing specifically defined types of information that lend themselves to algorithmic extraction, are already being scanned by ML systems. Add a “reasoning layer” further downstream to make connections within this data, and we may start to see valuable derived insights being generated and new paths suggested.
Think of it – at least until the metaphor starts to fail – as Google Maps for biochemists.