Hot topics: AI winter and climate change
When the parties to the United Nations Framework Convention on Climate Change (UNFCCC) gather in Paris these days, finding an agreement on effective limits for carbon dioxide emissions will be the top priority on the agenda. Previous attempts to create a long-term deal involving all major CO2-emitting countries have notably failed, increasing public pressure on the negotiators ahead of the convention. Global warming will soon pose a serious threat to entire countries and communities. So why do countries fail to reach agreement on carbon-dioxide cuts? Interestingly, there are some striking similarities to the current state of the discussion around research in artificial intelligence (AI).
Artificial intelligence generally refers to all efforts undertaken to enable software (or computers) to exhibit intelligent, human-like behavior when confronted with complex problems. The potential upsides of functional systems built on artificial intelligence are, at least in theory, limitless: natural language processing and (deep) machine learning are among the technologies that could lift economies to a new stage of automation and optimization.
However, the term “artificial intelligence” itself carries an optimistic promise of the future against which the field will always be benchmarked. Sophisticated software resembling human intelligence across domains, with all its facets, is something that has fascinated mankind for decades. As early as the 1970s, governments optimistically poured millions of dollars into AI research, primarily for applications in defense technology. But the enormous optimism soon suffered its first major setback, when ambitious speech-recognition projects failed to deliver useful results and internal studies by UK and US defense agencies concluded that AI would not find productive areas of application anytime soon.
Public interest in AI decreased, and a phase of significantly reduced and restrained funding for AI research followed, a phase commonly termed the “AI winter”. Ever since, AI has been subject to an inevitable dynamic: because its promises are limitless, it is prone to hype and extreme optimism as soon as there is reasonable evidence of a marketable AI-based technology. Take a thorough look at the recent crop of rising technology start-ups, and hardly any will fail to mention the deep-learning, machine-learning, natural-language-processing, and neural-network algorithms that enable it to provide superior service to its customers.
Is this evidence of an AI spring, or just another hype phase leading up to an AI winter of shattered optimism? While a broad share of AI practitioners would probably deny any kind of optimism, others (among them Elon Musk and Stephen Hawking) have recently published a widely noticed statement calling for governmental action to prevent AI from spurring the development of weapons systems that could potentially fall into terrorists’ hands (a kind of “negative” optimism, as it implies that technological development in AI will accelerate quickly).
No externalities – no problem?
Advances in industrial technology have undoubtedly led to a large increase in economic welfare. But they have also caused heavy externalities such as global warming. Efforts to mitigate these negative effects are required at both the regulatory and the technological level.
AI research, on the other hand, has no direct negative externalities, independent of its area of application. To some degree, this is the core problem. While industrialization's direct negative externalities more or less lay out a clear path for future technological development (currently, for example, clean tech), AI research lacks this comfort. On the contrary, since any of its advances are beneficial, it is instead benchmarked against the bright science-fiction future suggested by the term “artificial intelligence” itself. But as we try to teach computers intelligence, we gain new insights into the nature of intelligence ourselves. Any step forward should thus be cherished, not demonized.
This logic, however, does not hold for the upcoming COP21 Paris negotiations on climate change. For individual players, the negative externalities of industrial production (e.g. global warming) often do not outweigh its positive impact, at least in the short term. While there is agreement that negative externalities should be limited, there is disagreement about how to do it. The parties involved thus do not actually agree ex ante on which steps are beneficial for limiting externalities while preserving the positive impact of industrial production on economic growth.
At least for AI, small steps are better than no steps
Just as the countries at COP21 in Paris will not find the one and only solution to limit global warming, there will not be one single great breakthrough that advances the AI agenda. On the contrary, it will require many small steps. Each of them may fall short of the grand vision of AI, just as any agreement at COP21 will fall short of the vision of effectively limiting global warming.
But with AI, at least, we can be assured that any progress contributes positively to the path toward the AI vision, whereas even a step to limit global warming can cause negative backlashes.
So, while the AI winter might persist on a broader scale, there are already areas that look more like AI spring. The same, of course, applies to the fight against global warming: there are plenty of success stories about carbon-dioxide cuts, but global warming has not been effectively stopped.