In a time when creators of video games are trying to make gaming an ever more immersive and realistic experience, AI is beginning to have an impact in the field. Everyone who has ever played a video game knows that music plays an important part in conveying the atmosphere of changing game states, and thus needs to adapt dynamically as the gamer makes choices and variations of the storyline unfold. However, as storylines grow more complex and the number of possible choices practically explodes, the task of wiring musical changes into a game's engine, while ensuring smooth transitions, eventually becomes too complex to handle manually. The task clearly calls for some form of automation – but does AI have the ability to create effective video game music?
Using AI to generate adaptive video game music is not entirely new; a wealth of research has been carried out both in academia and the gaming industry. Deep Adaptive Music (DAM), an innovative solution developed by Melodrive, extends adaptive music by leveraging AI more intensively. DAM generates music in real time with an AI system running directly inside an interactive experience, adapting to emotional states within the game on a granular level. By responding both to the user's interaction and to changing game states, DAM embodies the idea of co-creation between the AI and a human composer or player. In my talk, I will first give an overview of past research in the field, then dive into the mechanics behind DAM, and finally consider AI's potential in creative tasks.
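To make the idea of granular adaptation concrete, here is a minimal, purely illustrative sketch of an adaptive-music controller. It is not Melodrive's actual system or API; all class names, state labels, and parameter values are hypothetical. The sketch shows one common pattern: each emotional game state maps to target musical parameters, and the controller eases the current parameters toward the target on every game tick, so a state change produces a smooth transition rather than an abrupt cut.

```python
# Hypothetical sketch of adaptive game music: map emotional game states to
# musical parameters and interpolate toward them each tick.
# All names and values are illustrative, not Melodrive's actual API.

from dataclasses import dataclass

@dataclass
class MusicParams:
    tempo_bpm: float   # playback tempo in beats per minute
    intensity: float   # 0.0 (calm) .. 1.0 (frantic)

# Each emotional game state gets a target parameter set.
STATE_TARGETS = {
    "explore": MusicParams(tempo_bpm=90.0, intensity=0.2),
    "tension": MusicParams(tempo_bpm=110.0, intensity=0.6),
    "combat":  MusicParams(tempo_bpm=140.0, intensity=0.9),
}

class AdaptiveMusicController:
    def __init__(self, initial_state: str = "explore"):
        self.current = STATE_TARGETS[initial_state]

    def update(self, game_state: str, blend: float = 0.1) -> MusicParams:
        """Move the current parameters a fraction `blend` toward the
        target for `game_state`, yielding a smooth transition."""
        target = STATE_TARGETS[game_state]
        self.current = MusicParams(
            tempo_bpm=self.current.tempo_bpm
                      + blend * (target.tempo_bpm - self.current.tempo_bpm),
            intensity=self.current.intensity
                      + blend * (target.intensity - self.current.intensity),
        )
        return self.current

# The game enters combat; over ~30 ticks the music ramps up smoothly.
controller = AdaptiveMusicController()
for _ in range(30):
    params = controller.update("combat")
print(params.tempo_bpm, params.intensity)  # close to, but not yet at, the combat target
```

A real system like DAM composes the music itself rather than merely steering parameters, but the same feedback loop between game state and musical output is at the heart of any adaptive-music design.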
The event took place on April 4th, 2019.
Find Valerio’s slides on Slideshare.