AI has made great leaps in intelligent decision-making systems, particularly where tasks are specific and repeatable. However, it still struggles to understand nuance and to create original content.

In the music industry, AI is well established. Artists such as Grimes have openly suggested that it will eventually overtake their own creative abilities. At present, there are many AI-powered music production tools: generators such as MuseNet and Amper, mastering services like LANDR, and rearrangement and auto-syncing platforms such as MatchTune. Applied together, these tools appear able to generate music, master it and distribute it, all without a human ever being involved.

Whilst this is technically possible, what is really happening is that the generator and mastering tools draw on a giant library of existing music patterns, all created by humans. They use these patterns to “create” and master music. For example:

Real audio is fed into an algorithm, which jumbles it up and produces multiple options. The tool then tests these results against its library of patterns before finally selecting one that matches. The tool isn’t being “creative”; it’s simply mashing up existing solutions and hoping you don’t notice. But this could change. AI is still in its infancy; if it were human, it would probably be a toddler.
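To make that loop concrete, here is a minimal generate-and-test sketch in Python. Everything in it is illustrative: the “pattern library”, the scoring function and the note sequences are toy stand-ins, not how any real generation tool is implemented.

```python
import random

# Hypothetical pattern library: note sequences lifted from existing human music.
pattern_library = [
    (60, 62, 64, 65),  # C D E F
    (60, 64, 67, 72),  # C major arpeggio
    (57, 60, 64, 69),  # A minor arpeggio
]

def generate_variations(seed, n=10, rng=random.Random(42)):
    """'Jumble up' the seed material: here, just shuffle the input notes."""
    variations = []
    for _ in range(n):
        notes = list(seed)
        rng.shuffle(notes)
        variations.append(tuple(notes))
    return variations

def similarity(candidate, pattern):
    """Crude score: count positions where the candidate matches a known pattern."""
    return sum(1 for a, b in zip(candidate, pattern) if a == b)

def pick_best(candidates):
    """Select the variation closest to something already in the library."""
    return max(
        candidates,
        key=lambda c: max(similarity(c, p) for p in pattern_library),
    )

seed = (65, 64, 62, 60)  # a human-made phrase: a descending scale fragment
best = pick_best(generate_variations(seed))
print(best)
```

Note that nothing here invents new material: the output is always a reshuffling of the human-made input, ranked against human-made patterns.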


Are music streaming algorithms more advanced?

All music streaming services employ some form of AI to sort and recommend tracks tailored to you, based on your listening history. Spotify is one of the leaders in using algorithms to curate personalised music. However, it doesn’t always get it right. Music is extremely hard to categorise: whilst many components of a song can be measured, the emotional reaction of the listener cannot, at least not without additional hardware.
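As a sketch of the measurable side of this, the snippet below recommends the candidate track whose audio features sit closest to a listener’s history. The track names and feature columns (tempo, energy, danceability) are made up for illustration, and this is not any particular platform’s algorithm; the point is that nothing in the feature vector records how a song makes a listener feel.

```python
import math

# Hypothetical feature vectors: (tempo_bpm, energy, danceability).
# Every column is measurable; none captures the listener's emotional reaction.
tracks = {
    "upbeat_pop": (120, 0.80, 0.70),
    "similar_pop": (118, 0.75, 0.72),
    "slow_ballad": (70, 0.20, 0.30),
}

def distance(u, v):
    """Plain Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def recommend(history, candidates):
    """Pick the candidate nearest the average of the listener's history."""
    profile = [sum(col) / len(history) for col in zip(*(tracks[t] for t in history))]
    return min(candidates, key=lambda t: distance(profile, tracks[t]))

# A listener who has only played "upbeat_pop" gets the similar-sounding track.
print(recommend(["upbeat_pop"], ["similar_pop", "slow_ballad"]))  # → similar_pop
```

A system like this will always choose the nearest measurable neighbour, which is exactly why it can confidently serve a “fun” playlist that feels completely wrong to the person hearing it.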

AI can only read emotion by comparing what it sees against an existing library of recorded human emotions. AI tools also lack the ability to produce music rooted in culture. They cannot truly inspire us with something new; they simply regurgitate statistical variations on learned patterns. This leads to narrow-minded curation and big mistakes.

For example, we ran a test for a client to discover how accurate the algorithm on a music streaming service was. We asked the platform to curate an “uplifting”, “positive” and “fun” playlist for a modern juice bar chain with a target audience of 18- to 35-year-old students and young professionals. The result? A mixture of karaoke covers and children’s songs. Need I say more?


Will AI take over our creativity?

Perhaps not now, or even in 15 years, but few experts in the field would disagree that AI is the future, not only for automated tasks but for science and art too.

In the meantime, we will have more AI-supported tools to help us. This is the era Accenture has branded “the missing middle” in the book Human + Machine.

The book illustrates how humans and machines might interact in the workplace in the near future. It suggests tasks could be organised according to whether they suit a machine or a human, though most will likely require co-operation between the two. Creative and judgement-based tasks, however, will still fall under the human-only activities.


At Altaura, we’re focused on delivering music curation designed by music curators (real people). At the same time, we respect new technologies and AI, using the tools they provide to our advantage. But the reality is that the algorithm doesn’t understand our clients; we do. We understand the emotion and the nuanced elements of sound. It’s our people that connect our clients to their audience, without big mistakes.