When Gorillaz burst onto the scene in 2001, it felt like a premonition of the future. The post-Blur brainchild of Damon Albarn, who composed the music, and artist Jamie Hewlett, who handled the animation, the virtual band's four digitally rendered members appeared in music videos, took part in live performances and even gave interviews.
Fast forward nearly 25 years and music creators are continuing to innovate and experiment with the latest technology. But what happens when a machine is responsible not only for a band’s image, but also their music? Such is the case with The Velvet Sundown, ostensibly a classic rock band inspired by the music of the sixties and seventies (the four-piece’s Spotify bio claims they are ‘fronted by vocalist and mellotron sorcerer Gabe Farrow’). Their first two albums racked up hundreds of thousands of streams upon their release in June.
Listeners, however, soon sensed something was off. After all, there were no live performances, no digital footprint and no concrete proof the musicians actually existed. By the end of the month, speculation that The Velvet Sundown might be fake was fuelled by apparently AI-generated press images and a line on their X account that read: ‘Not quite human. Not quite machine.’ A disclaimer on Deezer, too, noted that some tracks ‘may have been created using AI’. While the band’s official channels denied any involvement of AI, things grew stranger still when a supposed press representative contradicted those denials (that individual later admitted to being a hoaxer, prompting more confusion).
Finally, in early July, came an admission that the Velvet Sundown project had indeed relied on AI. Their updated Spotify bio now describes them as ‘a synthetic music project guided by human creative direction’. The music had been ‘composed, voiced and visualised with the support of AI,’ while the whole saga was described as an art project which ‘challenged the boundaries of authorship, identity and the future of music itself in the age of AI’.
The full extent of AI’s role in the band’s music remains undisclosed, though that role is clearly predominant, and the Velvet Sundown story has sparked major debate about AI’s presence within music. The story isn’t as unique as it may seem: Deezer, for example, has noted that AI-generated tracks make up around 10% of the music submitted to its platform daily. Many creators are embracing assistive AI, such as technical and collaborative tools that aid the human-led composition process, but the use of generative AI to create entire works from simple text prompts is more controversial, raising questions around ownership, crediting and licensing.
In UK copyright law, human authorship is an essential requirement for a work to be considered original and therefore protected by copyright. This principle is not unique to the UK; many jurisdictions, including the United States, share the requirement. The US Copyright Office, in fact, has explicitly stated that AI-generated music created without human involvement is not protected by copyright.
But how can human authorship be defined and quantified in the age of generative AI? In the case of The Velvet Sundown’s music, the statement’s description of it as ‘guided by human creative direction’ invites scrutiny. How do we assess how much human involvement the person or people behind the project actually had?
There are further challenges when it comes to the topic of infringement by AI companies. It’s common knowledge that AI models are trained on existing creative works, which often involves the unauthorised use of copyrighted material. As creators’ rights campaigner Ed Newton-Rex recently told BBC News: ‘It's theft dressed up as competition. AI companies steal artists' work to build their products, then flood the market with knock-offs.’
It’s not just the creative community who are rejecting these practices. A 2024 poll commissioned by UK Music found that over 80% of UK adults agree that the law should prevent an artist’s music from being used to train an AI application without their knowledge or permission.
AI tools can also be used to create deepfakes, in which an artist’s voice or likeness is imitated to produce fake recordings, often under the guise of unreleased material. Recent high-profile examples include the AI-created ‘lost’ Oasis album and a viral ‘collab’ between Drake and The Weeknd that turned out to be AI-generated. In the wake of that Drake and Weeknd track, which racked up over 600,000 Spotify streams before it was taken down, Universal Music Group told Billboard that such songs ‘demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists’.
So what’s being done about this? Organisations like PRS for Music are playing a central role in stewarding the responsible development of AI as it relates to music. In January 2024, PRS published its guiding principles on AI, centring on the protection of human creativity, the right of copyright owners to choose whether their works are used, transparency over AI inputs and outputs, and cooperation with partners across the globe. These principles establish a framework through which PRS will work to influence global policy decisions on AI, advocating for the rights and interests of human creators.
In recent months, public pressure to protect the rights of artists and songwriters in the age of AI has increased, with the likes of Sir Elton John, Sir Paul McCartney and Thom Yorke all speaking out in support. Back in February, over 1,000 artists, including Albarn and PRS Members’ Council President Crispin Hunt, came together to release a silent album in protest against the UK government’s proposed changes to the law governing how AI models use copyrighted work. The album was accompanied by a statement from Kate Bush, who wrote, rather poignantly: ‘In the music of the future, will our voices go unheard?’