Can AI compose music? The boundaries between art and technology

Artificial intelligence, once the domain of science fiction, is now entering recording studios, composing songs and imitating the voices of the biggest stars. This technological revolution is forcing us to fundamentally redefine the concepts of authorship and authenticity in art, raising key questions about the future of human creativity.

In the spring of 2023, the song ‘Heart On My Sleeve’, allegedly performed by Drake and The Weeknd, swept through the music world. The track, in fact the work of an anonymous creator and AI, went viral before being pulled from streaming platforms over copyright claims. The incident made it starkly clear that the discussion about AI in art is no longer theoretical. The question is no longer whether a machine can make music, but what it means for us now that it does. Is AI just a powerful new tool, or an autonomous creator threatening human creativity?

Anatomy of a digital composer

Modern AI-generated music is the fruit of decades of work on algorithmic composition, with roots reaching back to the 1950s. The real breakthrough, however, came from two key deep learning technologies. The first is the Generative Adversarial Network (GAN), which stages a duel between a ‘forger’ (the Generator) and a ‘critic’ (the Discriminator). The Generator produces music and the Discriminator learns to tell it apart from real recordings, forcing the Generator towards near-perfect imitation.
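
For readers who want to see that duel in code, here is a minimal, purely illustrative sketch of a single GAN training step. It assumes PyTorch, and the tiny layer sizes and the train_step helper are toy choices made for clarity, not the design of any real music model:

```python
# Minimal sketch of the GAN 'duel': a forger (Generator) versus a critic
# (Discriminator). Real music GANs work on spectrograms or raw audio;
# these toy vectors merely stand in for short clips.
import torch
import torch.nn as nn

LATENT_DIM, CLIP_DIM = 16, 64  # assumed toy sizes

generator = nn.Sequential(          # the 'forger'
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, CLIP_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(      # the 'critic'
    nn.Linear(CLIP_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),              # real-vs-fake score (a logit)
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_clips: torch.Tensor) -> None:
    batch = real_clips.size(0)
    fake_clips = generator(torch.randn(batch, LATENT_DIM))

    # 1) The critic learns to tell real clips from forged ones.
    opt_d.zero_grad()
    d_loss = (bce(discriminator(real_clips), torch.ones(batch, 1)) +
              bce(discriminator(fake_clips.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) The forger is rewarded whenever the critic is fooled.
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake_clips), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```

In a real system both networks would be far larger and would operate on audio rather than toy vectors, but the push and pull between the two losses is exactly the forger-versus-critic dynamic described above.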

The true revolution, though, came with the Transformer architecture. Unlike older recurrent models, which tended to ‘forget’ the opening sections of a piece, the Transformer uses an attention mechanism: at every step the model can ‘look’ back over the entire sequence generated so far and deliberately return to earlier motifs, which gives long pieces their compositional consistency. This is what allows tools such as AIVA, Suno or Google Magenta to generate not just simple melodies but complex, multi-minute pieces in hundreds of styles.
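
The attention idea itself fits in a few lines. The sketch below, again assuming PyTorch, uses toy dimensions and random matrices in place of learned projection weights; it shows how each step of a piece computes weights over everything generated so far and can therefore ‘look back’ at earlier motifs:

```python
# Minimal sketch of single-head self-attention with a causal mask:
# each position may attend only to itself and to earlier positions.
import math
import torch

SEQ_LEN, D_MODEL = 32, 64              # 32 steps so far, 64-dim embeddings (toy values)
notes = torch.randn(SEQ_LEN, D_MODEL)  # embedded sequence generated so far

W_q = torch.randn(D_MODEL, D_MODEL)    # projections would normally be learned
W_k = torch.randn(D_MODEL, D_MODEL)
W_v = torch.randn(D_MODEL, D_MODEL)

Q, K, V = notes @ W_q, notes @ W_k, notes @ W_v

# Causal mask: -inf above the diagonal blocks attention to future steps,
# while every step can still see the whole piece written so far.
mask = torch.triu(torch.full((SEQ_LEN, SEQ_LEN), float('-inf')), diagonal=1)

scores = Q @ K.T / math.sqrt(D_MODEL) + mask
weights = torch.softmax(scores, dim=-1)  # one row per step: where it 'looks'
context = weights @ V                    # motif-aware representation of each step
```

Real Transformer-based music models stack many such layers with multiple attention heads, but this softmax weighting over the whole history is the core of why they do not ‘forget’ the opening bars.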

AI in practice: from assistant to partner

Artists are already exploring a wide range of collaborations with AI. In 2018, Taryn Southern released the album I AM AI, on which algorithms (including Amper and AIVA) generated the basic melodic and harmonic structures. The artist then built on these sketches, adding lyrics, vocal lines and the final arrangement, treating AI as an inexhaustible source of inspiration.

Taking a completely different route is avant-garde artist Holly Herndon. Her AI, dubbed ‘Spawn’, has been trained solely on her own voice and her band’s recordings. In this model, the AI is not a tool, but a full-fledged, improvising member of the ensemble – a ‘digital twin’. An extreme example of AI’s analytical capabilities is the project to complete Franz Schubert’s Symphony No. 8 ‘Unfinished’, where the algorithm, trained on the composer’s output, generated suggestions for missing movements.

Algorithmic dissonance: the limits of artificial creativity

Despite its impressive capabilities, AI music still struggles with fundamental limitations. The most common complaint is the lack of ‘soul’: the emotional depth that comes from lived human experience. AI learns patterns but does not understand their emotional context, so its compositions, while technically correct, can feel empty and predictable. Producers also point to specific technical flaws: flat, spaceless mixes and trouble reproducing the dynamic transients that give a recording its clarity.

The current debate is deeper than those surrounding the introduction of synthesisers or digital audio workstations (DAWs). Those technologies automated instruments and the studio. AI is the first technology to attempt to automate the very kernel of creativity – ideation.

Legal and ethical issues: whose tune is it?

The development of AI has outpaced legal regulation, creating confusion around copyright. Under Polish law, only a human being can be the author of a work. A piece generated fully automatically by a machine is not protected and falls into the public domain. Protection is possible only if the human contribution to the creative process (for example through editing and arrangement) is significant.

The biggest controversy, however, concerns how the models themselves are trained. They learn from gigantic datasets containing millions of copyrighted songs, most often without the consent or remuneration of the original creators. This raises concerns about the economic future of artists. A CISAC report predicts that while the generative music market could be worth $64 billion by 2028, human creators could lose up to 25% of their revenue. Composers of commercial and functional music, where speed and low cost count for more than a unique artistic vision, are most at risk.

Final instrument

Can AI compose? Yes, it can generate music that is increasingly complex and aesthetically satisfying. But it still lacks the intention, experience and context that constitute the essence of human art.
