Classical Music Has Lost a Generation. Blame the Metadata (in Part)


Classical music has lost a generation’s worth of music lovers, beginning in the late 1990s with the rise of file-sharing and Napster. A significant part of the reason may be metadata.

Metadata are the tags that travel with every recorded audio track. For a piece of music or a recording to be found, it needs to be tagged. The tags come (mostly) in three varieties:

  1. Descriptive – the nuts-and-bolts listing: artist, composer, title, date, etc.
  2. Ownership/Rights – who owns the music, licenses, royalty splits, etc.
  3. Recommendation – genre, mood, mix, use, etc.

Each track travels with hundreds of tags allowing it to be found and attributed. But the commercial music industry has complained for years about the inadequacies of the metadata tagging system, and the system is rife with errors and omissions. This is important because it impacts how music is found, who gets paid for it, and even whether or not it is likely to find an audience and become popular.
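The three varieties above can be pictured as groups of tags attached to a single track. Here is a minimal sketch in Python; the field names and values are illustrative stand-ins, not any real tagging standard:

```python
# Illustrative only: field names and values are hypothetical,
# not drawn from any actual tagging standard.
track_metadata = {
    "descriptive": {  # nuts and bolts: who, what, when
        "composer": "Antonin Dvorak",
        "work": "Symphony No. 9 in E minor, Op. 95",
        "movement": "II. Largo",
        "artist": "Czech Philharmonic",
        "conductor": "Vaclav Neumann",
        "release_date": "1982",
    },
    "rights": {  # who owns it and how money flows
        "label": "Example Records",  # hypothetical label
        "license": "all-rights-reserved",
        "royalty_splits": {"orchestra": 0.5, "label": 0.5},
    },
    "recommendation": {  # how a service decides when to surface it
        "genre": "classical",
        "mood": ["solemn", "lyrical"],
        "tempo": "slow",
    },
}

def flatten(meta):
    """Flatten the nested groups into the flat key-value tags
    that actually travel with an audio file."""
    flat = {}
    for group, tags in meta.items():
        for key, value in tags.items():
            flat[f"{group}.{key}"] = value
    return flat

tags = flatten(track_metadata)
```

A real track carries far more fields than this, and every field is a place where an error or omission can make the recording invisible or misattributed.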

Classical music has been at a particular disadvantage with metadata because its classifications are so different from pop music’s. Search “Taylor Swift” and you get a list of her songs. Type in “Dvorak” and up comes a jumble of pieces: random orchestras, conductors and performers; movements of sonatas and symphonies separated from one another; and little means of sorting performances of the same piece.
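To see why classical results come back jumbled, consider what a classical-aware service has to do that a flat pop-style index does not: reassemble individual movement tracks into complete recordings of a work. A toy sketch, assuming each track carries separate composer, work, movement and performer fields (which flat pop-style tags often lack):

```python
from collections import defaultdict

# Toy catalog rows; titles and performers are illustrative.
tracks = [
    {"composer": "Dvorak", "work": "Symphony No. 9", "movement": 1, "performer": "Orchestra A"},
    {"composer": "Dvorak", "work": "Symphony No. 9", "movement": 2, "performer": "Orchestra A"},
    {"composer": "Dvorak", "work": "Symphony No. 9", "movement": 1, "performer": "Orchestra B"},
    {"composer": "Dvorak", "work": "Cello Concerto", "movement": 1, "performer": "Orchestra A"},
]

def group_recordings(tracks):
    """Group movements back into complete recordings: one entry per
    (composer, work, performer), with movements kept together in order."""
    recordings = defaultdict(list)
    for t in tracks:
        key = (t["composer"], t["work"], t["performer"])
        recordings[key].append(t["movement"])
    return {key: sorted(movs) for key, movs in recordings.items()}

recordings = group_recordings(tracks)
# Three distinct recordings, rather than four interleaved tracks.
```

When the composer is crammed into an “artist” field and the work and movement are mashed into one title string, this grouping key does not exist, and the search page shows exactly the jumble described above.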

Simply finding classical music is tough. Now consider how pop listening habits have been transformed by the streaming algorithms of Spotify and Pandora, which let you state a preference and have music effortlessly fed to you, and you have some idea of how far classical music has been left behind.

It wasn’t until 2015, when the classical music streaming service Idagio launched (followed in 2018 by Primephonic, which was acquired by Apple and relaunched as Apple Music Classical), that classical music fans finally had a useful digital music discovery platform. Both services are a huge step forward from platforms like Spotify and Pandora, which are a disaster for finding and listening to classical music. In the meantime, though, from the mid-90s to 2015, a generation of listeners who might have become regulars was essentially blocked from exploring by a series of digital speed bumps.

Think of it this way: you can publish a blog or put products up for sale on your website. Technically it can be found, if you know exactly where to look. But if your site isn’t designed and optimized with metadata in ways that Google likes, it ends up on page 463 of Google’s search results, where it’s vanishingly unlikely anyone will find it. In other words, you’re pretty much invisible.

This also happens with social media algorithms. You can have 10,000 followers and publish something. But if the content isn’t “optimized” for what the algorithm is programmed to reward, even those 10,000 followers (people who’ve previously expressed an interest in seeing what you have to say) will never be shown it.

One of the ways I think AI will help musicians and music lovers is by liberating them from the current band-aid metadata system entirely. As I wrote in a previous post:

If the AI is looking at the actual sound data rather than depending only on tags, music can be directly compared to other music (rather than to generic descriptors). Watermarks (perhaps another term for metadata, only not removable) could identify artist and production data, but also — incorporating blockchains — calculate plays, payments, splits and ownership. Perhaps most important, the “recommendation” tagging, which is now so imperfect, could accurately compare like with like.


It is this third category, recommendation, that could change the way we find and listen to music. If data is just data, and interchangeable, the “creation” data (artist, ownership, payment rights, etc.) will be able to interact with “listener” data that tells us how a track is being used, remixed, built on, and so forth. That kind of data not only informs listeners’ choices but also opens a potential dialog with creators, helping them make new work. Audiences have always played a role in the creative process; AI suggests the potential for a new level of co-creation and collaboration.
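Comparing sound directly, as the quoted passage suggests, can be sketched minimally: represent each track as an embedding vector and rank other tracks by cosine similarity. The three-dimensional vectors below are toy stand-ins; a real system would derive much larger embeddings from a learned audio encoder.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "audio embeddings"; track IDs are illustrative.
library = {
    "dvorak_largo": [0.9, 0.1, 0.2],
    "mahler_adagietto": [0.8, 0.2, 0.3],
    "punk_anthem": [0.1, 0.9, 0.8],
}

def most_similar(query_id, library):
    """Rank every other track by similarity to the query track."""
    query = library[query_id]
    return sorted(
        (tid for tid in library if tid != query_id),
        key=lambda tid: cosine_similarity(query, library[tid]),
        reverse=True,
    )

nearest = most_similar("dvorak_largo", library)[0]
```

The point of the sketch: similarity falls out of the sound itself, with no genre or mood tag anywhere in sight.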

I spent part of my weekend playing with AI music-creation apps; there are now hundreds of them. Most give you a few parameters to adjust, and the results are pretty unsatisfying. But one, Udio, lets you interact with it with voice prompts, as you would with ChatGPT. You can describe an idea, a scenario, a mood, a genre, instrumentation, style (pretty much any description you can imagine) and the AI will create music in response.

You can endlessly tweak the description to modify the music, querying the AI in conversation. I tried creating an orchestral fanfare, and it came up with some interesting ideas. I even gave it some technical direction: eight first violins rather than sixteen, and more vibrato at the top of a long phrase. It’s still clunky, but it’s easy to imagine that in the future composers might find this a more powerful way to get music from their brains to finished product than traditional notation.

We’ve been stuck in a crude technology era governed by the notion that because something technically works (for example, all music has metadata), it also works functionally (music is easily found). Social media platforms and metadata structures have proved over and over that this is not true. It’s entirely possible we’ll come to think of our crudely imperfect system of text-descriptor tags as the Stone Age of content discovery, as dynamically comparative, AI-enabled metadata across not just music but images, video and everything else becomes the new standard.
