Daniel Levitin has a short article in Wired on the future of music, therapy and well-being. I do not like it.
Preliminary: Levitin is James McGill Professor Emeritus of Psychology and Neuroscience at McGill University, and the author of many books – I came to know him long ago through his This Is Your Brain on Music. Among people who are very smart about the science of what happens in our little grey cells when we listen to a song, he must rank near the top.
More preliminary: in the article, he says:
An emerging body of research allows us to take what had been anecdotes and place music on an equal footing with prescription drugs, surgeries, medical procedures, psychotherapy, and various forms of treatment that are mainstream and evidence-based. In the past two years alone, more than 8,000 papers have been published on the topic in peer-reviewed journals.
I do not doubt the scientific findings of this research. Can listening to music be a useful tool (amongst many) in helping people who suffer from chronic pain? It looks like it can. How about for alleviating stress? Indeed, let’s try it.
But.
It is a problem of modern times that findings from the domains of science are seamlessly turned into claims involving values. And Levitin moves from his 8,000 peer-reviewed papers to a view of what music is, and of what matters to the listener. He writes:
The future of music in health care extends from hospital to home, from illness to neurorehabilitation, mindfulness practices, and wellness. AI will help here—not in writing music, but in selecting the songs and genres that meet both an individual’s tastes and the desired therapeutic and wellness goals. By extracting key features from music and matching them to an individual’s preferences and needs, we can usher in a new age of personalized music medicine. In the same way that an individual’s DNA can guide decisions on treatment and which drugs are likely to be most effective, AI may one day extract the DNA of music to identify precisely what music will help meet an individual’s therapeutic needs.
Consider all the information about you in the cloud—your search history, location, who you are with, calendar, contacts list, and the kinds of things you view on social media. Certain companies also know a lot about your music tastes—what you listen to, what you skipped, the time of day you listen, and where you are when you’re listening. Smart devices that read your biometrics know your heart rate, heart rate variability, blood oxygenation level, respiration rate, skin conductance, body temperature, blood pressure—as well as how they fluctuate as a function of time of day and what activities you’re engaged in.
And they know about those activities, too—whether you’re running, walking, climbing steps, driving in a car, or sleeping. Of course, when you are sleeping, they know what sleep stage you’re in and how long you’ve been asleep. (They know if you’ve been sleeping, they know if you’re awake, they know if you’ve been bad or good, so be good for goodness’ sake!). Soon, you’ll have the option to subscribe to music on demand where the “demand” comes from your own biometrics, serving you music to calm you down, invigorate you for an exercise workout, help you focus at work, or treat ailments such as chronic pain, depression, Parkinson’s, and even Alzheimer’s.
There is a chasm between “different types of music have different effects on you” (we know that) and “we can (and, it is pretty clear that he is saying we should) have a music streamer track our biometrics to give us music designed to have a specific effect on us, based on what the algorithm thinks we need right now.” It treats music – and the implication would apply to all genres of art, no? – as a thing that exists to meet our biological needs first and foremost. And when you look at what he thinks we might want from music, it is a narrow set of needs: calm, exercise, focus at work.
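To see what is actually on offer once the language of DNA and medicine is stripped away, here is a toy sketch of the mechanism Levitin describes: extract features from tracks, read the listener’s biometrics, and match the two. Everything in it – the feature, the mapping from pulse to target, the scoring rule, the example tracks – is my own invention for illustration, not anything from the article:

```python
# A toy sketch of "personalized music medicine": extract a feature from
# each track, read the listener's biometrics, infer a target, and serve
# whatever track scores closest. The feature, the biometric mapping, and
# the scoring rule are all invented here for illustration.

from dataclasses import dataclass

@dataclass
class Track:
    title: str
    artist: str
    energy: float  # 0.0 (calm) to 1.0 (intense), an assumed extracted feature

def target_energy(heart_rate: float, goal: str) -> float:
    """Map a pulse and a wellness goal to a desired energy level."""
    resting = 60.0
    arousal = min(max((heart_rate - resting) / 60.0, 0.0), 1.0)
    if goal == "calm":
        return max(arousal - 0.3, 0.0)  # nudge the listener downward
    if goal == "invigorate":
        return min(arousal + 0.3, 1.0)  # nudge upward for the workout
    return arousal                      # "focus": hold steady

def prescribe(tracks: list[Track], heart_rate: float, goal: str) -> Track:
    """Return the track whose energy is closest to the computed target."""
    target = target_energy(heart_rate, goal)
    return min(tracks, key=lambda t: abs(t.energy - target))

library = [
    Track("I'll Be Around", "The Spinners", 0.4),
    Track("Weightless", "Marconi Union", 0.1),
    Track("Born to Run", "Bruce Springsteen", 0.9),
]

# A slightly elevated pulse and a "calm" goal picks the mid-energy record.
print(prescribe(library, heart_rate=95, goal="calm").title)  # I'll Be Around
```

Everything the scorer knows about a song is a number, and everything it knows about the listener is a pulse. That is the narrowness I am objecting to.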
Would we ever be told to listen to a stressful piece of music? To spend time contemplating a disturbing painting?
Would we be told to go find a record we bought when we were seventeen, and haven’t listened to in ages, but which we just thought of as we remembered a long-lost friend from that time?
Or to listen to music as expression – an artist trying through music to connect with us?
Levitin writes:
AI will help here—not in writing music, but in selecting the songs and genres that meet both an individual’s tastes and the desired therapeutic and wellness goals.
But he evades the obvious question: why not have AI write the music? If the songs are meant to be so carefully targeted in their biometric contribution, couldn’t AI write the music much better than any human could? Why would AI choose “I’ll Be Around” for me when it could create something far more finely honed to my DNA and current physical state than The Spinners could manage?
If this is how we will listen to music, we don’t actually need musicians (or poets or painters) anymore. Just our data.
Neuroaesthetics can tell us a lot, and no doubt there is more to be discovered. But making our mix-tapes is a different thing altogether, beyond the psychologist’s ken.
Cross-posted on Substack: https://michaelrushton.substack.com/