Posts Tagged ‘research’


Music hacks and research questions

October 16, 2010

I’m spending part of this weekend at the Boston Music Hack Day in Cambridge, MA. Like lots of people, my ideas far outstrip my abilities and (especially!) the amount of time I have, so I thought I’d put some of them up here, in the hope that they may spark some ideas in other people.

Hacks:

Neophile: how musically alive are you? After the nth disappointment with online music licensing, and realizing that the power of the legacy record labels lies in their back catalogs, I started to wonder how much of the music people listen to is old and how much is new. It occurred to me that you could plot a histogram of ‘number of listens’ against release date, and different people would have different distributions. For example, some people might have a peak centred around the music that came out when they were 21, whereas people who seek out new music might have a curve that’s flat or increasing with time (Paul Lamere dubbed these people ‘musically dead’ and ‘musically alive,’ respectively). The MusicBrainz database includes release years for a large number of songs; with that and Last.fm scrobble data, it’s feasible to build this. I’d love to see it.
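The plotting side is only a few lines. Here’s a minimal sketch, assuming you’ve already matched your Last.fm scrobbles against MusicBrainz release years and dumped the results to a CSV (the file name and format here are invented):

```python
# Histogram of listens by release year. Assumes a CSV of
# (release_year, play_count) rows built from Last.fm scrobbles
# joined against MusicBrainz -- the file name is hypothetical.
import csv
from collections import Counter

import matplotlib.pyplot as plt

listens_by_year = Counter()
with open("scrobbles_with_years.csv") as f:
    for release_year, play_count in csv.reader(f):
        listens_by_year[int(release_year)] += int(play_count)

years = sorted(listens_by_year)
plt.bar(years, [listens_by_year[y] for y in years])
plt.xlabel("Release year")
plt.ylabel("Number of listens")
plt.title("Musically alive or dead?")
plt.show()
```

A flat or rising bar chart would mark you as musically alive; a lone peak parked at your college years, rather less so.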

Ransom Note: I really want a ‘musical ransom note’, where you can piece together the lyrics of a song using cut-up bits of other songs. The MusiXmatch lyrics API now provides half of that equation, but I’m not sure you can yet locate where in the audio a given lyric is sung, so this one might have to wait a bit.

Research questions:

Whose Telephone Is It? I was listening to “Teenage Kicks” (1978) by The Undertones recently, and I was struck that Feargal Sharkey sings about “the telephone,” whereas Lady Gaga, for example, sings about “my telephone.” Somewhere in the intervening decades, telephones went from being communal property to being individual property, and this is reflected in lyrics. So I idly wondered about using the MusiXmatch lyrics API to search for instances of the word “telephone,” and to plot the frequency of the preceding word (‘the,’ ‘my,’ ‘your’) over time. This is kind of a silly example, but there is real, interesting research to be done in analysing the corpus of music lyrics using digital methodologies, for example to track social change (my friend Jo Guldi, a historian at the University of Chicago and Harvard University, does this kind of work with historical documents).
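The counting itself is simple once you have dated lyrics in hand. A minimal sketch, assuming a corpus of (year, lyrics) pairs has already been fetched (the two entries below are just placeholders):

```python
# Count the word immediately preceding "telephone" in each song's
# lyrics, grouped by year. The corpus here is placeholder data; in
# practice it would come from a lyrics API.
import re
from collections import Counter, defaultdict

corpus = [
    (1978, "gonna call her on the telephone tonight"),
    (2009, "you keep breaking up on my telephone"),
]

pattern = re.compile(r"\b(\w+)\s+telephone\b", re.IGNORECASE)
counts = defaultdict(Counter)  # year -> {preceding word: count}
for year, lyrics in corpus:
    for word in pattern.findall(lyrics):
        counts[year][word.lower()] += 1

for year in sorted(counts):
    total = sum(counts[year].values())
    for word in ("the", "my", "your"):
        print(year, word, counts[year][word] / float(total))
```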

My projects:

I’ve just started re-learning how to code after many years of being strictly an experimentalist, so I have some bite-sized projects of my own that I’m working on. If you’re at the Hack Day and you’re interested in helping a Python n00b figure stuff out (like how to install matplotlib when I have Python 2.7, not 2.6), please feel free to find me.

What Makes a Music Geek? The distribution of musical knowledge. Paul Lamere of The Echo Nest created Namedropper, an online ‘game with a purpose’ that let people test their musical knowledge via their familiarity with artists across a range of genres. I have a hypothesis (half in jest, I admit) that there are a very few people who know an enormous amount about music and lots of people who know just a little: in other words, that musical knowledge in a population doesn’t follow a normal distribution about a mean, but rather a Pareto distribution. Paul was kind enough to send me his dataset from running Namedropper, and I’m planning to plot a histogram of the scores to test this hypothesis (and yes, I know it has pretty significant methodological limitations!).
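The plot itself is a nice first matplotlib exercise. A minimal sketch, assuming the scores arrive as one number per line in a text file (the file name is made up):

```python
# Histogram of Namedropper scores on log-log axes: a heavy,
# Pareto-ish tail shows up as a roughly straight line, while a
# normal distribution falls off much faster.
import numpy as np
import matplotlib.pyplot as plt

with open("namedropper_scores.txt") as f:
    scores = [float(line) for line in f if line.strip()]

# Log-spaced bins suit a distribution spanning orders of magnitude.
lo, hi = max(min(scores), 1.0), max(scores)
bins = np.logspace(np.log10(lo), np.log10(hi), 40)
plt.hist(scores, bins=bins)
plt.xscale("log")
plt.yscale("log")
plt.xlabel("Score")
plt.ylabel("Number of players")
plt.show()
```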

Pandora’s Redemption: A friend of mine recently tweeted, “Pandora just put John Mayer’s ‘Daughters’ on my 90s grrl rock station. There is no ‘thumbs down’ button large or ironic enough.” So I’m pretty sure my first ‘real’ music hack will be to use the Echo Nest Remix API to try to recreate John Mayer songs out of 90s riot grrl bands like Bikini Kill and L7.
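I haven’t written it yet, but the core Remix loop is short. Here’s a sketch built on the canonical beat-reversal example; the actual hack would also need a step that matches segments of one song against another, which isn’t shown, and depending on the Remix version the import may be echonest.audio instead:

```python
# The basic Echo Nest Remix pattern: load a track, get its beat
# segmentation from the Echo Nest analysis, reorder the beats, and
# re-encode. Reversing beats is the "hello world" of Remix.
import echonest.remix.audio as audio

track = audio.LocalAudioFile("daughters.mp3")  # hypothetical file name
beats = track.analysis.beats
beats.reverse()
audio.getpieces(track, beats).encode("daughters_redeemed.mp3")
```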

Screaming Death Metal: My technical background is in materials science, not music, and it was suggested to me that I could try to merge the two (thanks, Brian). I’ve done a lot of work with the mechanical properties of materials: putting samples of different substances (like metals, ceramics, glass and, in my case, human bone) into a machine that pulls or pushes on them and measures the amount of force required to deform and eventually break the sample, to create a stress-strain curve. The shape of the curve is characteristic of the material, and it should be possible to create an audibilization of the data that captures some of the features that are interesting to a materials scientist in a way that is discernible to the ear: in other words, creating the scream that a material makes when it’s stressed to failure.
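One simple way in, as a sketch: sweep a tone whose pitch follows the stress level as the strain increases, so the elastic rise, the yield plateau, and the fracture each get a distinct sound. The curve below is toy data, and the stress-to-pitch mapping constants are arbitrary choices of mine:

```python
# Audibilize a stress-strain curve: map stress to pitch along the
# strain axis and write the result as a WAV.
import wave

import numpy as np

# Toy curve: linear elastic rise, plastic plateau, fracture at 0.18 strain.
strain = np.linspace(0.0, 0.2, 2000)
stress = np.minimum(strain * 5000.0, 400.0) * (strain < 0.18)  # MPa

rate, seconds = 44100, 4.0
t = np.linspace(0.0, seconds, int(rate * seconds))
stress_t = np.interp(t, np.linspace(0.0, seconds, len(stress)), stress)
freq = 220.0 + 4.0 * stress_t                 # 220 Hz base, 4 Hz per MPa
phase = 2.0 * np.pi * np.cumsum(freq) / rate  # integrate frequency to phase
samples = (0.5 * np.sin(phase) * 32767).astype(np.int16)

w = wave.open("material_scream.wav", "w")
w.setnchannels(1)
w.setsampwidth(2)
w.setframerate(rate)
w.writeframes(samples.tobytes())
w.close()
```

Fracture arrives as a sudden drop back to the 220 Hz baseline; a yield point shows up as the pitch levelling off.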


Engineering and music at the Frontiers of Engineering

October 4, 2010

A couple of weeks ago, I was fortunate to be able to attend the National Academy of Engineering’s Frontiers of Engineering symposium on behalf of my day job. One of the sessions focused on Engineering and Music, organized by Daniel Ellis of Columbia University and Youngmoo Kim of Drexel University, and some of my notes are below. (Links in the titles go to PDFs of short papers by each speaker.)

Brian Whitman, The Echo Nest: Very Large-Scale Music Understanding

What does it mean to “teach computers to listen to music”? Whitman, co-founder and CTO of The Echo Nest, talked about the path to founding the company as well as some of its guiding principles. He discussed the company’s approach to learning about music, which mixes acoustic analysis of the music itself with information gleaned by applying natural language processing techniques to what people write on the Internet about songs, artists and albums. He shared their three precepts: “Know everything about music and listeners. Give (and sell) great data to everyone. Do it automatically, with no bias, on everything.” Finally, he ended on a carefully optimistic note: “Be cautious what you believe a computer can do…but data is the future of music.” Earlier this year, Whitman gave a related but longer talk at the Music and Bits conference, which you can watch here. [A disclosure: Regular readers of z=z will be aware that The Echo Nest is a friend of the blog.]

Douglas Repetto, Columbia University: Doing It Wrong

I felt a little for Repetto, who presented a short primer on experimental music for an audience of not-very-sympathetic engineers. He started with Alvin Lucier’s well-known piece, “I am sitting in a room,” and then played a homage made by one of his students, Stina Hasse. In Lucier’s original, he iteratively re-records himself speaking, until eventually the room’s resonance takes over and only the rhythms of his speech are discernible (more info). For Hasse’s take, she did the re-recording in an anechoic chamber; the absence of echo damped her voice, and her words evolved into staticky, sibilant chirps, probably as a result of the digital recording technology. Repetto presented several other works, making the case to the audience that there was a common experimental mindset: for both the artists whose work he was presenting and the researchers in the room, the basic strategy is to interrogate the world and see what you find out. Creativity, he argued, stems from a “let’s see what happens” attitude: “creative acts require deviations from the norm,” and “creative progress is born not of optimization, but of variance.”
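If you want to hear why the resonance always wins, the Lucier process is easy to simulate: each re-recording is, to first order, a convolution with the same room impulse response, so after enough passes the room’s resonant peaks swamp everything else. A toy sketch with a synthetic impulse response (a measured one works far better):

```python
# Toy simulation of "I am sitting in a room": repeatedly "re-record"
# a signal by convolving it with the same room impulse response.
# Low sample rate and a synthetic handful of echoes keep it fast.
import numpy as np

rate = 8000
rng = np.random.default_rng(0)
signal = rng.standard_normal(rate)        # stand-in for a spoken phrase

impulse = np.zeros(rate // 4)
impulse[0] = 1.0
for delay, gain in [(199, 0.6), (482, 0.4), (1061, 0.25)]:
    impulse[delay] = gain                 # a few fixed echoes = a "room"

for generation in range(20):
    signal = np.convolve(signal, impulse)[: len(signal)]
    signal /= np.abs(signal).max()        # re-level, like a tape recorder
# By now the spectrum has collapsed onto the room's resonant frequencies.
```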

Daniel Trueman, Princeton University: Digital Instrument Building and the Laptop Orchestra

Trueman, a professor of music, started off by talking about traditional acoustic instruments and the ‘fetishism’ of mechanics. Instruments are not, he stressed, neutral tools for expression: the physical constraints and connections of instruments shape how musicians think, the kind of music they play, and how they express themselves (for example, since many artists compose on the piano, the peculiarities of the instrument colour the music they create). But in digital instruments, there is nothing connecting the body to the sound; as Trueman put it, “It has to be invented. This is both terrifying and exhilarating.” Typically, the user performs some actions, which are transduced by sensors of some sort and then converted into sound. But the mapping between the sensor inputs and the resultant audible output is almost entirely under the control of the creator. Trueman presented some examples of novel instruments and techniques for this mapping, as well as some challenges and opportunities: for example, the physical interfaces of digital instruments tend to be a little ‘impoverished’ (consider how responsive an electric guitar is, by comparison), but these instruments can also communicate wirelessly with each other, for which there is no acoustic analogue.
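That freedom of mapping is easy to make concrete. Here’s a trivial sketch of my own (all the numbers are invented) showing the same sensor stream sent through two different mappings to pitch; neither is more ‘correct’ than the other, which is exactly Trueman’s point:

```python
# Two arbitrary mappings from a normalized sensor reading (0.0-1.0)
# to a pitch in Hz. In a digital instrument, this choice -- smooth
# glide versus snapping to a scale -- belongs entirely to the designer.
def glide(sensor):
    """Continuous: pitch tracks the sensor directly."""
    return 220.0 + 660.0 * sensor                 # 220-880 Hz sweep

PENTATONIC = [220.0, 261.6, 293.7, 329.6, 392.0, 440.0]  # A minor pentatonic

def snap(sensor):
    """Discrete: quantize the sensor onto a pentatonic scale."""
    index = min(int(sensor * len(PENTATONIC)), len(PENTATONIC) - 1)
    return PENTATONIC[index]

for reading in (0.0, 0.3, 0.7, 1.0):
    print(reading, glide(reading), snap(reading))
```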

Elaine Chew, University of Southern California: Demystifying Music and its Performance

Chew, with a background in both music and operations research, presented a number of her projects that use visualization and interaction to engage non-musicians with music. The MuSA.RT project visualizes music in real time on a spiral model of tonality, and she demonstrated it for us by playing a piece by the spoof composer PDQ Bach on a keyboard and showing us how its musical humour derived in part from ‘unexpected’ jumps in the notes, which were clearly visible. She also showed us her Expression Synthesis Project, which uses an interactive driving metaphor to demonstrate musical expressiveness. The participant sits at what looks like a driving video game, with an accelerator, brake, steering wheel and a first-person view of the road. The twist is that the speed of the car controls the tempo of the music: straightaways encourage higher speed and therefore a faster tempo, and tight curves slow the driver down. As well as giving non-musicians a chance to ‘play’ music expressively, the road map is itself an interesting visualization of the different tempi in a piece.

Some quotes from the panel discussion:

Repetto on trying to build physicality/viscerality into digital instruments: “Animals understand that when you hit something harder, it’s louder. But when you hit your computer harder, it stops working.”

Trueman on muscle memory: “You can build a typing instrument that leverages your typing skills to make meaningful music.”

Chew on Rock Band and other music games: “It’s not very expressive: the timing is fixed, with no room for expression. You have to hit the target—you don’t get to manipulate the music.” More generally, the panelists agreed that the democratization of the music experience and communal music experiences were a social good, regardless of the means.

Things I never expected to write on this blog: I am grateful to the National Academy of Engineering, IBM, and Olin College for sponsoring this post, however inadvertently.


Pay-for-downloads and self-fulfilling prophecies

June 15, 2010

A recent Guardian Music article discussed the evolution of payola for the social media age: companies that promise musicians Facebook friends or Twitter followers in exchange for cash, or that pay people to download songs:

One of the worst examples of a company taking advantage of desperate artists is a new Australian venture called Chartfixer (the clue is in the name). For $6,000, Chartfixer will crowd-source 1,000 downloaders to each buy a digital copy of an artist’s track from iTunes. After purchasing the track, the downloader can claim the cost back and obtain a reward of one dollar. In Australia, 1,000 sales can get you into the top 80, whereas 5,000 sales (which would cost $25,000) can buy you a potential top 20 hit.

While this makes any music lover fume, here’s the problem: it might work.

Earlier this year, Clive Thompson wrote an article for Wired describing the work of Duncan Watts and Matthew Salganik, at Yahoo Research, who performed a series of elegant experiments to address the question of whether songs in a social environment become popular due to their intrinsic merits or due to luck. Briefly, they created a pocket universe: a music site to which they uploaded 48 songs by unknown bands, which participants in the study could rate and download. They ran the experiment repeatedly, with new groups of people (nearly 13,000 in total) listening, rating and downloading the songs. Watts and Salganik found that certain songs would often rise to the top and certain others would fall to the bottom, but for most songs, the final ranking was unpredictable: the swirl and flow of ratings and social pressures would deposit them high in one test run and low in another. They concluded that about half of a song’s success could be attributed to its intrinsic appeal, while the other half was due to random early fluctuations up or down, which the social environment then amplified into a self-fulfilling prophecy.
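The dynamic is easy to simulate. In this toy sketch (my own construction, not the study’s actual model), each listener’s choice mixes a song’s fixed intrinsic appeal with its current share of downloads; run it a few times and the ‘charts’ come out different each time, even though the songs never change:

```python
# Toy cumulative-advantage market: 48 songs with fixed intrinsic
# appeal, 5000 listeners whose choices are half merit, half herd.
import random

def run_market(appeal, listeners=5000, social_weight=0.5):
    downloads = [1] * len(appeal)          # seed so no song starts at zero
    for _ in range(listeners):
        total = sum(downloads)
        weights = [(1 - social_weight) * a + social_weight * d / total
                   for a, d in zip(appeal, downloads)]
        winner = random.choices(range(len(appeal)), weights=weights)[0]
        downloads[winner] += 1
    return downloads

appeal = [random.random() for _ in range(48)]  # 48 songs, as in the study
for trial in range(5):
    charts = run_market(appeal)
    print("run", trial, "-> #1 song:", charts.index(max(charts)))
```

A pay-for-download scheme amounts to force-feeding the download counter at the start, which is exactly where a process like this is most sensitive.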

Where does this leave Chartfixer and a band that hires them? Well, for a market as small as Australia, a relatively modest amount of money might be enough to nudge a song upwards. Assuming the song isn’t completely terrible, this kind of pay-for-download scheme might bring it to the attention of listeners, at which point the self-fulfilling prophecy of success can take over. Whether this works in practice, of course, has yet to be shown. And the body behind the Australian charts has, unsurprisingly, already come out against the scheme.

Interestingly, Watts and Salganik followed this study with an even cleverer one that suggests that manipulating the ‘charts’ in this way does kind of work, but results in fewer downloads in total. As Clive put it: “If you lie about the merits of your product, you might suppress demand across your entire sector.” Gee, I wonder if that’s ever happened in music…?

Go read Clive’s full (and fascinating!) article on the Watts and Salganik studies here. Or download the research papers here and here.

Image: Australian Currency by Flickr user Krug6, reposted here under its Creative Commons license.


Music and mood

February 20, 2010

If you follow me on Twitter, you may have seen this tweet about a week ago:

Warning: Listening to @TheMagFields’ “All the Umbrellas” can induce psychosomatic cardiac fracture, even in asymptomatic individuals.

followed a day or so later by this:

I am declaring a temporary moratorium on the Magnetic Fields, the National, and the Mountain Goats for the sake of my emotional health.

One of the reasons why I love pop music is the perfect fusion of lyrics and music to create an enormous emotional impact. And these three artists are absolute masters: the cello strokes underlining the chorus in the aforementioned “All the Umbrellas in London,” the way John Darnielle’s voice reaches for and breaks on the high notes in “Woke Up New,” the world-weary timbre of Matt Berninger’s baritone in “Slow Show.” But my decision to take a break from three of my favourite artists was prompted by these words by that other aficionado of the three-minute pop song, Nick Hornby:

What came first, the music or the misery? People worry about kids playing with guns, or watching violent videos, that some sort of culture of violence will take them over. Nobody worries about kids listening to thousands, literally thousands of songs about heartbreak, rejection, pain, misery and loss. Did I listen to pop music because I was miserable? Or was I miserable because I listened to pop music?

(Some of you may remember this as the opening soliloquy in the film version of High Fidelity.)

I decided that, much as I love these three artists, I was on track to test out that second hypothesis. And I figured it wasn’t the kind of experiment that would get ethical approval.

In terms of music and mood (‘affect regulation’), there are two general approaches: listening to music that aligns with your mood, or listening to music to distract you or change your mood. There’s some evidence of gender differences: women may be more likely to listen to music that allows them to focus on their negative mood, while men may be more likely to choose music that lets them overcome it. But none of this seems to be terribly well understood right now. A new iPhone app, MoodAgent, classifies your music and allows you to create playlists based on mood. It’s only been out for a month or so, and already a psychology professor has announced that he’s planning on using it as a tool to examine the relationships between music and emotions.

(focus) MP3: The Magnetic Fields – All the Umbrellas in London [buy]

(distraction) MP3: Mission of Burma – 1, 2, 3, Partyy! [buy]


Upcoming: Dark Was the Night

January 7, 2009


So looking forward to this. Bryce and Aaron Dessner, of The National, curated a 2-CD release called Dark Was the Night. It’s being released by the “Red Hot” organization, which has put out a dozen or so albums to benefit AIDS research and related causes. The lineup looks amazing – as well as an unreleased National track, it includes a collaboration between Feist and Ben Gibbard, plus Spoon, the New Pornographers, the Kronos Quartet, and more. The release date is set for February 17th on Beggars Banquet.

MP3: Kirsty MacColl and the Pogues – Miss Otis Regrets [from Red Hot + Blue]