Posts Tagged ‘future of music’

Rethink Music: the structure of revolutions

April 25, 2011

In The Structure of Scientific Revolutions, Thomas Kuhn famously wrote about science undergoing “paradigm shifts”: the idea that scientific change occurs in sudden upheavals. In my experience, though, it’s rarely that dramatic. What I’ve observed is something more like this: at a conference, someone presents evidence for an alternative explanation of the data. Some people listen, some scoff, and some go off to do more experiments. The next year, more people are on the side of the ‘novel’ explanation. Repeat for another year or two, and everyone is on board with the new idea.

Watching the music industry evolve and struggle and try to reinvent itself, on the other hand, reminds me of what Kuhn wrote about the humanities: “[A] student in the humanities has constantly before him a number of competing and incommensurable solutions to these problems, solutions that he must ultimately examine for himself.”

The Rethink Music conference, starting today in Boston, aims to give “creators, academics, and industry professionals” a chance to think about and discuss some of these solutions for the music industry. A collaboration between the Berklee College of Music, Harvard’s Berkman Center for Internet and Society, and MIDEM, Rethink Music’s goal is to foster a dialogue between the ‘traditional’ music industry and the artists, researchers, and entrepreneurs who are exploring a musical universe that’s not a holdover from moving around shiny silver discs. The high-powered speaker lineup suggests that Rethink Music is on track: it includes artist managers, lawyers, researchers (including Lawrence Lessig and Nancy Baym), CEOs of a host of companies including SonicBids and The Echo Nest, Kickstarter founder Yancey Strickler, and RIAA head Cary Sherman sharing a stage with Google’s senior copyright counsel Fred von Lohmann, formerly of the EFF (I have high hopes for a deathmatch).

Rethink Music is quite unusual in how it’s bringing people from across the spectrum together. As a counterexample, at SXSW Interactive this year, I went to two panel discussions around metadata: the first featured researchers from UC Berkeley, and the second was organized by a representative of NARM (the music industry trade organization). Even though both panels were nominally on the same topic, they were worlds apart: one group was talking about things like crowdsourcing taxonomies of musical knowledge, and the other group was talking about linking MP3s with the release dates of albums. So I’m excited to see Larisa Mann, one of the researchers from Berkeley, on the Rethink Music lineup.

Unsurprisingly, perhaps, there is already some evidence of friction in this uneasy alliance of interests. Wayne Marshall, a DJ and a researcher in ethnomusicology at MIT, withdrew from the conference over the boilerplate language of the speaker contract (you can read his letter to the conference organizers here). Articles on Hypebot and Mashable took issue with the planned release of an ‘instant album’ by Amanda Palmer, Neil Gaiman, Ben Folds, and Damien Kulash of OK Go (Palmer’s response is here). But of course, the tensions are likely to be what makes Rethink Music an interesting few days.

Wanted: a way to aggregate streaming tracks

December 3, 2010

I’ve decided that I really want a mashup of exfm, Shuffler.fm, and delicious, with a dash of smart playlisting thrown in.

Here’s the problem: every day I find cool streaming music in lots of different places. SoundCloud. YouTube. Tumblr. But most of it I listen to once, at most, because listening to streaming music in an atomized form is a pain. Having to choose and click on a new song every three minutes might be fine for an ADD teenager, but I don’t want my music listening to be completely interrupt-driven. I just want a continuous stream of music I like (and judging by the continuing popularity of online and terrestrial radio, and the love for Shuffler, I’m not alone).

In an MP3-centric world, I’ve dealt with the increasingly decentralized creation and distribution of music by, in essence, centralizing it: downloading MP3s into my library and using that as an aggregator. And exfm, which I just started using, is pretty good at getting around the downloading issue. But as more and more music is straight-up streaming, how do we make those tracks part of our ‘virtual library,’ so that we can find them, embed them in playlists, and otherwise listen at will?

What I really want to be able to do is this: every time I find a streaming track I’m interested in (whether on Tumblr, YouTube, SoundCloud or anywhere else), I flag it as part of my ‘library’, the way delicious does for bookmarks or exfm does for MP3s. Note that, unlike delicious, I don’t want to manually tag it. Because, well, I’m lazy. But also because I either know the song, and I can classify it in ways I can’t easily articulate into a folksonomy, or I don’t know it, and can’t classify it at all. So I’d really like some tools to automagically organize it into playlists in a range of ways. And then I’d like to just be able to listen to a Shuffler-like continuous stream that pulls together my flagged streaming tracks, my own MP3s, tracks from streaming services like last.fm, and more.

Oh, and I’d also like a pony. Or maybe a unicorn.
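To put the non-pony part of that wish in slightly more concrete terms, here’s a minimal sketch (in Python) of the kind of ‘virtual library’ I have in mind. Everything in it is hypothetical (the flag() hook, the fields, the purely local list); a real version would have to talk to each service’s own embed or API machinery.

import random
from dataclasses import dataclass

@dataclass
class Track:
    url: str         # a SoundCloud, YouTube, or Tumblr link
    source: str      # where I stumbled across it
    title: str = ""  # ideally filled in automatically, not by me

# my 'virtual library': flagged streaming tracks (and, eventually, my own MP3s too)
library = []

def flag(url, source, title=""):
    """Bookmark a streaming track, delicious-style, with no manual tagging."""
    library.append(Track(url=url, source=source, title=title))

def continuous_stream():
    """A Shuffler-like endless stream drawn from everything I've flagged."""
    while True:
        yield random.choice(library)

flag("https://soundcloud.com/some-artist/some-track", source="tumblr")

The hard part, of course, is everything this sketch waves away: resolving what the tracks actually are, and then organizing them into playlists automagically.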

What do you think?

This post is the result of a conversation this morning with Jason Herskowitz, prompted by a question from Mark Mulligan.

Brian Whitman, “Music in the Time of Data”

November 23, 2010

Brian Whitman, the co-founder and CTO of The Echo Nest, gave a great talk at Olin College in Needham, MA last week, as part of the Technology and Culture Seminar Series.* His talk was a combination of personal narrative, a recent history of computer-generated music, and a look into the future of the interaction of music and technology.

*For those of you who only know me through this blog or Twitter: I’m on the faculty of Olin College and an organizer of the seminar series, and I did the intro for this talk. And yes, I have the best job ever.

Engineering and music at the Frontiers of Engineering

October 4, 2010

A couple of weeks ago, I was fortunate to be able to attend the National Academy of Engineering’s Frontiers of Engineering symposium on behalf of my day job. One of the sessions focused on Engineering and Music, organized by Daniel Ellis of Columbia University and Youngmoo Kim of Drexel University, and some of my notes are below. (Links in the titles go to PDFs of short papers by each speaker.)

Brian Whitman, The Echo Nest: Very Large-Scale Music Understanding

What does it mean to “teach computers to listen to music”? Whitman, co-founder and CTO of The Echo Nest, talked about the path to founding the company as well as some of its guiding principles. He discussed the company’s approach to learning about music, which mixes acoustic analysis of the music itself with information gleaned by applying natural language processing techniques to what people write on the Internet about the songs, artists, or albums. He shared the company’s three precepts: “Know everything about music and listeners. Give (and sell) great data to everyone. Do it automatically, with no bias, on everything.” He ended on a carefully optimistic note: “Be cautious what you believe a computer can do…but data is the future of music.” Earlier this year, Whitman gave a related but longer talk at the Music and Bits conference, which you can watch here. [A disclosure: Regular readers of z=z will be aware that The Echo Nest is a friend of the blog.]
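As a toy illustration of that blend of signals (and emphatically not The Echo Nest’s actual pipeline), here’s a Python sketch that joins a couple of invented acoustic attributes with crude ‘cultural’ features pulled from text written about an artist. Every field name and number below is made up for the example.

from collections import Counter

def text_features(documents):
    """Crude cultural features: term counts from blog posts and reviews about an artist."""
    words = [w.lower().strip(".,!?") for doc in documents for w in doc.split()]
    return Counter(words)

def combined_profile(acoustic, documents):
    """Join acoustic analysis (tempo, loudness, ...) with text-derived descriptors."""
    profile = dict(acoustic)  # e.g. {"tempo": 128.0, "loudness": -7.2}
    for term, count in text_features(documents).most_common(5):
        profile["term:" + term] = count
    return profile

profile = combined_profile(
    {"tempo": 128.0, "loudness": -7.2},
    ["A relentless, danceable synth-pop single",
     "Danceable and icy, like early New Order"],
)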

Douglas Repetto, Columbia University: Doing It Wrong

I felt a little sorry for Repetto, who presented a short primer on experimental music to an audience of not-very-sympathetic engineers. He started with Alvin Lucier’s well-known piece, “I am sitting in a room,” and then played a homage made by one of his students, Stina Hasse. In Lucier’s original, he iteratively re-records himself speaking, until eventually the room’s resonance takes over and only the rhythms of his speech are discernible (more info). For Hasse’s take, she did the re-recording in an anechoic chamber; the absence of echo damped her voice, and her words evolved into staticky, sibilant chirps, probably as a result of the digital recording technology. Repetto presented several other works, making the case to the audience that there was a commonality of experimental mindset: for both the artists whose work he was presenting and the researchers in the room, the basic strategy is to interrogate the world and see what you find out. Creativity, he argued, stems from a “let’s see what happens” attitude: he holds that “creative acts require deviations from the norm, and that creative progress is born not of optimization, but of variance.”
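The Lucier process is easy to caricature in code, for anyone curious: repeatedly ‘play’ a recording into a room (here, convolution with an invented impulse response) and re-record the result until the room’s resonances dominate. This is a toy numpy sketch, not how either Lucier or Hasse actually worked.

import numpy as np

def rerecord(signal, room_ir, passes=30):
    """Crudely simulate 'I am sitting in a room': re-record the playback over and over."""
    out = signal.astype(float)
    for _ in range(passes):
        out = np.convolve(out, room_ir)[:len(signal)]  # 'play' it back into the room
        out /= np.max(np.abs(out)) + 1e-12             # keep the levels sane
    return out

# one second of noisy stand-in 'speech' and a made-up, decaying impulse response
fs = 8000
speech = np.random.randn(fs)
room_ir = np.exp(-np.linspace(0, 8, 400)) * np.random.randn(400)
result = rerecord(speech, room_ir)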

Daniel Trueman, Princeton University: Digital Instrument Building and the Laptop Orchestra

Trueman, a professor of music, started off by talking about traditional acoustic instruments and the ‘fetishism’ of mechanics. Instruments are not, he stressed, neutral tools for expression: the physical constraints and connections of instruments shape how musicians think, the kind of music they play, and how they express themselves (for example, since many artists compose on the piano, the peculiarities of the instrument colour the music they create). But in digital instruments, there is nothing connecting the body to the sound; as Trueman put it, “It has to be invented. This is both terrifying and exhilarating.” Typically, the user performs some actions, which are transduced by sensors of some sort and then converted into sound. But the mapping between the sensor inputs and the resultant audible output is pretty much under the control of the creator. Trueman presented some examples of novel instruments and techniques for this mapping, as well as some challenges and opportunities: for example, the physical interfaces of digital instruments tend to be a little ‘impoverished’ (consider how responsive an electric guitar is, by comparison), but these instruments can also communicate wirelessly with each other, for which there is no acoustic analog.
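To make the arbitrariness of that mapping concrete, here is a deliberately simple, invented example: a tilt sensor chooses the pitch and a pressure sensor the loudness, but the builder could just as easily have wired the same inputs to timbre, tempo, or nothing at all.

def map_sensor_to_sound(tilt, pressure):
    """One arbitrary mapping from sensor readings to sound parameters.

    tilt: -1.0 to 1.0 (say, from an accelerometer)
    pressure: 0.0 to 1.0 (say, from a force-sensitive resistor)
    """
    midi_note = 60 + round(tilt * 12)                 # tilt sweeps an octave around middle C
    frequency = 440.0 * 2 ** ((midi_note - 69) / 12)  # standard MIDI-note-to-Hz conversion
    amplitude = pressure ** 2                         # squaring makes light touches quieter
    return frequency, amplitude

print(map_sensor_to_sound(tilt=0.25, pressure=0.8))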

Elaine Chew, University of Southern California: Demystifying Music and its Performance

Chew, who has a background in both music and operations research, presented a number of her projects that use visualization and interaction to engage non-musicians with music. The MuSA.RT project visualizes music in real time on a spiral model of tonality, and she demonstrated it for us by playing a piece by the spoof composer PDQ Bach on a keyboard and showing how its musical humour derived in part from ‘unexpected’ jumps in the notes, which were clearly visible on the spiral. She also showed us her Expression Synthesis Project, which uses an interactive driving metaphor to demonstrate musical expressiveness. The participant sits at what looks like a driving video game, with an accelerator, brake, steering wheel, and a first-person view of the road. The twist is that the speed of the car controls the tempo of the music: straightaways encourage higher speed and therefore a faster tempo, and tight curves slow the driver down. As well as giving non-musicians a chance to ‘play’ music expressively, the road map is itself an interesting visualization of the different tempi in a piece.
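At its heart, the driving metaphor is a speed-to-tempo mapping, which might look something like the sketch below. This is my guess at the general flavour, not Chew’s actual implementation, and all the numbers are invented.

def tempo_from_speed(speed_kmh, base_tempo_bpm=100.0, base_speed_kmh=60.0):
    """Map the car's speed onto a playback tempo: drive faster, hear faster music."""
    speed = max(10.0, min(speed_kmh, 160.0))  # clamp to a plausible driving range
    return base_tempo_bpm * (speed / base_speed_kmh)

for speed in (30, 60, 120):  # tight curve, cruising, straightaway
    print(speed, "km/h ->", round(tempo_from_speed(speed), 1), "BPM")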

Some quotes from the panel discussion:

Repetto on trying to build physicality/viscerality into digital instruments: “Animals understand that when you hit something harder, it’s louder. But when you hit your computer harder, it stops working.”

Trueman on muscle memory: “You can build a typing instrument that leverages your typing skills to make meaningful music.”

Chew on Rock Band and other music games: “It’s not very expressive: the timing is fixed, with no room for expression. You have to hit the target—you don’t get to manipulate the music.” More generally, the panelists agreed that the democratization of the music experience and communal music experiences were a social good, regardless of the means.

Things I never expected to write on this blog: I am grateful to the National Academy of Engineering, IBM, and Olin College for sponsoring this post, however inadvertently.

Thinking about playlists

September 5, 2010

I love playlists. I live and die by them, and make new ones almost daily. My car doesn’t have an MP3 input and I have a daily commute, so a good chunk of my music listening is in the form of burned CDs—de facto sub-75-min playlists. And I realize it’s antediluvian, but I still trade mix CDs with many of my friends (via snail mail, no less; I think we all love the charm of the hand-made packages in the post), and those CDs are one of my favourite modes of music discovery.

Almost all the playlists I make are custom, largely by necessity: the songs are usually hand-selected, and they are frequently also hand-ordered. In time for this weekend’s London Music Hack Day, The Echo Nest debuted a powerful and flexible set of tools for algorithmically generating playlists, and I did a little gedanken experiment to compare the playlists those tools can currently generate with the kinds of playlists I make.

Here are some examples of playlists I’ve made or updated recently, ordered roughly from least to most amenable to automating:

New music: I have a playlist called ‘Current’ where I throw recently downloaded music for further listening.

Albums: If I download an entire album, I’ll keep it together, at least for the first few listens (and then I decide that I really only like “Sprawl II” off the new Arcade Fire album).

Artists: Today I will listen to every Elliott Smith song I own.

Playlists by geography: I have a playlist called ‘CanCon’ that I made for a friend of mine who just moved to Canada. Amazingly, this looks reasonably easy to do with Echo Nest’s new APIs, although it might require a bit of careful tweaking to include my hometown of Toronto, since it’s well south of the 49th parallel.

Workout playlists: Recently, I’ve been doing musical sprint intervals: moderately high-tempo songs intermixed with short, loud, fast songs by punk bands like the Ramones or Pansy Division. (A rough sketch of how this one might be automated appears after this list.)

Playlists of bands with upcoming shows: The Boston-based concert-tracking service Tourfilter has a monthly residency at a local bar, where I DJ’ed a few months ago. All of the songs are by artists playing in the Boston area in the next month or so.

Songs I can play on bass: Sadly, a very short and slowly growing list right now (“Green Onions,” “Seven Nation Army,” and a handful more).

My friends’ bands: A playlist of music by people I know.

Playlists for other people: Playlists or mix CDs I’ve made for friends, of music that I think they’ll like, based on what I know of their tastes.

Playlists by mood: Usually not just ‘happy’ or ‘sad,’ though. I have a recent playlist I made as a soundtrack when I was feeling melancholy and restless (lots of Waterboys, Sea Wolf, Frightened Rabbit).

Playlists by theme: As an example, I made a playlist of ‘embarrassing’ music for a friend of mine, which was mostly songs at the intersection of nerdy, funny and bawdy (think The Bloodhound Gang’s “The Bad Touch”).

‘Best of’ lists: Like most music geeks, I like making lists of the stuff that I like best (although I guess if I was a real music geek, I’d describe it as ‘the best music’)

What do I feel like?: Quasi-random concatenations of whatever I feel like listening to on a given day.
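Here’s the promised sketch for the workout-playlist case, probably the most obviously automatable of the bunch: filter a local library by tempo and length, then alternate steady songs with sprint songs. The metadata fields and thresholds are all invented, and this doesn’t use The Echo Nest’s actual APIs.

import random

# a hypothetical local library with per-track metadata; a real service's fields would differ
library = [
    {"title": "Steady One", "artist": "Band A", "bpm": 125, "seconds": 240},
    {"title": "Blitzkrieg Bop", "artist": "Ramones", "bpm": 176, "seconds": 132},
    # ...
]

def sprint_intervals(tracks, blocks=5):
    """Alternate moderately high-tempo songs with short, fast sprint songs."""
    steady = [t for t in tracks if 110 <= t["bpm"] < 150]
    sprints = [t for t in tracks if t["bpm"] >= 150 and t["seconds"] <= 180]
    playlist = []
    for _ in range(blocks):
        if steady:
            playlist.append(random.choice(steady))
        if sprints:
            playlist.append(random.choice(sprints))
    return playlist

workout = sprint_intervals(library)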

The first half are pretty straightforward. The second half get a little tougher—some of them are nearly algorithmic, but only if you happen to be me. The thought processes behind the last two are opaque even if you are me. Coming up with those last few seems very close to a musical Turing Test; not that I’d put that beyond the ability of people like the Echo Nesters, although there might be a few existential crises along the way.

Women, digital distribution, and visual image

August 26, 2010

Another crosspost, this one from Music Think Tank Open; it was written as a companion to the zed equals zee post, “Women in Music: the lost generation.”

As a fan, I’ve been excited for the rise of digital distribution and for the direct interaction of artists and listeners because it means I’m more likely to hear great music that I like. It means that I get to decide what I want to listen to, rather than having a slew of A&R folks and radio programmers make the decisions for me.

But lately, I’ve been thinking about how record labels are not only gatekeepers for the music itself, but also for the visual image of artists.

I get it. Artists are performers, and looks matter.

But it’s pretty clear when you look at Top 40 artists that the standards for successful female artists and successful male artists are not the same. Music industry executives are predominantly male, and their professional tastes are, frankly, boring. So female artists have to be conventionally attractive, but male artists can look like Nickelback—middling-attractive guys (whose videos are then stuffed full of women in bikinis).

Deviate from these norms, and you face opposition. Roadrunner tried to get Amanda Palmer to re-edit her “Leeds United” video because it contained a shot of her exposed belly that didn’t conform to the taut, airbrushed Britney-Beyoncé-Lady Gaga standards. (She and her fans rebelled, and ultimately won. If you haven’t seen the video, go watch it. Amanda Palmer is undeniably hot, whatever her former label thinks.)

How many awesome female artists are there that didn’t get signed or supported because they didn’t fit the narrow visual criteria of the guy on the other side of the desk? Janet Weiss, of Sleater-Kinney, talks about how photographers wanted the band to look playful and sweet, and to dress them up like they were dolls. She says, “We wanted to look like the Stones, to be cool, to be tough, to be heroes. Why don’t women get to be heroes?”

I want female artists to be heroes. Or anything else they want to be. And I’m delighted that it might finally happen.

This post is adapted from one at zed equals zee, a music, technology and culture blog. debcha is a music fan, academic, and geek (not necessarily in that order). She also writes the zed equals zee companion Tumblr, and you can follow her on Twitter.

SXSW Interactive 2011: music panels worth checking out

August 24, 2010

Crossposted from Hypebot. This post complements the previous zed equals zee post, which focuses on more technically oriented panels.

Thinking of heading to Austin in March? Before the South by Southwest Music Festival, there’s also South by Southwest Interactive, a conference that focuses on technology, media, marketing and culture. It’s no surprise, therefore, that the evolution of the music industry is a hot topic at SXSWi.

The program is partially crowdsourced: people who are interested in presenting at the conference submit proposals, which are then made available for the public to vote on and give feedback about. Voting opened last week and runs until August 27th (you do need to register to vote, but it’s quick and easy).

Here are six music / tech panel proposals that are intriguing:

Digital Strategies for Optimizing the Fan / Artist Connection
Pretty much what it says on the package: this panel will focus on the tools to measure and ‘optimize’ fan engagement.

Neither Moguls nor Pirates: Grey Area Music Distribution
Heitor Alvelos, of the University of Porto, argues that music distribution is typically seen as bipolar: music is either legal and paid for, or it’s piracy. Alvelos looks at other models of music creation and distribution besides these two.

Free Is Dead. Fan Experiences are Priceless
This is a topic that’s close to my heart (I wrote a related MTT post, “What Are Music Fans Willing to Pay For?”). Chris McDonald of Indiefeed focuses on the ‘experience economy’: providing unique experiences that fans are willing to pay for.

Caching in on Collaboration: Allee Willis and Pomplamoose
Heather Gold moderates a discussion between artists Allee Willis and Pomplamoose, who collaborate on both songwriting and visuals.

A Digital Rolling Stone: Disruptive Technology & Music
This panel has a pretty broad brief: to “analyze the current digital ecosystem and reveal creative and innovative solutions to utilize digital technologies in music that progress with and reflect culture,” but the proposer adds that they plan to present research as a case study, so that might make it a little more focused.

The Positive Effects of Music Tech
Samantha Murphy, of The Highway Girl, plans to discuss ways in which independent artists have been empowered by new technologies around music, particularly those that simplify tasks like tour planning or clearing rights for cover songs.

I’ve highlighted another eight panels that are more technically oriented over at my own blog, zed equals zee.

Want more? Try searching the list of Interactive panel proposals using ‘music’ as a keyword. Know of a panel that belongs in this list? Feel free to add it in the comments.

Hope to see you in Austin!

Deb Chachra is a music fan, academic, and geek (not necessarily in that order). She writes zed equals zee, a blog focusing on the interaction of music, technology and culture, as well as the zed equals zee Tumblr. She’s debcha on Twitter, Last.fm, and elsewhere.
