Posts Tagged ‘the echo nest’


Best of 2010: debcha’s tops in [Boston music] tech

December 23, 2010

 

Cross-posted from Boston’s best local music blog, Boston Band Crush.

 

Boston is a music town. And Boston is a tech town. So it’s hardly surprising that Boston and Camberville produce an enormous amount of interesting stuff at the intersection of music and technology. Here are five of my favorite examples from this year:

1. Mashup Breakdown

Benjamin Rahn, of Cambridge, created an addictive site that visualizes the use of samples in songs, and launched it with (of course) Girl Talk’s new album, All Day. As you play tracks from the album, each of the samples used is highlighted and identified. The best part? It’s an ongoing project. So if there’s a song that you’ve always been curious about, go to the site and find out how you can participate.
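
I have no idea how Rahn actually built the site, but the heart of it is easy to picture: a timeline of which samples are audible when, queried against the current playback position. A toy sketch in Python (the artists, titles and timings below are placeholders, not data from Mashup Breakdown):

```python
# Toy sketch of a sample timeline: which samples are audible at a given moment?
# Artists, titles and timings are placeholders, not data from Mashup Breakdown.
samples = [
    {"artist": "Artist A", "title": "Song A", "start": 0.0, "end": 35.0},
    {"artist": "Artist B", "title": "Song B", "start": 5.5, "end": 48.0},
    {"artist": "Artist C", "title": "Song C", "start": 30.0, "end": 62.5},
]

def active_samples(t):
    """Return the samples sounding at playback time t (in seconds)."""
    return [s for s in samples if s["start"] <= t < s["end"]]

for s in active_samples(33.0):
    print(f'{s["artist"]} - {s["title"]}')
```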

2. The Swinger

It’s been a great year for Somerville’s The Echo Nest, a music intelligence company. They closed a major round of funding and provided the brainpower behind a host of great projects, like MTV’s phenomenal Music Meter site (yes, I know you’re thinking “MTV? Doing something worthwhile with music? Really?” Yes, really. Check it out.) And they had a viral hit on their hands with The Swinger, a bit of computer code that can make any song swing by automagically time-stretching the first half of each beat and shortening the second. Check out some examples here.
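
The original Swinger ran on the Echo Nest’s own remix tools; the sketch below is my rough approximation of the same trick using librosa instead, so treat it as an illustration of the idea rather than the Echo Nest’s code:

```python
# Rough approximation of The Swinger using librosa (not the Echo Nest's code):
# lengthen the first half of every beat and shorten the second half.
import librosa
import numpy as np
import soundfile as sf

y, sr = librosa.load("song.wav", sr=None)                 # any audio file
_, beats = librosa.beat.beat_track(y=y, sr=sr, units="samples")

swing = 2 / 3                    # first half of each beat ends up 2/3 of its length
out = [y[: beats[0]]]            # audio before the first detected beat
for a, b in zip(beats[:-1], beats[1:]):
    mid = (a + b) // 2
    # rate < 1 slows a segment down (longer); rate > 1 speeds it up (shorter)
    out.append(librosa.effects.time_stretch(y[a:mid], rate=0.5 / swing))
    out.append(librosa.effects.time_stretch(y[mid:b], rate=0.5 / (1 - swing)))
out.append(y[beats[-1]:])        # audio after the last detected beat

sf.write("song_swung.wav", np.concatenate(out), sr)
```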

3. The Toscanini Gestural Interface

Boston hackers Lindsey Mysse and Robby Grodin showed off the Toscanini Gestural Interface (named after the conductor, not the ice cream) at the most recent Boston Music Hack Day. It’s a watch that turns movement into music (via Max/MSP commands). See it in action in the video here.
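
I don’t know exactly how Mysse and Grodin wired theirs up, but hacks like this usually have the same shape: read the motion sensor, ship the numbers to Max/MSP (typically as OSC messages), and let the Max patch decide how they become notes. A hypothetical sketch of the sending side, with a simulated accelerometer standing in for the real hardware:

```python
# Hypothetical sketch of the "watch" side of a gestural instrument: read motion
# data and forward it to a Max/MSP patch as OSC messages. get_accelerometer()
# is a simulated stand-in for whatever the real hardware provides.
import math
import random
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)   # Max patch listening on [udpreceive 7400]

def get_accelerometer():
    """Stand-in for the real sensor: return simulated (x, y, z) readings."""
    return (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))

for _ in range(200):
    x, y, z = get_accelerometer()
    energy = math.sqrt(x * x + y * y + z * z)
    pitch = int(60 + 12 * max(-1.0, min(1.0, x)))    # tilt -> pitch, around middle C
    velocity = min(127, int(energy * 60))            # motion energy -> loudness
    client.send_message("/note", [pitch, velocity])  # the Max patch maps this to sound
    time.sleep(0.05)
```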

4. Dance Central and Rock Band 3

Across from the Middle East, an unremarkable office building houses the giant of music games, Harmonix. While they face an uncertain future, they released not one, but two incredible games this year. You’re probably already spending your evenings rocking out or getting down in your living room.

5. Another Green World (33 1/3 Books)

What possible relationship can a book about a 35-year-old Brian Eno album have with the Boston music/tech scene? Geeta Dayal, a Boston-based arts critic and MIT grad, wrote a short but brilliant book that investigates Eno’s 1975 album, which is deeply rooted in technology and technical concepts, especially in the field of cybernetics (defined and named by MIT professor Norbert Wiener in 1948). Even if you’re not a techie, it’s worth a read for how it illuminates one artist’s creative process.

 

 

Deb Chachra (debcha) writes zed equals zee, a Cambridge-based blog about music and technology, and curates the associated Tumblr. You can catch up with her at shows around the city, or you can just follow her on Twitter.

Brian Whitman, “Music in the Time of Data”

November 23, 2010

Brian Whitman, the co-founder and CTO of The Echo Nest, gave a great talk at Olin College in Needham, MA last week, as part of the Technology and Culture Seminar Series.* His talk was a combination of personal narrative, a recent history of computer-generated music, and a look into the future of the interaction of music and technology.

*For those of you who only know me through this blog or Twitter: I’m on the faculty of Olin College and an organizer of the seminar series, and that’s me doing the intro. And yes, I have the best job ever.


Engineering and music at the Frontiers of Engineering

October 4, 2010

A couple of weeks ago, I was fortunate to be able to attend the National Academy of Engineering’s Frontiers of Engineering symposium on behalf of my day job. One of the sessions focused on Engineering and Music, organized by Daniel Ellis of Columbia University and Youngmoo Kim of Drexel University, and some of my notes are below. (Links in the titles go to PDFs of short papers by each speaker.)

Brian Whitman, The Echo Nest: Very Large-Scale Music Understanding

What does it mean to “teach computers to listen to music”? Whitman, co-founder and CTO of The Echo Nest, talked about the path to founding the company as well as some of its guiding principles. He discussed the company’s approach to learning about music, which mixes acoustic analysis of the music itself with information gleaned by applying natural language processing techniques to what people are writing on the Internet about the songs, artists or albums. He shared their three precepts: “Know everything about music and listeners. Give (and sell) great data to everyone. Do it automatically, with no bias, on everything.” Finally, he ended on a carefully optimistic note: “Be cautious what you believe a computer can do…but data is the future of music.” Earlier this year, Whitman gave a related but longer talk at the Music and Bits conference, which you can watch here. [A disclosure: Regular readers of z=z will be aware that The Echo Nest is a friend of the blog.]
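
The Echo Nest doesn’t publish its internals, but the flavour of the approach is easy to sketch: describe each artist both by audio features and by the words the web uses about them, then blend the two when judging similarity. A toy illustration (all numbers invented, and emphatically not their model):

```python
# Toy illustration of blending acoustic analysis with text-derived ("cultural")
# description when comparing artists. All numbers are invented; this is not
# the Echo Nest's model, just the general shape of the idea.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Acoustic features, e.g. normalized tempo, loudness, "energy"
audio = {"Artist A": np.array([0.8, 0.7, 0.9]),
         "Artist B": np.array([0.7, 0.6, 0.8])}

# Term weights mined from web text about each artist (e.g. tf-idf over blog posts)
terms = {"Artist A": np.array([0.9, 0.1, 0.4]),
         "Artist B": np.array([0.8, 0.2, 0.5])}

def similarity(a, b, w_audio=0.5):
    """Blend acoustic and text-derived similarity with a tunable weight."""
    return w_audio * cosine(audio[a], audio[b]) + (1 - w_audio) * cosine(terms[a], terms[b])

print(similarity("Artist A", "Artist B"))
```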

Douglas Repetto, Columbia University: Doing It Wrong

I felt a little for Repetto, who presented a short primer on experimental music for an audience of not-very-sympathetic engineers. He started with Alvin Lucier’s well-known piece, “I am sitting in a room,” and then played a homage made by one of his students, Stina Hasse. In Lucier’s original, he iteratively re-records himself speaking, until eventually the resonance takes over and only the rhythms of his speech are discernible (more info). For Hasse’s take, she did the re-recording in an anechoic chamber; the absence of echo damped her voice, and her words evolved into staticky sibilant chirps, probably as a result of the digital recording technology. Repetto presented several other works of music, making the case to the audience that there was a commonality of experimental mindset: for both the artists whose work he was presenting and the researchers in the room, the basic strategy was to interrogate the world and see what you find out. Creativity, he argued, stems from a “let’s see what happens” attitude: “creative acts require deviations from the norm, and that creative progress is born not of optimization, but of variance.”
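
A digital shorthand for what happens in Lucier’s piece: each re-recording is (roughly) a convolution of the signal with the room’s impulse response, so repeating it lets the room’s resonances pile up until they swamp the speech. A rough simulation, assuming you have a speech recording and a room impulse response on hand:

```python
# Rough simulation of "I am sitting in a room": repeatedly convolve a recording
# with a room impulse response so the room's resonances come to dominate.
# Assumes speech.wav and room_ir.wav exist and share a sample rate.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

def mono(x):
    return x.mean(axis=1) if x.ndim > 1 else x

speech, sr = sf.read("speech.wav")
room_ir, _ = sf.read("room_ir.wav")
y, ir = mono(speech), mono(room_ir)

for generation in range(16):                      # more passes, more resonance
    y = fftconvolve(y, ir)[: len(speech)]         # "re-record" in the same room
    y = y / np.max(np.abs(y))                     # renormalize so it doesn't clip
    sf.write(f"generation_{generation:02d}.wav", y, sr)
```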

Daniel Trueman, Princeton University: Digital Instrument Building and the Laptop Orchestra

Trueman, a professor of music, started off by talking about traditional acoustic instruments and the ‘fetishism’ of mechanics. Instruments are not, he stressed, neutral tools for expression: the physical constraints and connections of instruments shape how musicians think, the kind of music they play, and how they express themselves (for example, since many artists compose on the piano, the peculiarities of the instrument colour the music they create). But in digital instruments, there is nothing connecting the body to the sound; as Trueman put it, “It has to be invented. This is both terrifying and exhilarating.” Typically, the user performs some actions, which are transduced by sensors of some sort, and then converted into sound. But the mapping between the sensor inputs and the resultant audible output is pretty much under the control of the creator. Trueman presented some examples of novel instruments and techniques for this mapping, as well as some challenges and opportunities: for example, the physical interfaces of digital instruments tend to be a little ‘impoverished’ (consider how responsive an electric guitar is), but these instruments can also communicate wirelessly with each other, for which there is no acoustic analog.
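
Trueman’s point about the mapping being invented is easy to make concrete: here is one made-up sensor stream and two equally arbitrary ways of turning it into sound, neither more ‘natural’ than the other.

```python
# Two arbitrary mappings from the same (made-up) sensor stream to sound, to make
# the point concrete: in a digital instrument, the designer invents the link
# between gesture and audio.
import numpy as np
import soundfile as sf

sr = 44100
sensor = np.abs(np.sin(np.linspace(0, 8, 200)))    # stand-in for 200 sensor readings

def render(freqs, amps, note_len=0.05):
    """Synthesize one short sine tone per (frequency, amplitude) pair."""
    t = np.linspace(0, note_len, int(sr * note_len), endpoint=False)
    return np.concatenate([a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps)])

# Mapping 1: sensor value -> pitch (constant loudness)
sf.write("mapping_pitch.wav", render(220 + 660 * sensor, np.full_like(sensor, 0.5)), sr)

# Mapping 2: sensor value -> loudness (constant pitch)
sf.write("mapping_loudness.wav", render(np.full_like(sensor, 440.0), 0.8 * sensor), sr)
```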

Elaine Chew, University of Southern California: Demystifying Music and its Performance

Chew, with a background in both music and operations research, presented a number of her projects which use visualization and interaction to engage non-musicians with music. The MuSA.RT project visualizes music in real-time on a spiral model of tonality, and she demonstrated it for us by playing a piece by the spoof composer PDQ Bach on a keyboard and showing us how its musical humour derived in part from ‘unexpected’ jumps in the notes, which were clearly visible. She also showed us her Expression Synthesis Project, which uses an interactive driving metaphor to demonstrate musical expressiveness. The participant sits at what looks like a driving video game, with an accelerator, brake, steering wheel and a first-person view of the road. The twist is that the speed of the car controls the tempo of the music: straightaways encourage higher speed and therefore a faster tempo, and tight curves slow the driver down. As well as giving non-musicians a chance to ‘play’ music expressively, the road map is itself an interesting visualization of the different tempi in a piece.
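
For the curious, the geometry behind MuSA.RT is straightforward to sketch: in Chew’s spiral array, pitches sit on a helix along the line of fifths, and the ‘center of effect’ of whatever is currently sounding is a weighted average of those positions, so an unexpected harmony shows up as a visible jump. A condensed (and simplified) sketch:

```python
# Condensed sketch of the geometry behind Chew's spiral array: pitches sit on a
# helix along the line of fifths, and the "center of effect" of the sounding
# notes is their weighted average position. Simplified from the published model;
# the radius r and rise-per-fifth h are tunable parameters there as well.
import numpy as np

r, h = 1.0, np.sqrt(2.0 / 15.0)    # one commonly cited parameter choice

def pitch_position(k):
    """Position of pitch k, counted in steps along the line of fifths
    (..., F = -1, C = 0, G = 1, D = 2, ...)."""
    return np.array([r * np.sin(k * np.pi / 2), r * np.cos(k * np.pi / 2), k * h])

def center_of_effect(pitches, weights):
    """Duration-weighted average of the sounding pitches' positions."""
    pts = np.array([pitch_position(k) for k in pitches])
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * pts).sum(axis=0) / w.sum()

# C major triad: C = 0, G = 1, E = 4 on the line of fifths, weighted by duration
print(center_of_effect([0, 1, 4], [2.0, 1.0, 1.0]))
```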

Some quotes from the panel discussion:

Repetto on trying to build physicality/viscerality into digital instruments: “Animals understand that when you hit something harder, it’s louder. But when you hit your computer harder, it stops working.”

Trueman on muscle memory: “You can build a typing instrument that leverages your typing skills to make meaningful music.”

Chew on Rock Band and other music games: “It’s not very expressive: the timing is fixed, with no room for expression. You have to hit the target—you don’t get to manipulate the music.” More generally, the panelists agreed that the democratization of the music experience and communal music experiences were a social good, regardless of the means.

Things I never expected to write on this blog: I am grateful to the National Academy of Engineering, IBM, and Olin College for sponsoring this post, however inadvertently.


Thinking about playlists

September 5, 2010

I love playlists. I live and die by them, and make new ones almost daily. My car doesn’t have an MP3 input and I have a daily commute, so a good chunk of my music listening is in the form of burned CDs—de facto sub-75-min playlists. And I realize it’s antediluvian, but I still trade mix CDs with many of my friends (via snail mail, no less; I think we all love the charm of the hand-made packages in the post), and those CDs are one of my favourite modes of music discovery.

Almost all the playlists I make are custom, largely by necessity: the songs are usually hand-selected, and they are frequently also hand-ordered. In time for this weekend’s London Music Hack Day, The Echo Nest debuted a powerful and flexible set of tools to algorithmically generate playlists, and I did a little gedanken experiment to compare the playlists you can currently generate with these tools with the kinds of playlists that I make.
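
For reference, a request to the new playlist API looked roughly like the sketch below; I’m writing the parameters from memory, so treat the details as approximate rather than gospel:

```python
# Rough sketch of a call to the Echo Nest playlist API (v4 REST interface),
# written from memory -- parameter names and response fields may not be exact.
import requests

resp = requests.get(
    "http://developer.echonest.com/api/v4/playlist/static",
    params={
        "api_key": "YOUR_API_KEY",     # placeholder
        "type": "artist-radio",        # songs by and similar to the seed artist
        "artist": "Elliott Smith",
        "results": 20,
        "format": "json",
    },
)
for song in resp.json()["response"]["songs"]:
    print(song["artist_name"], "-", song["title"])
```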

Here are some examples of playlists I’ve made or updated recently, ordered roughly from least to most amenable to automating:

New music: I have a playlist called ‘Current’ where I throw recently downloaded music for further listening.

Albums: If I download an entire album, I’ll keep it together, at least for the first few listens (and then I decide that I really only like “Sprawl II” off the new Arcade Fire album).

Artists: Today I will listen to every Elliott Smith song I own.

Playlists by geography: I have a playlist called ‘CanCon’ that I made for a friend of mine who just moved to Canada. Amazingly, this looks reasonably easy to do with the Echo Nest’s new APIs, although it might require a bit of careful tweaking to include my hometown of Toronto, since it’s well south of the 49th parallel.

Workout playlists: Recently, I’ve been doing musical sprint intervals: moderately-high tempo songs intermixed with short, loud, fast songs by punk bands like the Ramones or Pansy Division.

Playlists of bands with upcoming shows: Tourfilter, a Boston-based concert-tracking service, has a monthly residency at a local bar, where I DJ’ed a few months ago. All of the songs are by artists playing in the Boston area in the next month or so.

Songs I can play on bass: Sadly, a very short and slowly growing list right now (“Green Onions,” “Seven Nation Army,” and a handful more).

My friends’ bands: A playlist of music by people I know.

Playlists for other people: Playlists or mix CDs I’ve made for friends, full of music that I think they’ll like, based on what I know of their tastes.

Playlists by mood: Usually not just ‘happy’ or ‘sad,’ though. I have a recent playlist I made as a soundtrack when I was feeling melancholy and restless (lots of Waterboys, Sea Wolf, Frightened Rabbit).

Playlists by theme: As an example, I made a playlist of ’embarrassing’ music for a friend of mine, which was mostly songs at the intersection of nerdy, funny and bawdy (think The Bloodhound Gang’s “The Bad Touch”).

‘Best of’ lists: Like most music geeks, I like making lists of the stuff that I like best (although I guess if I were a real music geek, I’d describe it as ‘the best music’).

What do I feel like?: Quasi-random concatenations of whatever I feel like listening to on a given day.

The first half are pretty straightforward. The second half get a little tougher—some of them are nearly algorithmic, but only if you happen to be me. The thought processes behind the last two are opaque even if you are me. Coming up with those last few seems very close to a musical Turing Test; not that I’d put that beyond the ability of people like the Echo Nesters, although there might be a few existential crises along the way.


Music and tech news roundup

March 31, 2009


First off, the zed equals zee happy hour was a rousing success, with lots of terrific conversation. It was fantastic to meet so many Boston musicians and bloggers face to face, including some of the people behind Boston Band Crush, The Limits of Science, Electric Laser People, and Paul Lamere of Music Machinery and his colleagues at The Echo Nest. It’s a measure of how friendly the crowd was that there was waaaaaaay too much money on the table at the end of the night; if you came out last night, join us for the next zed equals zee happy hour in a few months and the first round is on us.

More news:

Activision and RedOctane have announced that DJ Shadow has signed on to help develop and test the hardware for DJ Hero, set for release later this year. The turntablist may also appear as a playable character. There are not-terribly-substantiated rumours (which I’ll happily spread) that Daft Punk may also be involved in the new game. [via Resident Advisor]

On a related note, MTV reports that Rock Band players have downloaded over 40 million songs, and that the franchise has taken in nearly a billion dollars in revenue.

According to a recent report, live music has now overtaken recorded music in revenue in the UK (£904 million vs £896 million), although that figure includes neither sponsorship revenues nor digital licensing, which makes me wonder a bit about the author’s job title of Chief Economist. No word on whether the numbers include Rock Band downloads. [NME] [EDIT: removal of unwarranted snark; see comments for details]

Mission of Burma is blogging the recording of their new album! [via @clickyclicky]

DIY donk. Remix any track into the Northern England sound of bouncy techno. Music Machinery’s donkified version of The Postal Service’s “Such Great Heights” made me laugh out loud, although I still had to turn it off after about 15 seconds.

DIY…keybass? bass keytar? Whatever, it’s pretty awesome.

MP3: Amanda Palmer – Such Great Heights* [more Amanda Palmer]

*the anti-donk version