I’ve been giving Google Music a hard time. That’s only because I thought they’d have a lot more solved by now, and perhaps they’ve at least solved one important issue: the mix tape issue.
You see, Apple’s Genius isn’t such a genius. When you ask it to gather a playlist, it does an okay job of collecting like-sounding artists, but it typically relies on similarities between artists rather than between the actual songs. Ya dig?
But Music Beta by Google has its own version of Genius: Instant Mix, a playlist generator developed by Google Research. The difference, you see, is that:
Instant Mix uses machine hearing to extract attributes from audio which can be used to answer questions such as “Is there a Hammond B-3 organ?” (instrumentation / timbre), “Is it angry?” (mood), “Can I jog to it?” (tempo / meter) and so on. Machine learning algorithms relate these audio features to what we know about music on the web.
Because we combine audio analysis with information about which artists and albums go well together, we can use both dimensions of similarity to compare songs.
If you pick a mellow track from an album, we will make a mellower playlist than if you pick a high energy track from the same album. (more here)
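The quoted description boils down to blending two dimensions of similarity: one from audio analysis (timbre, mood, tempo) and one from web-mined artist/album relationships. Google hasn't published its actual scoring function, so here's a minimal, hypothetical sketch of the idea; the feature vectors, the `artist_affinity` table, and the 60/40 weighting are all illustrative assumptions, not Google's implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two audio-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def song_similarity(seed, candidate, artist_affinity, audio_weight=0.6):
    """Blend audio-feature similarity with artist-level affinity.

    seed / candidate: dicts with 'artist' and 'features' keys (hypothetical).
    artist_affinity: web-mined "these artists go well together" scores,
    keyed by (artist_a, artist_b). The 0.6 audio weight is arbitrary.
    """
    audio_sim = cosine(seed["features"], candidate["features"])
    artist_sim = artist_affinity.get(
        (seed["artist"], candidate["artist"]), 0.0)
    return audio_weight * audio_sim + (1 - audio_weight) * artist_sim

# Toy example: features are (energy, mellowness). A mellow seed track
# ranks a mellow candidate above a high-energy one by the same artist,
# which is exactly the behavior the quote describes.
seed = {"artist": "A", "features": [0.2, 0.9]}
mellow = {"artist": "B", "features": [0.25, 0.85]}
loud = {"artist": "B", "features": [0.9, 0.1]}
affinity = {("A", "B"): 0.8}
```

Note that because artist affinity is the same for both candidates, only the audio dimension separates them here, which is why picking a mellow seed yields a mellower mix.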
Google’s Instant Mix does a far superior job of creating seamless mixes than Genius does. It essentially adds a deeper layer of filtering data to the recommendation engine. Kind of like if Netflix let you search not just by genre, but by what CAMERA they used to shoot the film. Saweeet.
Music Consciousness
Voyno