I'm looking forward to AI models that can describe, classify, and recommend music. Obviously there's Shazam and things like Cyanite (https://cyanite.ai/), but hopefully some good open-source models will show up before long too.
You could imagine Ishkur's Guide V3 being constructed automatically by AI models, placing every track somewhere on the map, listing all the genres and influences, etc.
I dunno, I like the human touch; music genres aren't an exact science (is this blackened death metal or deathened black metal? Is there a difference?). I mean the data is there; see https://everynoise.com/, probably linked elsewhere, for a very big list of music genres + examples that IIRC come from Spotify's data lake. That can be used to train a model, I suppose.
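If someone did want to try that, a minimal sketch might look like the following. It assumes you've assembled your own labelled clips into a hypothetical genre_examples.csv (path,genre) file, since Every Noise doesn't publish a downloadable dataset; features come from librosa and the classifier from scikit-learn. Purely illustrative, not a recipe.

    # Minimal sketch of a genre classifier trained on labelled clips.
    # Assumption: a local genre_examples.csv with "path,genre" columns pointing
    # at audio files you've collected yourself -- hypothetical, for illustration.
    import csv
    import numpy as np
    import librosa
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def clip_features(path):
        """Summarise a clip as mean/std of MFCCs plus tempo -- crude but cheap."""
        y, sr = librosa.load(path, duration=30.0, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1), np.atleast_1d(tempo)])

    rows = list(csv.DictReader(open("genre_examples.csv")))
    X = np.stack([clip_features(r["path"]) for r in rows])
    labels = np.array([r["genre"] for r in rows])

    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))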
I'd like to see a recommendation program that trains on the many, many aspects of music itself that a person listens to. Rather than artists/labels/style slots/norms/years of release/charts ... it'd analyze your preferences in tempo, melody, harmony, instruments (type and number), vocal styles, novelty. (Getting started would take a while, say, over a month.)
Then, to refine its model, it'd have you rate some picks (old and new, based on what it knows so far) and analyze how your responses fit its predictions. Would it get better faster for people who like a wide variety, or for those with a few specialities?
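A rough sketch of what that could look like, with purely made-up feature names, values, and track names (none of this comes from an existing service): each track gets a small feature vector over musical attributes, and a per-listener weight vector gets nudged toward whatever they rate highly.

    # Hedged sketch: online preference learning over hand-rolled musical features.
    # All feature names, numbers, and track names here are hypothetical.
    import numpy as np

    FEATURES = ["tempo_norm", "minor_key", "harmonic_density",
                "n_instruments_norm", "vocals_present", "novelty"]

    class TasteModel:
        def __init__(self, n_features, lr=0.05):
            self.w = np.zeros(n_features)   # learned preference weights
            self.b = 0.0                    # baseline rating
            self.lr = lr

        def predict(self, x):
            return float(self.w @ x + self.b)

        def update(self, x, rating):
            """One online least-squares step on a single (track, rating) pair."""
            err = rating - self.predict(x)
            self.w += self.lr * err * x
            self.b += self.lr * err

        def pick_next(self, candidates):
            """Suggest the candidate the model currently expects you to like most."""
            return max(candidates, key=lambda name: self.predict(candidates[name]))

    # toy usage: two rated tracks, then a pick among unheard candidates
    model = TasteModel(len(FEATURES))
    model.update(np.array([0.7, 1, 0.8, 0.3, 0, 0.9]), rating=5)  # liked: dense, novel
    model.update(np.array([0.5, 0, 0.2, 0.9, 1, 0.1]), rating=2)  # disliked: safe, vocal-led
    candidates = {
        "track_a": np.array([0.8, 1, 0.7, 0.2, 0, 0.8]),
        "track_b": np.array([0.4, 0, 0.3, 0.8, 1, 0.2]),
    }
    print(model.pick_next(candidates))  # picks track_a for this listener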
That is Pandora's entire business model. They (used to?) have music majors review every track that came in against a bunch of criteria.
Sadly, it never seemed to really scale, and even today Pandora doesn't recommend the same broad swaths of music that Spotify does, but Pandora's recommendations tend to be more on point.
There was a brief window when Pandora was available in Canada (long ago now), but I still remember how good their recommendations were. Nothing matches them even today.
What you want is a model so advanced that it would probably be capable of just generating music. And actually, if all you want is similar-sounding music streaming into your ears, then automatic generation is the natural continuation of automated recommendation.
Pandora was arguably the first LLM-style trained AI. They just used music "genes" (words), tokenized on those and the relationships between them. The difference was that its success model was based on a variable, semi-arbitrary input (personal tastes) with weighting, versus an objective one (dictionaries/documents, or, for things like game-playing AI, a goal to reach).
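In that spirit, a toy version (explicitly not Pandora's actual Music Genome algorithm, with made-up gene names and scores) might keep a weighted taste profile over gene vectors and rank unheard tracks by similarity:

    # Toy gene-weighting recommender -- NOT Pandora's real algorithm.
    import numpy as np

    GENES = ["four_on_floor", "acoustic_timbre", "female_vocal",
             "minor_tonality", "syncopation"]

    tracks = {  # made-up gene scores in [0, 1]
        "track_a": np.array([1.0, 0.1, 0.0, 0.6, 0.3]),
        "track_b": np.array([0.2, 0.9, 1.0, 0.4, 0.1]),
        "track_c": np.array([0.9, 0.2, 0.0, 0.7, 0.5]),
    }

    def update_profile(profile, genes, thumbs_up, step=0.3):
        """Pull the taste profile toward liked gene vectors, away from disliked ones."""
        return profile + step * (genes if thumbs_up else -genes)

    def rank(profile, candidates):
        """Order candidates by cosine similarity to the taste profile."""
        def cos(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
        return sorted(candidates, key=lambda t: cos(profile, candidates[t]), reverse=True)

    profile = np.zeros(len(GENES))
    profile = update_profile(profile, tracks["track_a"], thumbs_up=True)
    profile = update_profile(profile, tracks["track_b"], thumbs_up=False)
    print(rank(profile, {"track_b": tracks["track_b"], "track_c": tracks["track_c"]}))
    # -> ['track_c', 'track_b'] for this listener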
This really depends on who you're listening to, e.g. compare some archetypal big-room corporate techno performer like Charlotte deWitte vs. some crate digger like Om Unit, or, since we're talking about techno, uhm, Tommy Fourseven.
I'm big on the UK continuum - it gets a lot of UK Bass music from Boiler Room sets, all the time. My music tastes are very obscure... and it works incredibly well for me. I don't know about techno/house.
I personally am pretty fine with using Ishkur's guide on my own, searching by myself for what I like, filtering out what I don't like, and knowing what it is I don't like and why. I've been going back to it for the last 3 years and I'm not even halfway through yet.