I do wonder why AI music is so lame. Every previous technological advancement in music produced amazing new sounds and styles but AI music seems to just be emulating lowest common denominator pop sludge. Where's the Bruce Haack or Kraftwerk of AI? Surely there's a previously unimaginable sound palette out there that we could be pulling from. Why is it all so BAD?
1. Platforms like Suno lack the granular control that can make a song distinctive and interesting. A prompt is an all-or-nothing paradigm. There is no gradual build towards a final result like in a normal creative process. Yes, you can supply lyrics, but that's hardly a substitute. And on top of that, it's painfully slow due to the nature of the technology.
2. As a result of 1, experienced music producers (familiar with regular DAWs) don't want to use it. They probably prefer something with instant feedback: tweak a synth param and you instantly hear its effect. And changing one instrument doesn't randomly affect unrelated things.
3. As a result of 2, the majority of AI-generated music is throwaway and/or created by amateurs who don't have the ear for what makes a song good.
I have a background in painting, and this is 100% the same problem in that domain. Without the ability to make precise, isolated changes it's not very useful. I expect most creative software to integrate gen AI eventually, but it needs to be done in a way that gives you a lot of precision.
I think that you're probably right here - it's the granularity. We'll probably see that improving as things move along and hopefully get something interesting out of it at some point. Quite frankly the alternative is too depressing to think about.
But (semi-)jokes aside, I think that AI music tools are just not fully developed yet. The current approach is more or less one-shotting a track, whereas I think a system that allows one to generate layers would bring in many new sounds. Something like a "speak to instrument" system, where you can hum melodies, generate instruments to play them, and then compose a track from all those individual parts.
This is exactly what everybody said about rap and drum machines and sampling when I was growing up in the mid to late 80s.
They were right and wrong. A lot of it was really formulaic bullshit, and much of it doesn't hold up at all. But it also spawned one of the most creative and exciting periods in music history.
Will this be the same? It feels like it won't, but that's how things feel in general because I'm old. So who knows?
Not a valid comparison, I feel. It may be hindsight, but rap and electronic music came from vibrant underground scenes, and to many critics and music fans of the time they were seen as at least interesting and at best groundbreaking.
AI music, on the other hand, comes not from the underground but from corporations. You'll be hard pressed to find any critics or music connoisseurs singing its praises.
It's hindsight. At the time it was a pretty mainstream opinion that it wasn't music at all, just talking over fake drums and stolen copies of other people's songs.
There's something about the transformer approach (stick a lot of data in, output tokens) that kicks out stuff which is kind of average, without much underlying understanding. I'm looking forward to AI that goes beyond that, but so far it's kind of bland.
As a classical music fan, I hear a kind of variation in depth and complexity that probably peaks with late Bach and Beethoven. I always thought it would be interesting if tech could go beyond that, but so far it's not close to equaling it.
I think it's because its main use in this particular context is to produce results that the creator does not have the skill to produce and/or does not want to invest the time to produce.
I guess you could argue that drum machines offered simplification/automation when they first appeared compared to the option of a human drummer, but also, those machines opened up all sorts of creative and stylistic possibilities that simply couldn't be done by sitting someone at a traditional drum kit. Using AI to make music doesn't do this -- it's a shortcut that has no argument in its favor whatsoever except that it saved the person making it time (and/or enabled them to generate something they couldn't have produced through their own work). That's why it is fundamentally uncool in a musical context, and always will be.
Actually not! Lyrics aside, it's a banger, especially towards the end. If this had been created by any tech house/electroclash producer, Tiga for example, it would be huge.
The bar or two of synth right at the end? The part they cut off almost as soon as it began, that happened right after the lazy lyrics about "British tube with a middle east zest"?
Yeah, I heard that. It was so "good" that dude-man here stopped playing the track when it showed up. ;)
If there is something good to this composition (which, again, is inclusive of its lyrics), then that goodness is the mechanical dryness of it all -- which is absurdly profound.
And to be clear, I'm very OK with mechanical music and also with absurd music. These things can be a lot of fun.
The absurd level of mechanicalness could have been a hook in and of itself with a bit of occasional embellishment. There was an obvious play on words about "British tube meat" that wasn't explored at all.
"British tube meat at its absolute best"
and then, for the next break
"My British tube meat is a dripping mess"
Lines like these basically write themselves. They would help to complete a hilariously absurd mental image, and yet they were somehow just beyond the grasp of the bot.
So the song's got no hook. It can't even succeed at being background noise because the missed opportunities shine so brightly that they're impossible to ignore.
2/10. Needs work. (In this way, it's very similar to the vibe-coded stuff I fuck around with.)
I don't know about music, but there are plenty of pioneers of AI art who were pretty interesting in my opinion. Mario Klingemann, Tom White, Memo Akten and Samim Winiger are some names I remember who made a lot of cool stuff. I admit I haven't kept up at what they're doing today, though (maybe because I left Twitter, and I think many of them did too).
Because people don't put the effort in. A lot of electronic music can be considered lazy: just press a button, turn a knob, boom, you have music. Right? But then you have someone like Aphex Twin making something unique out of these easy machines.
I'm sure someone can make unique or passable music with the help of AI tooling, but they can't do it by just saying "make me this music", no matter how much effort they think they have put into the prompt.
It is the same with anything else. I use AI to write a lot of code, but I'm constantly telling it to fix things, often the same type of error I told it about yesterday (things a junior engineer would have learned after a few months, it still gets wrong).
I don’t think it’s just about effort. It’s the nature of the technology.
If you practice piano, you will get better in some predictable way, even if it takes a long time.
If you spend more and more time tweaking a prompt, you will be pulling songs from some distribution of possible songs but you will never have the level of control that conventional music producers have.
When modern DAWs like FL Studio started democratizing music production, there was immediate backlash in the music production community. I know this because I lived through it. Music made with FL Studio was considered garbage: not serious music, just amateur stuff. "FL Studio users are incapable of making good music", etc. Of course, now well-respected musicians like Tyler, the Creator and Porter Robinson use FL Studio and there isn't really a question. This is a common theme every time some new method of creating music comes around; just look at how they called Dylan "Judas" when he went electric.
"Every previous technological advancement in music produced amazing new sounds and styles" is classic hindsight bias. In retrospect, once everything has sorted itself out and all the good music has risen to the top, it's easy to look back in history and point to the highlights. But when you live through it, it looks a lot more like a mess with no redeeming qualities.
It's easy to apply the same pattern of "people hated it, then liked it" but I think something's different about AI. I think a lot of the kneejerk reactions are subconscious but I don't think that means they're unfounded or invalid, they just haven't articulated the reason yet.
When AI image generation was a thing that hobbyists were messing around with (before it became good sometime in 2023), a lot of the creative types who abhor AI today were interested in it. Same with LLMs and stuff like AI Dungeon. (I don't think AI music generation had a similar hobbyist era, but I'm not sure.)
I think the main thing that changed was how big and commercial it became. There's nothing counter-cultural about AI anymore, it's become the polar opposite. Nobody was making billions selling synthesizers & convincing investors it would replace 99% of musicians.
FL Studio was absolutely a massive commercial success. Sure, nothing compared to AI, but in the music community bubble it was enormous - and still is. It did what AI is doing today: it made a previously very expensive and time-consuming process (buy a thousand-dollar guitar or other expensive instrument or synthesizer, rent a studio, get a producer, blah blah, etc.) extremely cheap. This immediately led to complaints - why is all music made with FL Studio so lame?
If we are going to say that the knee-jerk reaction to AI is somehow different I'd be curious to know what the difference is.
FL Studio has advanced a long way since it first came out. The software professionals are using today is nothing like it was in the '00s. Its name at the time, "FruityLoops", also didn't help its image as a pro tool.
It's probably not all so bad. There probably are people out there intentionally creating things with AI assistance that sound pretty rad.
But the idea of being able to just create endless music with low effort is too compelling for too many, so the good stuff is drowned out by the mass of low-effort slop being produced.
It is really easy to produce slop. However, humans generally don't inflict it on the rest of the world when they make it. Trust me, you don't want to hear my latest efforts on my 4-track recorder (which is why only I have heard it, via headphones, and it will stay that way until/unless I get a lot better). I learned a lot about what I need to study, but fixing rhythm is not easy.
AI music is not a technological advancement in music; it is just a side effect of advancements in generative AI. This extends to other areas where generative AI gains traction: AI-generated text is not an advancement in literature, AI-generated art is not an advancement in art, etc.
As for the lowest common denominator part, it is true for all types of generative AI. AI will almost always spit out the lowest common denominator of the medium it tries to generate.
If an AI improves developer productivity so much, why would maintainers of an OSS project want unknown contributors to sit in between the maintainer and the LLM? They'd be typing these queries into Claude Code themselves. To quote my colleague:
> We do not need a middleman to talk to AI models. We are not bottlenecked by coding.
Maybe you are not bottlenecked by coding, but there is a high probability that you will be bottlenecked by verifying the correctness of LLM-generated code.
Crazy how this doesn't register in people's heads. Has the real bottleneck ever been writing code, rather than reviewing it and everything that goes with it? Understanding the nuance and implications behind design decisions; strategy.
In any REAL workload, with good processes, code review makes the speed of code generation a moot point. You still move only as fast as you can review the code, and no, I won't debate whether you can rely on LLMs, statistical language predictors, to determine the correctness of code in the context of its business and technical implications.
If you are a responsible maintainer, you need to verify the correctness of the contribution whether you used an LLM to generate it or whether someone else did.
Having someone else be the AI middleman just introduces additional complexity and confusion.
I barely use AI myself, but a possible scenario is that the contributor spends something like 20 hours in total.
Something like: using the AI to get an initial bad version; tweaking the prompt; making some manual fixes; asking the AI to fix something else; noticing a related feature and asking the AI to add it; running some benchmarks and deciding to remove a small feature, or perhaps deciding between two similar implementations; adding a few more manual fixes here and there; running the extended version of the automatic tests and finding a weird bug in an unusual setup; making a few fixes with the AI and manually. After 20 hours of work, the final version has only 50 lines, each rewritten maybe 5 times. Now the maintainer can review just the final version in an hour or so.
This is very different from spending 5 minutes asking the AI to write a 1000-line patch that doesn't even compile, and sending it to the maintainer without looking at it.
I'm finding that AI, when successful, gives me 2-3x speedup. It's not the kind of thing I can give high-level instructions to like I can to a human.
I suspect the people who claim that AI works by only giving it high-level instructions are mostly working on "mindless" projects where a developer in the weeds wouldn't need to think very much.
> If an AI improves developer productivity so much,
You're not suggesting the only metric of productivity is lines of code are you? And that the only benefit of using LLMs is for generating code you're too lazy to type yourself?
This. GP is just old. I'm 51 now and almost everyone I know seems to think the same thing. Unless you work at it, music later in life will never evoke the same emotions as the music from when you were in your late teens/early 20s. Thing is, though, it's really not true, and if you work at it you realise pretty quickly that music today is just as good as it was at any time in the last 50 years (though I will concede that we'll probably never reach the highs of the late 60s and early 70s again - if you were a teenager then, OK. Music now is definitely better than in the 80s though, dude).
We're very aware that we need to balance our need to make money with the need to make Avalonia accessible for everyone. For this reason, for 12.0 we've made our VS Code extension totally free to use with no account needed and no usage restrictions.
What is the problem with submodules? I like to use them because it means the code I need from another repo remains the same until I update it. No unexpected breaking changes.
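For what it's worth, that pinning behaviour is easy to demonstrate with a throwaway pair of repos. A sketch (the /tmp/sub-demo paths and demo identities are made up for illustration; recent Git also requires `protocol.file.allow=always` to add local-path submodules):

```shell
set -e
mkdir -p /tmp/sub-demo && cd /tmp/sub-demo && rm -rf lib app

# An "upstream" repo we depend on
git init -q lib
git -C lib -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "v1"

# Our project, which pins lib at its current commit
git init -q app && cd app
git -c protocol.file.allow=always submodule add -q /tmp/sub-demo/lib lib
pinned=$(git -C lib rev-parse HEAD)

# Upstream publishes a new (possibly breaking) commit...
git -C ../lib -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "v2"

# ...but our checkout still points at the commit we pinned
test "$(git -C lib rev-parse HEAD)" = "$pinned" && echo "still pinned"

# Updating is a separate, explicit step
git -C lib fetch -q origin && git -C lib checkout -q FETCH_HEAD
test "$(git -C lib rev-parse HEAD)" != "$pinned" && echo "updated on request"
```

And even after that explicit update, the superproject's recorded pointer only moves once you commit the changed gitlink, so nothing shifts underneath you until you ask for it.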
>Or maybe ask yourself why are you doing open source in the first place?
I, like everyone, started working on OSS because it's fun. The problem comes when your project gets popular: either you try to make it your job or you abandon the project, because at a certain point it becomes like an unpaid job with really demanding customers.
That makes sense but doesn't answer "why do open source", though. In fact, it only shows that there is little incentive to pursue a serious open-source project, and that you should just stick to hobby projects while acknowledging they'll never go anywhere. I struggle to answer that myself.
Lol, I never in a million years expected my project to get 100 users never mind the tens of thousands it now has. Sometimes others make the decision for you ;) it's still your baby though.
I also made a horrible life decision in starting a company around developer tools, and I agree. Taking one of the comments from the PR:
> It's insane to blame everybody else for not being able to create a viable business model from an OSS project. Everybody who is using Tailwind is actually SUPPORTING Tailwind. Everybody who is reporting bugs properly is SUPPORTING Tailwind. Everybody who is collaborating and PRs changes is SUPPORTING Tailwind.
> Tailwind grew a lot due to community acceptance and support, and collaborations.
> The only person to blame here is the CEO/Main maintainer of Tailwind. They've made bad decisions, hired coders without knowing how to make enough money to pay them.
> If you want to monetize a free service, you either know what you do or you make mistakes and lose what you've built. It was always a risk; we are not at fault.
> @adamwathan I respect you for everything you've done, but you need to take a few breaths, take a walk, think, sleep, come back, apologize to the community, and start working on solutions/crisis management.
And you always know that when you open the GH profile of people saying such things, you'll see an empty timeline. This particular user has a single repository, which he's committed to a handful of times over the last year, and has set up a GitHub sponsorship for it.
I try to remind myself that these types of people are a (loud) minority but it's absolutely soul destroying.
Yep. I almost edited my comment to include that one as well! "Insane", indeed.
As you note, the tire-kickers were the worst -- people who forked the Linux kernel (with no additional commits) trying to process the entire repo on a free plan, for example, then complaining (loudly) when cut off.
A lesson for the ages: that cultured (or not) rich person over there isn't any more intelligent or prescient than your neighbour or colleague, and most certainly no more than your partner. They just have more money.
Seriously though, it’s a bit of an amusing coincidence that the Leibniz biscuit and the fig Newton were both independently invented in 1891 (at least according to Wikipedia).