>> Man pushes slot machine lever 5,000x and edits together best slot machine sounds to make song. Proceeds to beat chest about being a music maker.
While you may have meant that as a criticism, I don't think that is an inaccurate description of the process, and I do hope my post describes it as such. Yes, sampling from the latent space is an exercise in curating and harnessing randomness.
>> ...the mental justifications in the blog post are painful. I think you may find musicians bristle at your claim that all art is circular,
Yes, I suspect many/most do. This is not a method of "making music" that could have been contemplated 5 years ago (harnessing randomness = learning an instrument for 10+ years = wtf!!!??). But it exists today, and there are a lot of folks like me who are enabled by it. There will be some among us (not me!) who come along who are far more talented, and when this tech is shown to enable their talent, then I think the bristling will lessen.
>> [various issues about copyright]
I write my own lyrics from my meat-brain, so I may own the copyright to the words (maybe?). But I've chosen to release the words out into the world with no expectation of keeping that. And yes I understand I do not own the copyright to the songs/music/etc (I have read Suno's rights and recent Udio/UMG deals etc). In any case, I don't much care about that, nor am I looking to make $$$ off this. I wrote (words) for a long time... and can now put them to music. If there are any others of you out there who would like to do that, I would suggest you try these paths as well.
Just try not to push the button 1x... do it 5,000x! It's the effort and vision that keeps you from "slop" - and maybe you'll be the one who'll make it great fucking art!
I think it’s worth separating what feels like songwriting from what’s actually considered songwriting - legally. Curating randomness or generative tools can be a really cool way to explore ideas or turn poetry into something musical, and I totally get the appeal there!
That said, the “vibes on = songwriting” idea doesn’t quite line up with how authorship is defined. The earlier Suno support article spells it out pretty clearly:
You’re generally not considered the owner of the songs, since the output is generated by Suno.
In the U.S., copyright law requires meaningful human authorship. Music made entirely with AI doesn’t qualify, and writing a prompt alone isn’t considered composing the music or lyrics.
If you wrote the lyrics yourself, you do own those outright and can copyright them independently.
I know this is mostly for fun, but once tracks are distributed to Spotify or other DSPs and start earning money, it does get into riskier territory.
--
All that said, I appreciate the encouragement to experiment with new tools. Check out my handle on Spotify/Youtube and you'll see why this is a big part of my life ;)
Totally agree with you on the copyright aspect: that was not an area I touched upon in the post, so thank you very much for pointing it out, since other readers may certainly want to know. Yes I am on Spotify, but that's more because it seems like it's the only viable way to even share my stuff with friends-and-family, given how completely it's become the base mode of distribution! Certainly haven't seen a $ from it, nor will I ever.
In general, I actually thought the entirety of your comment was totally on point and I'm very glad you made it. Look, I know I'm not "making music" in any sort of traditional manner and I know how it's perceived. But it is something I've been devoting more time to, and getting a lot of pleasure out of. Ultimately I want to pass the inside of my brain on to other humans, and feel like this is a process that might help me do that given the particular circumstances of my genetic lottery (that is, vs using my vocal cords to sing or my fingers to play a violin). But I know this whole endeavor sounds a bit fake/slop... still I'd rather not put up euphemistic descriptions of my "new band" on twitter/insta but rather call it what it is. After all "songxytr" is pronounced "songshitter."
And thanks for your handle... I had missed that. Given I was sitting in Goa *yesterday... EDM is very much the mood.
Well I do believe it is "music" but whether it's any "good" I guess idk! I do enjoy the process though and believe there would be others who would as well.
Tools like Suno are fundamentally enabling. I'm about 40 years old and never "had the music" - not for lack of trying (music lessons at a young age)... but could never carry a tune or keep rhythm. I suppose it's what being dyslexic feels like. If I were educated in a culture where music was fundamentally as important as reading or math, I suppose I would have spent enough hours on it to eventually be passable... but I got frustrated and the music lessons stopped. But that doesn't mean I stopped appreciating or wanting to make music!
And then comes Suno (and OpenAI's Jukebox before that), and it felt like my brain exploded... like the classic scene in a superhero movie when the power was given to me. Is my music good? No - but I spent years writing and fashioning poetry and all of a sudden can put that to music... hard to explain how awesome that feels. And I love using the tools; they keep getting better, and it's been fundamentally empowering. I know it's easy to say generative art is generative swill... but "learning Suno" is no different than "learning guitar".
If you enjoy it, I'd leave it at that - that's all that matters.
It's a pretty absurd claim to say that learning Suno is no different than learning a musical instrument. My 8 year old nephew was cranking out "songs" in Suno within an hour of being introduced to it. Reminds me of when parents were super impressed that their 3-year old could use an iPad.
Generative tools (visual, auditory, etc.) can serve as powerful tools of augmentation for existing creators. For example, you've put together a song (melody/harmony) and you'd like to have AI fill out a simple percussive section to enrich everything.
However, with a translation as vast as "text" -> "music" in terms of medium, you can't really insert much of yourself into a brand-new piece outside of the lyrics, though I'd wager 99% of Suno users are too lazy to even do that themselves. I suppose you can act as a curator reviewing hundreds of generated pieces, but that's a very different thing.
I always get a little confused when I hear non-musicians say that something like Suno is empowering when all they did was type in, "A Contrapuntal hurdy-gurdy fugue with a backing drum track performed by a man who swallowed too many turquoise beads doing the truffle shuffle while a choir gives each other inappropriate friendly tickles".
> My 8 year old nephew was cranking out "songs" in Suno within an hour of being introduced to it. Reminds me of when parents were super impressed that their 3-year old could use an iPad.
You imply "it is Prompt -> Song," but in reality it is "Prompt -> Song -> Reflection -> New Prompt -> New Song...". It is a dialogue. And in a dialogue you can get to places where neither of you could go alone.
As software developers we know that multiple people contribute to a project inside a git repo, and if you take any one person's work out, it does nothing useful by itself. Only when they come together do they make sense. What one dev writes builds on what other devs write. It's recursive dependency.
The interaction between human and AI can take a similar path. It's not a push-button vending machine for content. It is like a story writing itself, discovering where it will end up along the way. The credit goes to the process, not any one in isolation.
It’s really not. It’s like having interdimensional Spotify where you can describe any song and they will pull it up from whatever dimension made it and play it for you. It may empower you as a consumer but it does not make you a creator.
I dunno, based on Spotify's recommendation engine, AI is absolutely sufficient to make anyone a creator ;P
Almost all naturally-generated music is derivative to one degree or another. And new tools like AI provide new ways to produce music, just like all new instruments have done in the past.
Take drum and bass. Omni Trio made a few tracks in the early 90s. It was interesting at the time, but it wasn't suddenly a genre. It only became so because other artists copied them, then copied other copies, and more and more kept doing it because they all enjoyed doing so.
Suno ain't gonna invent drum and bass, just like drum machines didn't invent house music. But drum machines did expand the kinds of music we could make, which lead to house music, drum and bass, and many other new genres. Clever artists will use AI to make something fun and new, which will eventually grow into popular genres of music, because that's how it's always been done.
You can do exactly what you describe with interdimensional Spotify. People can describe all kinds of fun and interesting things that can be statistically generated for them, but they still didn’t make anything themselves unlike in your other examples of using new tools.
Japanese oldies became a trend for a while - the people who found and repopularised the music don't get to say they created it, or how awesome it is to have mastered the musical instrument of describing or searching for things. Well, of course they can, but forgive me if I don't buy it.
Maybe when there is actual AGI then the AI will get the creative credit, but that’s not what we have and I still wouldn’t transfer the creative credit to the person who asked the AGI to write a song.
> Maybe when there is actual AGI then the AI will get the creative credit, but that’s not what we have and I still wouldn’t transfer the creative credit to the person who asked the AGI to write a song.
When artists made trance, the creative credit didn't go to Roland for the JP-8000 and 909, even though Roland was directly responsible for the fundamental sounds. Instead, the trance artists were revered. That's good.
> Japanese oldies became a trend for a while - the people who found and repopularised the music dont get to say they created it and how it’s so awesome
I'd bet there are modern artists who sampled that music and edited it into very common rhythm patterns, resulting in a few hit songs (see The Manual by The KLF).
> Take drum and bass. Omni Trio made a few tracks in the early 90s. It was interesting at the time, but it wasn't suddenly a genre. It only became so because other artists copied them, then copied other copies, and more and more kept doing it because they all enjoyed doing so.
Musicians don't just copy; everyone adds something new. It's like programmers taking some existing algorithm (like sorting) and improving it. The question is: can a Suno user add something new to the drum-and-bass pattern, or can they only copy? Also, since it uses a text prompt, I cannot imagine how you even edit anything. "Make note number 3 longer by a half"? It must be a pain to edit the melody this way.
> Musicians don't just copy; everyone adds something new
Not everyone. I've followed electronic music for decades, and even in a paid-music store like Beatport, most artists reproduce what they've heard, and are often just a pale imitation because they have no idea how to make something better. That's the fundamental struggle of most creatives, regardless of tool or instrument.
I haven't tried Suno, but I imagine it's doing something similar to modern software: start with a pre-made music kit and hit the "Randomize" button for the sequencer & arpeggiator. It just happens to be an "infinite" bundle kit.
Sampling is not just cutting a fragment from a song and calling it a day. Usually (if you look at Prodigy's tracks for example) it includes transformation so that the result doesn't sound much like the original. For example, you can sample a single note and make a melody from it. Or turn a soft violin note into a monster's roar.
As for DJ'ing, I would say it is a pretty limited form of art, and it requires a lot of skill to create something new this way.
Yes, that's what people are doing with AI music as well. Acting like there's some obvious "line" of what constitutes meaningful transformation is silly.
Well, I made songs with my lyrics that bring tears and memories to my audience. Don't know what other creator things you are talking about, but this to me is creating.
Well, at least you asked a computer over a series of requests to statistically generate a work based on it having previously ingested lots of works by actual creators and your audience liked it, and I won't take that from you.
clicking on a button until you like what you hear is not "making music". I have nothing against these tools, but the hubris of the people using them is insane
What button? Again with the vending machine idea. No, it's language prompting, language has unbounded semantic space. It's not one choice out of 20, it's one choice out of countless possibilities.
I give my idea to the model, the model gives me new ideas, I iterate. After enough rounds I get some place where I would never have gotten on my own, or the model gotten there without me.
I am not the sole creator, neither is the model, credit belongs to the Process.
So if I have a melody in my head, how do I make AI render it using language? Even simpler, if I can beatbox a beat (like "pts-ts-ks-ts"), how do I describe it using language? I don't feel like I can make anything useful by prompting.
You record yourself whistling it and put it in as an input.
I've been recording myself on guitar and using suno to turn it into professional quality recordings with full backing band.
And I'm not trying to sell it, I just like hearing the ideas in my head turned into fully fleshed music of higher quality than I could produce with 100x more time to invest into it
This is closer to actually creating rather than generating music. However, this cannot be done with a text prompt, which the comment above claimed is expressive enough.
Actually, an "autotune" AI that turns out-of-key, poor singing into a beautiful melody while keeping the voice's timbre would not be bad.
Well then I have news for you... That's what Suno is. You can generate from simple text prompt, you can describe timings and chord progressions and song structure. You can get very detailed, even providing recordings
Yes the barrier for entry is low, but there is a very high ceiling as well
I just tried it out because of the discussions on this thread, and I got to say I land squarely on the side of this is neat, but it is not artistry. Every little thing I generated sounded like things I've heard before. I was trying hard to get it to create something unique, using obscure language or ideas. It didn't even get close to something interesting in my opinion, every single output was like if you combined every top 40 song ever made and then only distilled out the parts relevant to certain keywords in a prompt.
These tools will probably be great for making music for commercials. But if you want to make something interesting, unique, or experimental, I don't think these are quite suited for it.
It seems to be a very similar limitation to text-based llms. They are great at synthesizing the most likely response to your input. But never very good at coming up with something unique or unlikely.
That's a testable hypothesis. Go sample every AI output until you find one that doesn't have a previously created 1:1 analogue.
Unless of course you mean "original" as in, some kind of wishy washy untargetable goal that's really some appeal to humanity, where any piece of information that disagrees with your hypothesis is discarded because it is unfalsifiable. Original might as well mean "Made by a human and that's it" which isn't useful at all.
I messed around with Udio when it first came out, and it wasn't just "write a prompt, and there's your song."
You got 30 seconds, of which there might have been a hook that was interesting. So you would crop the hook and re-generate to get another 30 seconds before or after that, and so on.
I would liken it more as being the producer stitching together the sessions a band have recorded to produce a song.
This is a very old argument within artistic communities.
In cinema, authorship has resoundingly been awarded to the director. A lot of film directors go deep in many creative silos, but at its core the process is commissioning a lot of artists to do art at the same time. You don't have to be able to do those things yourself. Famously, some anime directors have just been hired off the street.
In comics things went the other way. Editors have been trying to extract credit for creative work for a long time. A lot of them have significant input in the creative process, but have no contractual basis for demanding credit for that input. It frustrates them. They can also just commission work, or they can have various levels of input in to the creative process, up to and including defining characters entirely.
Really then, in your example, there's clearly a point where you have had enough creative input to be part of the artistic endeavor. One judge in China ruled in favour of the artist after they proved that they had completed 20-odd revisions of the artwork before watermarking it.
That is of course, assuming we only follow your strict, reductionist argument. Even for AI art, most generators these days take more than text input. You can mask areas, provide hand drawn precursor art and a lot of other things. And that also assumes no post processing.
Not all AI-generated items will be art. But what I find offensive is the judgement that, as a class, nothing touched by AI could be considered art. Mostly because I lived through "Digital Art is not Art" and "Computer Games are not Art"; proponents of both got overtaken by history and rightly shamed.
I never claimed you can't use AI tools. I never claimed Digital art is not art. Don't imply I should feel shame for questioning the world around me. You can stop with the trying to silence your critics and position yourself as superior.
If I ask a comics guy their favorite comic artist they aren't giving me back editors names. They will have favorite editors, or even editor artist pairs, but the artist remains distinct from that.
I simply posited that commissioning a piece of work does not make you an artist. Having art generated for you to your taste is not 'making art'. Hiring an interior decorator to decorate my house does not mean I decorated. Ordering off a menu and requesting extra cheese does not make you part chef.
A better blurring for your argument would be the use of session musicians. If I say I love The Beach Boys, how much of what I love is the session musicians' work versus Brian Wilson's? Is he the artist that I enjoy? But that gets back to it, doesn't it. We as humans want to connect art with its creator. Why? Because art is some reflection of something. Art is 'life is a shared experience'. AI 'art' is not part of that shared experience. I want to connect with Brian Wilson. But I don't connect with some music critic who writes about Brian Wilson's music, even though we both connected with the same artistic work, even if I learned about the work through the critic, making my relationship to them just as important (I wouldn't know it without them). There being an artist in the middle improves/transforms it/means something (what it means is what is up for discussion).
A pretty crystal is just as pretty as a piece of art, but it is not a piece of art. AI art might be more like the crystal. It might contain beauty/interest/capture attention. But it's not connecting with someone's creation, with intention. I have a local museum and I love exhibits that a specific curator there has focused on more than ones they didn't touch. But that doesn't make them an artist. AI 'artists' fall into that category.
No, but it's the same genetic fallacy: some digital works aren't art, therefore all digital art is not art. These people were rightly ridiculed.
Suggesting that because some people put no effort into AI Art, that AI art as a category cannot be art is also a silly genetic fallacy.
>If I ask a comics guy their favorite comic artist they aren't giving me back editors names. They will have favorite editors, or even editor artist pairs, but the artist remains distinct from that.
Correct. Because the authorship debate in that space settled in the opposite direction. If Comic Editors succeeded and were treated like film directors, they would have headline billing on comics and they would be a household name. But it went the other way, and instead Editors who try to claim credit for artistic works, even with receipts, get laughed at.
>I simply posited that commissioning a piece of work does not make you an artist.
Right, but the implication there is that is all people using AI generators do.
>Hiring an interior decorator to decorate my house does not mean I decorated.
Right, but if you are giving the interior decorator creative input, like, "No that sucks this should be red" and revising their output hundreds of times, you are actually involved in the decoration process. And if that decorator is just, hanging up exactly what you tell them to, then they might just be a dogsbody and you the interior decorator.
>I have a local museum and I love exhibits that a specific curator there has focused on more than ones they didn't touch. But that doesn't make them an artist. AI 'artists' fall into that category.
Some do. But the vast majority put a lot more effort in than simple curation. I remember seeing people, when Midjourney first became viable, simply generating 12 images with a single prompt and sharing all 12 on Facebook with pages that wanted nothing to do with them. That's not art. But it's also not the done thing anymore.
Trying to convince some tech people about how artistic creation works, and why it's more than just the right amount of "optimization" of bits for rapid results, is about as pointless as trying to make a chimpanzee understand the intricacies of Bach. The reductiveness of some of you is amusing, but also grotesque in the context of what art should mean for human experience.
I don't think you really understood what I was saying, or what you're even talking about. I've got nothing to "gatekeep" and a defense of skill over automated regurgitation in creating things certainly isn't gatekeeping. People can use whatever tools they like, but they should keep in mind what distinguishes knowing how to create something from having it done for you at the metaphorical push of a button.
No, I understand the insults and ad hoc requirements just fine. And I can point you back to the decades and decades of literature about how anyone can be an artist and how anything can be art. The stuff that was openly and readily said until the second people started making art with AI. As for "push of a button", Visarga has already done a decent job of explaining how that's not actually the case. Not that I have any issue with people doing the metaphorical button push either.
If you're too lazy to put effort into learning how to create an art so you can adequately express yourself, why should some technology do all the work for you, and why should anyone want to hear what "you" (ie: the machine) have to say?
This is exactly how we end up with endless slop, which doesn't provide a unique perspective, just a homogenized regurgitation of inputs.
Yeah, and it worked great until industrial agriculture let lots of people eat who had no skill at agriculture. In fact, our entire history as a species is a long history of replacing skill with machines to enable more people to access that skill. If it gives you sad feelings that people without skill can suddenly do more cool things, that's entirely a you problem.
Again, I wholly reject the idea that there's a line between 'tech people' and 'art people'. You can have an interest in both art and tech. You can do both 'traditional art' and AI art. I also reject the idea that AI tools require no skill, that's clearly not the case.
>nature
This can so easily be thrown back at you.
>why should anyone want to hear what "you" (ie: the machine) have to say?
So why are we having this discussion in the first place? Right, hundreds of millions are interested in exploring and creating with AI. You are not fighting against a small contingent who are trying to covet the meaning of "artist" or whatever. No, it's a mass movement of people being creative in a way that you don't like.
• I didn't say there's a line between "tech people" and "art people". Why would there be?
• We're having this discussion because people are trying to equate an auto-amalgamation/auto-generation machine with the artistic process, and in doing so, redefining what "art" means.
• Yes, you can "be creative" with AI, but don't fool yourself-- you're not creating art. I don't call myself a chef because I heated up a microwave dinner.
• The other guy certainly did. And your subsequent reply was an endorsement of his style of gatekeeping, so. I mean, just talk to some of the more active people in AI art. Many of them have been involved in art for decades.
• If throwing paint at a canvas is art (sure, why not?) then so is typing a few words into a 'machine'. Of course many people spend a considerable amount more effort than that. No different than learning Ableton Live or Blender.
I have claves, which are literally two sticks. I've also got a couple egg shakers, a couple tambourines.
Do you have ANY IDEA how hard these things are to play well.
I don't care if haphazard bashing of sticks with intent to make noise counts as 'music'. I do care if this whole line of discussion fundamentally equates any such bashing with, say, Jack Ashford.
I would be surprised if the name meant anything to you, as he's more obscure than he should be: the percussionist and tambourine player for the great days of Motown. Some of you folks don't know why that is special.
Maybe you need to refresh the context - 99.99% of AI-generated music, images, or text is seen/heard only once, by the AI user. It's a private affair. The rest of the world is not invited.
If I write a song about my kid and cat it's funny for me and my wife. I don't expect anyone else to hear or like it. It has value to me because I set the topic. It doesn't even need to be perfect musically to be fun for a few minutes.
You seem to be the one who doesn't understand how special it is if you think good music is so simple that AI can zero shot it.
People are mixing and matching these songs, layering their own vocals, etc., to create novel music. This is barely different from sampling or papier-mâché or making collages.
People made the same reductionist arguments you're making about electronic music in the early days. Or digital art.
Dumping money into a company until desired results is not "building a company". I have nothing against capital, but the hubris of the people investing is insane. /s
Look, sarcasm aside, for you and the many people who agree with you, I would encourage opening your minds a bit. There was a time where even eating food was an intense struggle of intellect, skill, and patience. Now you walk into a building and grab anything you desire in exchange for money.
You can model this as a sort of "manifestation delta." The delta time & effort for acquiring food was once large, now it is small.
This was once true for nearly everything. Many things are now much much easier.
I know it is difficult to cope with, because many held a false belief that the arts were some kind of untouchable holy grail of pure humanness, never to be remotely approached by technology. But here we are, it didn't actually take much to make even that easier. The idea that this was somehow "the thing" that so many pegged their souls to, I would actually call THAT hubris.
Turns out, everyone needs to dig a bit deeper to learn who we really are.
This generative AI stuff is just another phase of a long line of evolution via technology for humanity. It means that more people can get what they want easier. They can go from thought to manifestation faster. This is a good thing.
The artists will still make art, just like blacksmiths still exist, or bow hunters still exist, or all the myriad of "old ways" still exist. They just won't be needed. They will be wanted, but they won't be needed.
The fewer middlemen to creation, the better. And when someone desires a thing created, and they put in the money, compute time, and prompting to do so, then they ARE the creator. Without them, the manifestation would stay in a realm of unrealized dreams. The act itself of shifting idea to reality is the act of creation. It doesn't matter how easy it is or becomes.
Your struggle to create is irrelevant to the energy of creation.
It doesn’t even have to be art. If someone told me they were a chef and cooked some food, but in reality had ordered it, I’d think they were a bit of a moron for equating these things, or for thinking that by giving someone money or a request for something they were a creator, not a consumer.
It may be nice for society that ordering food is possible, but it doesn’t make one a chef to have done so.
In ordering a meal from someone else who makes it, I think that the relationship is rather well defined. One person is asking another person to use their skills to make a meal.
With AI, there is a vision and there is a tool executing it. This has a recursive loop involving articulation, refinement, repetition. It is one person using a tool to get a result. At a minimum, it is characteristically different than your comparison, no?
To add, my original statement was concerning going into a grocery store and buying ingredients. That was once a much more difficult process.
As an aside it reminds me of a food cart I would go to regularly in Portland. Sometimes the chefs would go mushroom foraging and cook a lunch using those fresh mushrooms. It was divine. If we ever reach a time when I can send a robot out to forage for mushrooms and actually survive the meal, I would celebrate that occasion, because it would mean we all made it through some troubling times.
I enjoy this take. Funding something is not the same as creating it. The Medicis were not artists, Michelangelo, Botticelli, Raphael, etc were.
You might not be a creator, but you could make an argument for being an executive producer.
But then, if working with an artist is reduced to talking at a computer, people seem to forget that whatever output they get is equally obtainable by everyone, and therefore immediately uninteresting, unless the art engages the audience only in what could already be described using language rather than through the medium itself. In other words, you might ask for something different, but that ask is all you are expressing; nothing is expressed through the medium, which is the job of the artist you have replaced. It is simply generated to literally match the words. Want to stand out? Well, looks like you’ll have to find somebody to put in the work…
That being said, you can always construct from parts. Building a set of sounds from suno asks and using them like samples doesn’t seem that different from crate digging, and I’d never say Madlib isn’t an artist.
Michelangelo had apprentices and assistants, many of whom did a significant portion of the work. You could model him as the executive artist, directing the vision. Is this so different from prompting? Whose name is attached to all those works?
I will say Michelangelo was particularly controlling and distrusting of assistants, and uniquely did more work than other master artists of the time, but the point remains. The vision has always been the value.
Assuming that 1. food is free and instant to get, and 2. there are infinite possibilities for food - then yes, if you ordered such a food from an infinite catalog you would get the credit.
But if you ordered 100 dishes, iterating between designing your order, tasting, and refining your order, and so on - maybe you would even discover something new that nobody has realized before.
The gen-AI process is a loop, not a prompt->output one step process.
I disagree with the characterization as “absurd” to equate AI to an instrument. As you just said, it is a powerful tool. I would equate basic Suno prompting to a beginner on an instrument, as instruments are tools like anything else. Just because you get music out, it doesn’t mean it is actually “good” any more than if I smash random keys on a piano.
Controlling that flow of generation, re-prompting, adjusting, splicing, etc. to create a unique song that expresses your intention is significantly more work and requires significantly more creativity. The more you understand this “instrument”, the more accurate and efficient you become.
What you’re comparatively suggesting is that if a producer were to grab samples off Splice, slice them and dice them to rearrange them and make a unique song, that they didn’t “actually” make music. That seems like it would be a more absurd position than suggesting AI could be viewed as an instrument.
Tools like Suno make people feel like “their own music” is good and they have accomplished something because they elevate the floor of being bad at a tool (like all technological improvements do). They feel like they have been able to express their creativity and are proud, like a kid showing off a doodle. They share it with their friends, who will listen to it exactly one time in most cases and likely tell them it is “really good” and they “really like it” before never listening again.
That type of AI use is akin to a coloring book, but certainly doesn’t make for “good” music. When a kid shows off their badly colored efforts proudly, should we yell at them they aren’t doing “real art”, that their effort was meaningless, and that they should stop acting proud of such crap until they go to art school and do it “properly”?
> learning suno is no different than learning guitar
It most definitely is different and you’ve proven it with your own post. Guitar takes a long time to get to a place where you can produce the sounds you hear in your head. Suno gives you instant gratification.
Look, if it gives you pleasure to make Suno music then you should do it, but if you think having an AI steal a melody and add it to your songs is the same as creating something, you're kidding yourself. At best you are a lyricist relying on a robocomposer to do the hard part. You could have achieved the same thing years ago by collaborating with a musician, like Bernie Taupin did with Elton John.
There are drawbacks to being a skilled (trained/practiced) musician. You specialize in one instrument, and tend to have your creativity guided by its strengths/weaknesses.
I think that soon, some very accomplished musicians will learn to leverage tools like Suno, but they aren't in the majority yet. We're still in the "vibe-coding" phase of AI music generation.
We saw this happen with CG. When it started, engineers did most of the creating, and we got less-than-stellar results[0].
Then, CG became its own community and vocation, and true artists started to dominate.
Some of the CG art that I see nowadays, is every bit as impressive as the Great Masters.
We'll see that, when AI music generation comes into its own. It's not there, yet.
>Some of the CG art that I see nowadays, is every bit as impressive as the Great Masters.
Really? Except for the minor detail that a great master spent months to years creating one of his works, instead of a literally mindless digital system putting it together (digitally, no pigments here) instantly.
The technology is impressive, sure, but I see nothing artistically impressive or emotionally satisfying about it, given the utter absence of any world and life of creation behind it.
If you're an actual artist, who's taken the time to paint and learn its intricacies, yet you're still just as impressed by an automated CG rendering of a work in Old Master style vs. one really done by a dedicated human hand, then you either hate the thing you learned because something about it frustrated you, or you have no clue about forming qualitative measurements of skill.
Also, "old-fashioned"? This to imply that someone rendering painterly visuals in seconds with AI is some new kind of artist? If so, then no, what they do isn't art to begin with. That at least requires an act of effortful creation.
It might be enlightening to find out a bit about the process of creating CGI; especially 3D scenes. Many works can definitely take over a year.
I spent some time, making CG art, and found it to be very difficult; but that was also back before some of the new tools were available. Apps like Procreate, with Apple Pencil and iPad Pro, are game-changers. They don't remove the need for a trained artist, though.
But really, some of the very best stuff, comes quickly, from skilled hands. Van Gogh used to spit out paintings at a furious pace (and barely made enough to live on. Their value didn't really show, until long after his death).
I fail to see how you're disagreeing with me if you say this, or maybe we're at mixed signals. I'm specifically arguing against being impressed by a visual of some kind that was sludged out automatically by an LLM, my argument isn't against digital art by itself (I know how hard CGI can be, and there's nothing to be dismissed about it because it doesn't directly use physical materials), or against artists who refine their craft to such a point that they can create visual marvels in no time. Both of those require effort. They require a combination of effort with learning, exploring and to some extent also talent I'd say.
Briefly instructing an image model to imitate an Old Master and having it do so in seconds fulfills none of those needs, and at least to me there's nothing impressive about it as soon as I know how it was created (yes, there is a distinction there, even if at first glance it might be hard to tell a photo of a real Old Master from an AI-rendered imitation).
The latter is not art, and the people who churn it out with their LLM of choice are not artists, at least not if that's their only qualification for professing to be such.
Well, I’m still not interested in arguing, so I’m not really “disagreeing,” as I think that we’re probably not really talking about the same thing, but I feel that I do have a fairly valid perspective.
When airbrushing became a thing, “real” artists were aghast. They screeched about how it was too “technical,” and removed the “creativity” from the process. Amateurs would be churning out garbage, dogs and cats would be living together, etc.
In fact, airbrushes sucked (I did quite a bit of it, myself), but they ushered in a new way of visualizing creative thinking. Artists like Roger Dean used them to great effect.
So people wanted what airbrushes gave you, but the tool was so limited, that it frustrated, more than enabled. Some real suckass “artists” definitely churned out a bunch of dross.
Airbrushing became a fairly “mercenary” medium; used primarily by commercial artists. That said, commercial artists have always used the same medium as fine artists. This was a medium that actually started as a commercial one.
Airbrushing is really frustrating and difficult. I feel that, given time, the tools could have evolved, but they were never given the chance.
When CG arrived, it basically knocked airbrushes into a cocked hat. It allowed pretty much the same visual effect, and was just as awkward, but not a whole lot more difficult. It also had serious commercial appeal. People could make money, because it allowed easy rendering, copying, and storage. There was no longer an “original,” but that really only bothered fine artists.
This medium was allowed to mature, and developed UI and refined techniques.
The exact same thing happened with electric guitars, digital recording and engineering, synthesizers, and digital photography. Every one of these tools was decried as “the devil’s right hand,” but became fundamental, once true creatives mastered them, and the tools matured.
“AI” (and we all know that it’s not really “intelligence,” but that’s what everyone calls it, so I will, too. No one likes a pedant) is still in the “larval” stage. The people using it, are still pretty ham-handed and noncreative. That’s going to change.
If you look at Roger Dean’s work, it’s pretty “haphazard.” He mixes mediums, sometimes using their antipathy to each other to produce effects (like mixing water and oil). He cuts out photos, and glues them onto airbrushed backgrounds, etc. He is very much a “modern” creative. Kai Krause is another example. Jimi Hendrix made electric guitars into magical instruments. Ray Kurzweil advanced electronic keyboards, but people like Klaus Schultze, made them into musical instruments. These are folks that are masters of the new tools.
I guarantee that these types of creatives will learn to master the new tools, and will collaborate with engineers, to advance them. I developed digital imaging software, and worked with many talented photographers and retouchers, to refine tools. I know the process.
Of course, commercial applications will have outsized influence, but that’s always the case. Most of the masters were sponsored by patrons, and didn’t have the luxury to “play.” They needed to keep food on the table. That doesn’t make their work any less wonderful.
We’re just at the start of a new revolution. This will reach into almost every creative discipline. New techniques and new tribal knowledge will need to be developed. New artists will become specialists.
Personally, I’m looking forward to what happens, once true creatives start to master the new medium.
> Guitar takes a long time to get to a place where you can produce the sounds you hear in your head. Suno gives you instant gratification.
Wrong. On guitar it takes extremely long before you can fit the sounds in your head into the scale and recognize them; with Suno it is outright impossible.
I would compare Suno to a musician-for-hire. You describe what you want; some time later he sends you the recording; you write clarifications and get a second revision, and so on. Suno is the same musician, except much faster, cheaper, and with poor vocal skills. Everything you can do with Suno today, you could make before, albeit at a much higher price.
The fact people still think this is how these models work is astonishing
Even if that were true, sampling is an artform and is behind one of the most popular and successful genres today (hip hop). So is DJ'ing - or is that also not a skill?
The same puritanism that claimed jazz wasn't music, then rap wasn't music, then EDM wasn't music, blah blah
Gatekeepers of what is and isn't art always end up wrong and crotchety on the other side. It's lame and played out.
I actually make a lot of sample based music, and it’s as much an art as you make it. Downloading a couple of loops from splice and layering them is lame, actually chopping and repurposing samples is not.
I never said Suno wasn’t “art”. The opposite is true. If you want to put your name on something that took no effort or skill and call it art, more power to you. You could do the same in other areas, and lame, low-effort “art” precedes AI by millennia. You are as welcome as anybody to call yourself a creator, however lame that effort may be.
But man the chutzpah of comparing that low effort drivel with people pushing genre boundaries.
Yes, sampling is an artform. We are on the same page here.
But your original comment implies that using Suno would be like sampling.
Therefore I mentioned that you need to properly credit the usage of samples, which Suno does not do; Suno is stealing from real artists.
Hope it is more clear now.
defending the idea of complex, nuanced effort for the sake of coherent creation being a demonstration of skill is gatekeeping?
I'd love to see programmers reactions to having the measure of their work reduced in such a way as more people vibe code past all the technical nonsense.
Live programming music using tools like SuperCollider is/was a (very niche) thing. Someone is on stage with a laptop and, starting from a blank screen that is typically projected for everyone to see, types in code that makes sounds (and sometimes visual effects). A lot of it involves procedurally generated sounds using simple random generators. Live prompting as part of such shows would not seem entirely out of place and someone might figure out how to make that work as a performance?
SuperCollider enthusiast here, I think you missed the "is no different than" part. Working with SuperCollider is very different from playing any instrument live, and I doubt that'll change.
Where playing an instrument means balancing the handling of tempo, rhythm and notes while mastering your human limitations, a tool like SuperCollider lets you just define these bits as reactive variables. The focus in SuperCollider is on audio synthesis and algorithmic composition, that's closer to dynamically stringing a guitar in unique rule-based ways - mastering that means bridging music- and signal-processing theories while balancing your processing resources. Random generators in audio synthesis are mostly used to bring in some human depth to it.
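To make the contrast concrete (in Python rather than SuperCollider, purely as a toy illustration): in this style of music-making a note is a rule evaluated into samples, not a gesture performed in time. All names below are invented for the sketch.

```python
import math

# Not SuperCollider, but a sketch of the same idea: sound defined by
# rule-based parameters rather than physical performance.
SAMPLE_RATE = 44100

def sine_note(freq_hz, dur_s, amp=0.5):
    """Render a sine tone as a list of float samples in [-amp, amp]."""
    n = int(SAMPLE_RATE * dur_s)
    return [amp * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

# Algorithmic composition in miniature: the "melody" is a rule, not a take.
melody = [sine_note(220 * 2 ** (semitone / 12), 0.25)
          for semitone in (0, 4, 7, 12)]  # major arpeggio from A3
```

A real SuperCollider patch would do this with UGens running on the audio server, but the shape of the activity - defining rules and parameters instead of performing - is the same.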
The year is 2027. A 16 year old at a house party pulls out his laptop and asks his friends to gather round. He starts typing “a song about a wonderful wall” and completely original music starts playing. A girl in the corner, hearing the heartfelt melody, starts to fall for the boy.
I think "learning guitar" is different from "learning Suno" because with guitar you have control over what you play. I also love music, and making music, and have no natural musical talent, but I see no interest in generating a song without me deciding every aspect and choosing every note. It's like taking the most interesting and creative part from me.
Personally, I wouldn't be able to get past the fact that these generated stems are basically the same as generated AI images: built from the digital bits of existing tracks/music/recordings that someone else spent time and hard work making and sharing, only to have them unexpectedly hoovered up by these corporations as part of their giant training data set.
> I know it's easy to say generative art is generative swill... but "learning Suno" is no different than "learning guitar".
all the discourse with this remark is quite fascinating as an observer. similar remarks used to be said about electronic music or just use of conventional daw when they were new.
to those who have dedicated years into their craft: one must not mix self-expression from the mechanics of getting there. it is very respectable to dedicate one's life to the analogue way. but if something lets you get there in a different way, allow it.
I’ll agree that you+ai is creating a pleasant sequence of sounds ie music. And I don’t think anyone has the right to say (within reason) what is music or isn’t.
But we might need new vocabulary to differentiate that from the act of learning & using different layers of musical theory + physical ability on an instrument (including tools like supercollider) + your lived experience as a human to produce music.
Maybe some day soon all the songs on the radio and Spotify will be ai generated and hyper personalized and we’ll happily dance to it, but I’ll bet my last dollar that as long as humans exist, they’ll continue grinding away (manually?) at whatever musical instrument of the time.
I see it as like having the answer key to every homework assignment for a course. It's easy to convince yourself that it doesn't hurt learning -- but there's probably a reason the answers aren't given to you. The struggle, the experience of "being stuck", the ability to understand why things don't work -- may be necessary precursors to true understanding. We're seeing pretty discouraging results from people who are learning to "vibe code" without an understanding of how they would write the code themselves.
You may wish that learning Suno is no different than learning guitar, but I think the effects of AI may be a bit pernicious, and lead to a stagnation that takes a while to truly be felt. Nobody can say one way or the other yet. That said, I'm happy you can make music that you enjoy, and that Suno enables you to do it. Such tools are at their best when they're helping people like you.
I guess it’s similar to learning by watching masters on YouTube - I’m convinced that passively watching them causes the illusion in the viewer that they are also capable of the same, but if they were to actually try they miss all the little details and experience that makes their performance possible. Watching a chess GM play, for example, can make you feel like you understand what’s happening but if you don’t actually learn and get experience you’re still going to get beat all the time by people, even beginners, who did. But as long as you never test this, you get to live with the self-satisfaction of having “mastered” something yourself.
Of course, nothing wrong with watching and appreciating a master at work. It’s just when this is sold as the illusion of education passively absorbed through a screen that I think it can be harmful. Or at least a waste of time.
It gets very real very quickly with skateboarding. You can watch all the YouTube and Instagram you want about how to do an Ollie or a kickflip in 30s; now go out and try.
The learning is in the failing; the satisfaction of landing it is in the journey that put you there.
> but "learning Suno" is no different than "learning guitar".
You downplay the training it takes to actually use your body to output the notes. With the guitar your fingers have to "learn" as much as your brain, on a scale that no prompt input will ever match. And I say that as a musician who uses mostly sequencers to compose.
Love the passionate replies!
I think I especially agree with this comment:
>> think that soon, some very accomplished musicians will learn to leverage tools like Suno, but they aren't in the majority yet. We're still in the "vibe-coding" phase of AI music generation. We saw this happen with CG. When it started, engineers did most of the creating, and we got less-than-stellar results[0]. Then, CG became its own community and vocation, and true artists started to dominate.
Hey, it's likely not going to be me, but let's be real - any user of this technology who has gone beyond the "type in a prompt and look i got a silly song about poop" stage will probably agree: someone's going to produce some bangers using this tech. It's inevitable, and if you don't think so, it's likely you haven't done anything more than "low-effort" work on these platforms. "Low-effort" work - which is the majority of AI swill - is going to suck, whether it's AI or not.
And while I have the forum, I do want to make another point. I pay more per month for Suno than for Spotify ($25 vs $9). Suno/Udio etc: do what you need to do to make sure the artists and catalogues are getting compensated... as a user I would pay even more knowing that was settled.
guys I think we're being too hard on this guy, why are we so upset that songeater is now Jimi Hendrix because of Suno? I know I'm jealous, I've been beating on my guitar for decades and I'm still pretty meh, but its because I lack the true creative genius required to type suno.com into my browser, not everyone is cut out to be a literal GUITAR GOD like songeater here. Lets give him the props he deserves for the massive investment of backbreaking labor over the past decades^w years^w weeks^w days^w hours^w halfhourmaybe it took for him to learn pseudo-guitar.
I apologise for this comment of mine that I am replying to; it seems it's sort of an unpopular opinion, but in the spirit of AI, please allow me some personal back propagation with my own low poly blob of neural networkedness, and let me use this useful training to adjust my personal weights (known to low IQ normies as "values") to better fit in with the community here.
The intent appears to be the manipulation of the physical space at the time of capture. A double exposure of your subject and then a second exposure of the dove or other image would be no better than photoshopping the image after
Written poetry all my life, of varying badness. Have never had the ear or the talent for music though and – unfortunately – always felt I wanted to write “songs” not “poems.” Since 2017, I’ve been trying to “set my poems to music” using the machine. Started with my own algos in 2017, got going in earnest in 2020 with openAI’s jukebox; then last year a friend turned me to Suno.
Take the first poem talked about in OP’s article and one of the comments: “humans working hard to prove that they can make art that’s somehow even worse than AI slop.” I see this sort of comment a lot and I’m not saying that’s wrong at all – undoubtedly the vast vast majority of AI “content” is truly “slop.”
But I’ve also believed that genAI could be thought of like an instrument. Most music played on a piano or a synth or a guitar is slop; but it undoubtedly allows for music to be made that would otherwise not exist. I hope the same can be said of Suno (or whatever – hopefully opensourced alternative - follows).
My understanding is that this is a (somewhat) open-source project that does music generation. I haven't read the license to see how permissive it truly is, but being someone who has been involved in this space for a while, I can say that we definitely need more open-source projects. Suno is great but completely walled off.
So yeah YueAI team... if you're really going to keep this project open... don't listen to the haters and keep going.
Shouldn't the "sonic boom" here provide good data as to the existence of dark matter (akin to the Bullet Cluster)? Anyone on HN with a good background care to comment? I don't see anything in the article about it, but I would think it is one of the most significant experimental goals of detecting these sorts of collisions.
My understanding is that, yes, the way matter in a galaxy merger behaves acts as strong evidence for the existence of dark matter and the theory that it's made of something that interacts weakly with normal matter.
I don't know...
It's like claiming that Samsung "enhanced their phone camera abilities" when they replaced zoomed-in moon shots with hi-res images of the moon.
I think that's meaningfully different. If you ask for chess advice, and get chess advice, then your request was fulfilled. If you ask for your photo to be optimized, and they give you a different photo, they haven't fulfilled your request. If GPT was giving Go moves instead of Chess moves, then it might be a better comparison, or maybe generating random moves. The nature of the user's intent is just too different.
It's cheating to the extent that it misrepresents the strength and reasoning ability of the model - to the extent that anyone looking at its chess-playing results would incorrectly infer that this says anything about how good the model is.
The takeaway here is that if you are evaluating different models for your own use case, the only indication of how useful each may be is to test it on your actual use case, and ignore all benchmarks or anything else you may have heard about it.
It represents the reasoning ability of the model to correctly choose and use a tool... which seems more useful than a model that can play chess by itself but keeps playing chess when you need it to do something else.
Where it’ll surprise people is if they don’t realize it’s using an external tool and expect it to be able to find solutions of similar complexity to non-chess problems, or if they don’t realize this was probably a special case added to the program and that this doesn’t mean it’s, like, learned how to go find and use the right tool for a given problem in a general case.
I agree that this is a good way to enhance the utility of these things, though.
It doesn't take much to recognize a sequence of chess moves. A regex could do that.
If what you want is intelligence and reasoning, there is no tool for that - LLMs are as good as it gets for now.
At the end of the day it either works on your use case, or it doesn't. Perhaps it doesn't work out of the box but you can code an agent using tools and duct tape.
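The regex claim above is easy to demonstrate. Here is a rough sketch; real standard algebraic notation (SAN) has more edge cases (annotations, result markers, and so on):

```python
import re

# Rough pattern for one SAN move: castling, or an optional piece letter,
# optional disambiguation, optional capture, destination square, and
# optional promotion/check markers. A sketch, not a full parser.
SAN_MOVE = re.compile(
    r"^(O-O(-O)?|[KQRBN]?[a-h]?[1-8]?x?[a-h][1-8](=[QRBN])?)[+#]?$"
)

def looks_like_move_sequence(tokens):
    """Return True if every token matches the rough SAN pattern."""
    return all(SAN_MOVE.match(t) for t in tokens)
```

Recognizing that a string is a chess game is trivial; playing the game well is the part that needs something beyond pattern matching, which is the parent comment's point.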
Do you really think it's feasible to maintain and execute a set of regexes for every known problem every time you need to reason about something? Welcome to the 1970s AI winter...
Sure, but how do you train a smarter model that can use tools, without first having a less smart model that can use tools? This is just part of the progress. I don't think anyone claims this is the endgame.
I really don't understand what point you are trying to make.
Your original comment about a model that might "keep playing chess" when you want it to do something else makes no sense. This isn't how LLMs work - they don't have a mind of their own, but rather just "go with the flow" and continue whatever prompt you give them.
Tool use is really no different than normal prompting. Tools are internally configured as part of the hidden system prompt. You're basically just telling the model to use a specific tool in specific circumstances, and the model will have been trained to follow instructions, so it does so. This is just the model generating the most expected continuation as normal.
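A minimal sketch of that idea, with both the "model" and the chess engine stubbed out (every name here is invented for illustration; this is not OpenAI's actual mechanism):

```python
def stub_chess_engine(position):
    # Hypothetical engine: always replies with some move.
    return "e2e4"

def stub_model(prompt):
    # Hypothetical model: instructed (via a system prompt) to emit a tool
    # call for chess positions, and to answer everything else directly.
    if "FEN:" in prompt:
        return "TOOL:chess " + prompt.split("FEN:")[1].strip()
    return "plain completion"

def run(prompt):
    """One round of the tool-use loop: generate, detect tool call, dispatch."""
    out = stub_model(prompt)
    if out.startswith("TOOL:chess "):
        return stub_chess_engine(out[len("TOOL:chess "):])
    return out
```

The routing layer is just string matching on the model's own output: nothing about tool use requires the model to "decide" anything beyond continuing the prompt the way it was trained to.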
"Is gpt-3.5-turbo-instruct function calling a chess-playing model instead of generating through the base LLM?"
I'm absolutely certain it is not. gpt-3.5-turbo-instruct is one of OpenAI's least important models (by today's standard) - it exists purely to give people who built software on top of the older completion models something to port their code to (if it doesn't work with instruction tuned models).
I would be stunned if OpenAI had any special-case mechanisms for that model that called out to other systems.
When they have custom mechanisms - like Code Interpreter mode - they tell you about them.
I think it's much more likely that something about instruction tuning / chat interferes with the model's ability to really benefit from its training data when it comes to chess moves.
It should be easy to test for. An LLM playing chess itself tries to predict the most likely continuation of a partial game it is given, which includes (it has been shown) internally estimating the strength of the players to predict equally strong or weak moves.
If the LLM is just pass through to a chess engine, then it more likely to play at the same strength all the time.
It's not clear in the linked article how many moves the LLM was given before being asked to continue, or if these were all grandmaster games. If the LLM still crushes it when asked to continue a half played poor quality game, then that'd be a good indication it's not an LLM making the moves (since it would be smart enough to match the poor quality of play).
LLMs have this unique capability. Yet, every AI company seems hell bent on making them... not have that.
I want the essence of this unique aspect, but better, not this unique aspect diluted with other aspects such as the pure logical perfection of ordinary computer software. I already have that!
The problem with every extant AI company is that they're trying to make finished, integrated products instead of a component.
It's as-if you just wanted a database engine and every database vendor insisted on selling you a shopfront web app that also happens to include a database in there somewhere.
If that's what it does, then it's "cheating" in the sense that people think they're interacting with an LLM, but they're actually interacting with an LLM + chess engine. This could give the impression that LLM's are able to generalize to a much broader extent than they actually are – while it's actually all just a special-purpose hack. A bit like putting invisible guard rails on some popular difficult test road for self-driving cars – it might lead you to think that it's able to drive that well on other difficult roads.
Calling out to some chess-playing function would be a deviation from the pure LLM paradigm. As a medium-level chess player I have walked through some of the LLM victories (gpt-3.5-turbo-instruct); I find it is not very good at winning by mate - it misses several chances at forced mate. But forced mate is what chess engines are good at - it can be calculated by exhaustive search of the valid moves from a given board position.
So I'm arguing that it doesn't call out - it would have gotten better advice if it did.
But I remain amazed that OP does not report any illegal moves made by any of the LLMs. Assuming the training material includes introductory texts on chess playing and a lot of chess games in textual notation (e.g. PGN), I would expect at least occasional illegal moves, since the rules are defined in terms of board positions - and board positions are a non-trivial function of the set of moves made in a game. Does an LLM silently perform a transformation of the set of moves into a board position? Can LLMs, during training, read and understand the board-position diagrams in chess books?
> But I remain amazed that OP does not report any illegal moves made by any of the LLMs.
They did (but not enough detail to know how much of an impact it had):
> For the open models I manually generated the set of legal moves and then used grammars to constrain the models, so they always generated legal moves. Since OpenAI is lame and doesn’t support full grammars, for the closed (OpenAI) models I tried generating up to 10 times and if it still couldn’t come up with a legal move, I just chose one randomly.
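That fallback is simple to express. A sketch of the closed-model path described in the quote, with the model stubbed as a callable (hypothetical names throughout):

```python
import random

def choose_move(generate, legal_moves, attempts=10):
    """Sample up to `attempts` completions from `generate` (a stand-in for
    the model); return the first legal move, or a uniformly random legal
    move if none of the completions is legal."""
    for _ in range(attempts):
        candidate = generate()
        if candidate in legal_moves:
            return candidate
    return random.choice(legal_moves)
```

Note the asymmetry this creates in the comparison: the open models were hard-constrained to legal moves via grammars, while the OpenAI models occasionally got a purely random move injected, which adds noise to their measured strength.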
I don't think it is, since OpenAI never mentions that anywhere AFAIK. That would be a really niche feature to include and then drop instead of building on more.
Helping that along is that it's an obvious scenario to optimize, for all kinds of reasons. One of them being that it is a fairly good "middle of the road" test for integrating with such systems; not as trivial as "Let's feed '1 + 1' to a calculator" and nowhere near as complicated as "let's simulate an entire web page and pretend to click on a thing" or something.
Why would they only incorporate a chess engine into (seemingly) exactly one very old, dated model? The author tests o1-mini and gpt-4o. They both fail at chess.
Because they decided it wasn't worth the effort. I can point to any number of similar situations over the many years I've been working on things. Bullet-point features that aren't pulling their weight or are no longer attracting the hype often don't transition upgrades.
A common myth that people have is that these companies have so much money they can do everything, and then they're mystified by things like bugs in Apple or Microsoft projects that survive for years. But from any given codebase, the space of "things we could do next" is exponential. That defeats any amount of money. If they're considering porting their bespoke chess engine code up to the next model, which absolutely requires non-trivial testing and may require non-trivial work, even for the richest companies in the world it is still an opportunity cost and they may not choose to spend their time there.
I'm not saying this is the situation for sure; I'm saying that this explanation is sufficient that I'm not going "oh my gosh this situation just isn't possible". It's definitely completely possible and believable.
Based on looking at the games at the end of the post, it seems unlikely. Both sides play extremely poorly — gpt-instruct is just slightly less bad — and I don't see any reasonable engine outputting those moves.
If the goal is to produce a LLM-like interface that generates correct output, then sure, it's not cheating..... but is it really a data-driven LLM at that point? If the LLM amounts to a chat-frontend that calls a host of human-prepared programs or draws from human-prepared databases, etc, it's starting to sound a lot more like Wolfram Alpha v2 than a LLM, and strikes me as walking away from AGI rather than toward it
I would note that "hybrids" in China (where plug-in hybrids have grown to 16% of market share, up 700bps y/y) are a fundamentally different architecture than hybrids in the West. In China (see Li Auto [1] for example), hybrids are battery-electric vehicles (i.e. no gearbox, fully electric motor) with a small gasoline generator and tank to recharge the battery. This is the "best of both worlds": you get the electric motor, which is much more efficient and cheaper than an ICE drivetrain, while the gasoline generator is tuned purely for efficiency (~44% conversion to electric, vs the mid-30s for an ICE motor), so the net efficiency is far superior to a Prius-style plug-in. These are termed Extended-Range Electric Vehicles ("EREV"s), a type of plug-in hybrid, since you can charge the battery by plugging in or by filling the gas tank.
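A back-of-the-envelope calculation shows why the generator path can still win. The ~44% and mid-30s figures come from the comment above; the ~90% battery/inverter/motor path efficiency is my own assumed round number, not from the comment:

```python
# Rough fuel-to-wheels efficiency comparison for an EREV vs a
# conventional ICE drivetrain. Illustrative numbers only.

GENERATOR_EFF = 0.44       # generator tuned to one optimal operating point
ELECTRIC_PATH_EFF = 0.90   # assumed battery + inverter + motor losses (my assumption)
ICE_DRIVETRAIN_EFF = 0.34  # "mid-30s" thermal efficiency for a conventional ICE

# Fuel energy must pass through the generator AND the electric path.
erev_net = GENERATOR_EFF * ELECTRIC_PATH_EFF

print(f"EREV net: {erev_net:.1%} vs ICE: {ICE_DRIVETRAIN_EFF:.1%}")
```

Even after the extra conversion step, ~0.44 × 0.90 ≈ 39.6% beats a mid-30s ICE, before counting the electric motor's regenerative braking and low-load advantages.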
Really surprised we haven't seen these EREVs in the West, although Hyundai is supposed to launch one in the US in 2026. Could be a game-changer when that happens...
Yes! The difference is that current-gen EREVs are in a different league of performance, given the advances in electric powertrains and software. Think the difference between a Model 3 and a Chevy Bolt...
While you may have meant that as a criticism, I don't think that is an inaccurate description of the process and do hope my post describes it as such. Yes sampling from the latent-space is an exercise in curating and harnessing randomness.
>> ...the mental justifications in the blog post are painful. I think you may find musicians bristle at your claim that all art is circular,
Yes, I'm sure many/most do. This is not a method of "making music" that could have been contemplated 5 years ago (harnessing randomness = learning an instrument for 10+ years = wtf!!!??). But it exists today, and there are a lot of folks like me who are enabled by it. Some who come along (not me!) will be far more talented, and when this tech is shown to enable their talent, I think the bristling will lessen.
>> [various issues about copyright]
I write my own lyrics from my meat-brain, so I may own the copyright to the words (maybe?). But I've chosen to release the words out into the world with no expectation of keeping that. And yes, I understand I do not own the copyright to the songs/music/etc. (I have read Suno's terms and the recent Udio/UMG deals, etc.). In any case, I don't much care about that, nor am I looking to make $$$ off this. I wrote (words) for a long time... and can now put them to music. If there are any others of you out there who would like to do that, I would suggest you try these paths as well.
Just don't push the button only 1x... do it 5,000x! It's the effort and vision that keep you from "slop" - and maybe you'll be the one who makes it great fucking art!