I'm trying to learn music production with a DAW, but sometimes I wonder if I'm wasting my time. Part of my reason for trying this was reading about how creative endeavors can be therapeutic (I'm dealing with burnout/depression/CPTSD).
I'm at the stage where sometimes I make something that sounds good (to me) but I know it requires work (in the "not fun" sense) to finish it and even then, it will likely never be appreciated by anyone but myself.
Which isn't a problem if the process itself is joyful, but I have to admit I've always struggled to enjoy anything that doesn't involve other people in some way (shared goal or approval of some form).
None of these problems are "new", but I feel like AI is making this question of "why do it" or "what is worth doing" even more urgent. Kind of wondering how others are affected by all this, if at all.
You're not wasting your time, my friend. But you've got to be very certain and honest about why you want to learn it.
If your goal is being heard and appreciated, well, you better reconsider.
If you're doing it for your own pleasure and pure love of art, absolutely do go on, without any expectations. It may or may not take off, but the samurai must not care.
Can't agree with this more. I also started learning guitar and producing music very recently. I have no interest in getting heard and appreciated (on most days, at least).
It has been a tremendously rewarding journey to create new music and see myself improve. 10/10 would do again.
I 100% agree with this and have found it to entirely be my own drive for learning and creating.
For me it is beyond trying to make money or become famous, it is simply to enjoy the journey and the creativity that comes with creating music.
> For me it is beyond trying to make money or become famous,
To clarify, when I speak of "approval", I'm not imagining a successful career or financial success. It's much more basic, i.e. having a few people tell me they genuinely like something I created would do that.
> it is simply to enjoy the journey and the creativity that comes with creating music.
It's unfortunately not simple for me (again, context of long term burnout / depression etc). If I only go by enjoyment, I will watch TV and maybe read and go on bike rides until the end of my days. But that is not fulfilling in the long term. I have a creative drive, but it's rather intermittent and not enough to consistently want to do the work involved. I'm trying to nurture it.
> the samurai must not care
I'd definitely recommend OP explore the modern warrior philosophy drawing from bushido.
Some people just never find what that thing is for them. And usually you find those things by doing them the hard way while you suck. And then the reward is that people see what you do and recognize the work you put in. But if suddenly every person with a prompt does the exact same thing with zero effort, it does take away from the joy of doing it. At least if the joy of doing it is tied to liking to do "hard things", or liking to think of oneself as someone who "does hard things". And I'd say that includes a lot of people and a lot of activities.
I bet a lot of accountants in the old days were really good at basic math, and proud of being fast and accurate. Now there are calculators, and the number of people who work on mental math just for the love of the game is probably tiny compared to when it was a core skill of many more people's jobs.
> Part of my reason for trying this was reading how creative endeavors can be therapeutic (I'm dealing with burnout/depression/cptsd).
This is the reason why a lot of us make music. Writing orchestral pieces is my own meditation. I don't share most of them, and replacing them with AI would defeat the purpose.
Please keep learning it! The world needs more musicians, even if we never hear them.
I have two answers for you:
1. All that AI really does is a (partially) randomized exploration of the space that has been spanned by existing music. AI creativity, as far as it can be said to exist, is limited by this. You, on the other hand, are human and not bound by any of these limitations. You are free to explore wild things that no AI can do. Just as a completely random example, you could go out, record noises from your environment (even if it's just with your smartphone), grab interesting parts, chop them up, process them and turn them into unique new instruments. Bang on random stuff that has a nice ring to it. Record background hums, apply filters and envelopes to them, etc. And there are so many other ways to produce unique creations.
2. Most importantly, music is a form of human expression. It is able to capture the human condition in a unique way. As a human, you can express these things genuinely through your own emotions, experiences, memories etc. AI systems can only produce hollow facsimiles of this. Regardless of whether you are conscious about it, every piece of music that you create is a reflection of you: your thoughts, your emotions, your process. And that imparts the true value on your creations.
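The chopping workflow in (1) can even be automated a little. Here's a minimal, hypothetical amplitude-gate slicer in Python; the function name and thresholds are illustrative, and it assumes a mono float signal already loaded into memory:

```python
import numpy as np

def chop_samples(signal, sr, threshold=0.1, min_len=0.05):
    """Split a mono recording into 'hits': regions where the
    absolute amplitude rises above `threshold`.

    signal  -- 1-D float array, roughly in [-1, 1]
    sr      -- sample rate in Hz
    min_len -- discard slices shorter than this many seconds
    """
    loud = np.abs(signal) > threshold       # boolean mask of loud samples
    edges = np.diff(loud.astype(int))       # +1 marks an onset, -1 an offset
    onsets = np.where(edges == 1)[0] + 1
    offsets = np.where(edges == -1)[0] + 1
    if loud[0]:                             # signal starts loud
        onsets = np.insert(onsets, 0, 0)
    if loud[-1]:                            # signal ends loud
        offsets = np.append(offsets, len(signal))
    min_samples = int(min_len * sr)
    return [signal[a:b] for a, b in zip(onsets, offsets)
            if b - a >= min_samples]
```

Each returned slice is a candidate one-shot you could load into a sampler; in practice you'd add a little pre/post padding and a fade so the cuts don't click.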
I'd wager most people who make music are making it for the sheer joy of expression. Like .001% of people who make music get any kind of meaningful monetary return on it, and I think anyone who goes into it looking for monetary return is doing it for the wrong reasons. In my view, AI changes nothing where it matters in music.
Maybe you could see if there's someone who finds the "not fun" part fun and you could collaborate with them. That would solve two problems at the same time.
Either way, I strongly encourage you to keep using a DAW if that brings you joy. Using AI to create art is a different skill set, just as using acoustic instruments is a different skill set from either. Each option appeals to different people to a different degree, and you should just do what brings you the most joy.
You have the answer to the "why do it" in the first part of "why you are doing it". Just because something may be created at the click of the button doesn't mean it fulfills the goals you are looking for. People knit even though there are machines that do that for you. You are doing it for you.
don't sell your work short. it has value because it comes from you, and struggling to finish is unbelievably common in the creation process - not to mention frustration and a lack of joy.
if you're creating because you feel a drive to create, you are making art and that has intrinsic value to yourself and others. if however you are performing the act of musical creation as a means to an end, what you are doing may be better considered work and not art. the work of others can also be appreciated but it is different.
keep at it though. you are asking good questions and unlike many you are also personally engaging with them.
I have done this with music, writing, and many other things. AI doesn’t make any of these things less enjoyable for me because the process of creation itself is the part that I enjoy.
I have a very low bar for what I consider to be a successful creation: it just needs to be enjoyable for myself in the future. Anyone else who happens to enjoy the content I make is a bonus. I have several songs on SoundCloud that I have produced in the past and I still enjoy listening to them.
You are clearly just at the beginning of your musical journey. I am happy for you. Yes, music for me too makes no sense without other people. This means, I suggest, that you must go out and find other people to collaborate with. The more you do it the more you realize just how many people out there are in the same boat as you. And once you find that right person or group, it’s like nothing else. And let’s be clear, this will take you far far outside your normal social circle. The type of people who like the same music as you may be completely different in every other way. It is important to actively seek out the right people and along that journey, define exactly what that person is, as well as who you are. This is the thing I care most about and yes I hope that more tools, AI or not come out to reduce that work that you have to do to make something polished, so everyone can focus on being creative.
I love making music, and got into it as a venue to be away from the computer. I still do post-production in Ableton, but everything else happens with gear not even connected to a computer. I've tried to make music with a DAW, but it feels so sterile and boring, compared to actually using hardware to make it.
Maybe get a second-hand Novation Circuit to start with, or some similar "groovebox" that lets you make songs on one device, and see if you actually still do enjoy making music, yet haven't found the right process for you yet.
I don't think you're wasting your time, as long as you're having fun, regardless of what happens in the rest of the world. Sure, AI could probably make "better" (by some definition of "better") music than me, but AI couldn't make my friends smile at me as I play them my music I've made, that's quite literally priceless.
> AI couldn't make my friends smile at me as I play them my music I've made, that's quite literally priceless.
Can I ask how you share music with friends? I guess this is part of my problem, I don't really have anyone I could share with or collaborate with. The few people in my life don't listen to the type of music I like.
I ask them to come over, then I ask them "Hey, mind if I play this for you and you tell me what you think?" basically. Alternatively, send it as a .mp3 via Whatsapp/Telegram and ask for feedback, but that's almost never as fun or useful.
Best way to meet like-minded people is to go to music events where those people are, always a ton of music makers around those, usually also by themselves, sometimes in the back or on the side to the speakers. Most people in such events are OK with being approached by strangers :)
I'm having fun building elaborate software that meets my needs precisely and nobody else's. I mean, maybe it would meet others' needs, but productizing it would take away from the fun and learning I get from building it, and would also reduce its utility for me personally.
Same applies to any creative hobby. Do it for yourself. I guess you can still share to your social circle. Others can still appreciate it.
I'm hopeful that in the near future, once AI has saturated as much of everything as it can, it will actually become even more worthwhile to do things yourself. At least for me, the only reason to experience art in any form is that there was human intent behind it. That makes human-made work more valuable compared to the flood of empty AI content.
It's not a waste of time. Every time some new thing comes out and becomes popular, everyone everywhere says everything must be that thing because it's the future. In America, everyone wanted pure white sliced bread. People wanted frozen TV dinners. People wanted chain fast food restaurants. People wanted no effort reality TV. People wanted endless superhero movies.
Now people want actual food and they want stuff made with human hands and they want to know what's in it. People want TV shows with a proper story. People are beyond done with cookie cutter superhero movies.
The slop wave is going to pass. AI can make stuff that sounds super polished and perfect, but people will want the rough and crude touch of something hand made. They'll want to see videos of musicians showing behind the scenes of how they made something. They'll want to go and see a musician perform. Interest in 100% AI generated music will fade into the background and it'll be relegated to soulless Muzak used for ambiance in soulless chain restaurants too cheap to pay for actual music and too afraid to play any songs that might offend or annoy someone.
It's worth doing. Those "no AI" mixes on YouTube are doing great, though the vast majority of people are clueless and will happily digest any slop.
Create for yourself, and for those that seek the human effort and passion. There's an increasing number of us.
I'm the biggest doomer on this site, yet I'm certain human art will become even more valuable, and appreciated, than it has ever been before in history. Just don't expect to make billions out of it, or to reach out to the masses that are quite content with industrial-scale mediocrity.
The not-fun work isn’t on the song, it’s on you. Improving the song is a byproduct. This only really becomes apparent over time but you’ll realise you were working on yourself all along.
It depends what you want to get out of it, and what you think art itself really is.
If it's nothing but an end product, that needs to fit a specific aesthetic, with a specific sound, then I probably agree. AI is making that "pointless" in a way.
Almost everyone I know who's been an artist for years though, has come to a similar realization: What you set out to create, and what it turns into through the process of creating it are different things. The meaning, truly is found along the way.
You can always be better; there's always more to learn. Nothing is ever truly perfect, or "complete".
If you write harmony, there's always a different way it could be written, that might fit better, or be more interesting. If you do sound design, whether that's with getting different guitar tones, synth programming, unique recording techniques, there's always more to learn, or a different way to approach it.
If the only point is an end result, then AI can deliver a simulacrum of that.
For everyone I know that loves music, or working with DAWs, the end result is an ever shifting target as you learn more, and understand music in a different way.
Ultimately, there are no shortcuts to making something new, because the practice of trying to make things is what results in what your art becomes. Tools and technology can shape what that thing ends up being, but they (traditionally) don't replace the process of creating it, and the feedback loop between who you are and the decisions you make along the way.
Stripping all of that out, and jumping to a "finished" product, is, well very product focused, but to me completely devoid of art or musicianship.
Some people seem to compare this to sampling, but anyone who's ever actually worked with sampling in a creative way will realize how hollow that comparison is. Almost all good sampling still requires a good deal of active feedback, between the person working with it and the way THEY hear what's going on.
Remove the person from that loop, replace the decisions with a general vague notion, and you end up with something that sounds "like" music, but that feedback loop is broken.
I see the same thing with all the AI UI design that's coming out. It's all generally quite competent, and exactly the same. Great for a business tool, where maybe the velocity and an acceptable MVP is the only point, but terrible for actual design and novel thought.
TL;DR: Why do it? Because you want to, and because with enough time engaging with something you'll change, just as it does, and the result isn't something you could have ever predicted when you started. It changes you, and that's the point. Just like learning an instrument, or learning to code. It's not purely about the produced result, and that very result is fundamentally changed by you actively engaging with whatever the medium is.
> Stripping all of that out, and jumping to a "finished" product, is, well very product focused, but to me completely devoid of art or musicianship.
This hits very close to the philosophical core of the AI debacle.
All hardcore fans of AI just want things done. The process is of no interest to them.
This is truly an eschatological problem of desire. Consider:
Some people want to grab their result, attain satiation, have orgasm, and die, right now.
Others would much rather enjoy the process, the meal itself, indulge in a gentle act of love in tune with the partner, and just keep on living their lives, continuously.
Thanks, I like this answer. I think part of my problem is more general, a struggle to enjoy something when I can tell I'm not good at it. It's kind of a circular problem, I will need to spend more time on it to get better and I need to conjure confidence that I could do so out of the ether.
I have experienced the process you're talking about, although to some degree I feel it's symptomatic of a lack of skill. I start out with some kind of inspiration in mind, but end up with a compromise between what I can do and what sounds good when I fiddle around with things. Part of me feels dissatisfaction that I don't know which knobs to turn to get what I want, but I suppose that's just the normal learning process (albeit less structured than those I have gone through in the past, which is its own obstacle sometimes).
I’ve also struggled with the “not enjoying something because I suck at it” problem, and it’s a tough one. The answer is to remove expectations, but much easier said than done.
That said, I wonder if doing it with other people who suck would help. I started playing ice hockey as an adult, and the thing that got me over the initial hump of being completely useless was doing lessons with other newbies in my exact shoes (or skates) rather than trying to go right to full speed games.
I've been working hard at this over at SubmitHub, developing a way to detect AI songs: https://www.submithub.com/ai-song-checker
These days roughly 20% of the songs coming through our platform for promotion are AI-generated. Roughly 75% of them are honest and declare their AI usage - but another 25% try to hide it. Some of them are actually writing scripts to "clean" their audio so that it can bypass detection.
Do you have any idea what percentage of musicians use AI to create the song and then also create the sheet music so they can play it themselves? That seems like a decent workflow: use AI to get the song right, then record yourself playing it with your own creative tweaks. That's kind of how I do AI-assisted coding.
This is an aside, but thank you for doing this work! As a musician who plays real instruments and submits real songs to Submithub, it's nice to know that hard work is going into validation and prevention of scammers passing off AI as their own talent. Keep fighting the good fight.
"AI detectors" are fun like horoscopes are fun, until they flag your music as AI generated, and distribution channels blacklist you and your label sues you. On the bright side, you can sue the creator of the AI detector in return.
I've had my digital art flagged a few times for various reasons (automatic copyright-infringement and NSFW filters), so this is nothing new; in one case the flagged artwork blocked the upload of an artist's songs. The only requirement is a reasonable appeal process. In all cases we got an automated approval after appeal, but it can introduce an untimely delay.
Honestly, I hope an AI filter would be much better in terms of false positives than the aforementioned ones, if only because it should be easier via statistical methods.
The only reason you're saying that is because you haven't tried to build such a detector yourself. It's not like text, where it's impossible to tell reliably whether something is AI-generated; from a technical perspective it's trivial to detect anything coming straight out of a Suno/Udio prompt.
Nobody open sourced their detection algorithm as that would just trigger a cat-and-mouse game between Suno/Udio and a detection platform (and Suno/Udio have way more VC money than you do), but plenty are being sold as a service and work very reliably.
Not sure what algorithm Deezer is using, but Benn Jordan is a fairly tech-savvy musician who talks about ways to identify AI-generated music by looking for compression artifacts inherited from the training data.
This is apparently how Deezer is doing it.
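The artifact Benn Jordan describes can be sketched crudely: lossy codecs shelve off energy above roughly 16 kHz, and models trained on lossy audio inherit that shelf, so a nominally full-bandwidth file with a near-empty top band is a red flag. A toy heuristic (not Deezer's actual method; the function name and cutoff are made up for illustration):

```python
import numpy as np

def highband_energy_ratio(signal, sr, cutoff_hz=16000):
    """Fraction of total spectral energy above `cutoff_hz`.

    Real recordings at 44.1 kHz usually carry some energy up to
    Nyquist; audio that passed through (or was trained on) a lossy
    codec often has almost none above ~16 kHz.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)     # bin frequencies
    total = spectrum.sum()
    if total == 0:
        return 0.0
    return spectrum[freqs >= cutoff_hz].sum() / total
```

For example, full-band white noise at 44.1 kHz yields a substantial ratio, while a 1 kHz sine yields essentially zero. A real detector would work frame-by-frame and combine many such features, but the principle is the same.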
Most of the videos uploaded to YouTube are worthless.
AI simplifies the creation, doesn't mean it's good and will be listened to. And if it will, then what's the problem?
You can talk about ethics, IP, etc. but we're not even there yet.
I dunno; there have always been shit videos on YouTube, obviously, but there used to be a sort of natural filter: videos with nice transitions, decent narration, and dialog that was more or less grammatically correct, which meant I would mostly watch videos I enjoyed.
Now that AI has cargo-culted these traits I'm getting a lot of recommendations of videos that will initially seem "ok", and then I realize after about a minute that the narration will have some weirdness, and the script will have a lot of the typical ChatGPT "tells", and of course the video comes off as pretty low effort after that.
My YouTube recommendations have become increasingly useless, which honestly might be a good thing because it's made it so that I have less desire to use YouTube.
The weirdness is creeping into regular YouTube content too. For example, I like to watch Ryan Hall's stream during extreme weather (tornado season in the US). In his forecast videos he has to start with something weird to prove to the audience they're not watching a fake AI-generated channel, like eating a banana or apple while talking and waving the fruit around. It was very strange until I realized what he was doing. He also started wearing a suit, which is very out of character for him; that must also confuse AI trained on his previous videos.
> AI simplifies the creation, doesn't mean it's good and will be listened to. And if it will, then what's the problem?
From this attitude you might as well get your entertainment from spam or ads.
A lot of gen AI is essentially a pollution machine creating digital single-use plastics. Whoever can identify and sift it for value will be the after-AI heroes.
I discovered a new band some weeks ago (Hexxenmind) through Spotify. Really liked them, then checked concert dates only to find out it’s AI generated.
Honestly couldn’t tell in the moment but now that I know it’s generated it somehow feels “cheaper” and I dislike listening to them.
AI creation kills cultural sharing.
People who create AI music are largely not sharing it with others for any reason other than to create a revenue stream. They are also not consuming new AI music to be able to develop influences and synthesize new ideas. The system builds brick walls where there was once osmosis.
How can art evolve under these conditions?
Why are some crafts more sacred than others?
>They are also not consuming new AI music to be able to develop influences and synthesize new ideas.
Even if not, they're most definitely listening to other music that influences them. If you have proof that such a producer listens to zero music, feel free to share it.
That is theoretically how one would think it would play out but that’s not what happens in reality. Instead it becomes like blog spam where it becomes impossible to actually find what you’re looking for because you’re wading through so much crap you don’t want.
Also, a lot of us value the fact that music is made by a person. Digital tools have been around for a long time and people have bickered about that, but ultimately they still require a person with some knowledge to sit down and actually produce the music, to do the thing. Writing prompts until you get something interesting can be fine, but what people are doing is carpet bombing us with whatever nonsense comes out because they have a financial incentive to do so.
I have plenty of experiments back when I did more digital music where I would mess with frequency modulators and such until I just found something interesting. I don’t see the harm in activities like that. But that’s not really what’s happening here. It’s deliberately lazy, corner cutting work to spam music platforms for profit. Yes there is a gray area between these two scenarios but that gray area isn’t the problem.
Honestly I think the thing that most humans appreciate is effort. Using AI tools is not inherently "bad", but these very-literally mass produced AI songs are almost by definition low-effort and as a result pretty bland and unlikeable.
Digital music has always been fine to me, as long as the song being produced feels like it took a human some amount of effort.
This is a much more concise and effective way of communicating my thoughts ha
Yeah I agree with that nuance, as I personally enjoy making AI covers of songs I like in genres that I can't produce myself (old vintage blues covers of 80s new wave songs if you must know). It's a fair amount of work prompting and curating (and editing in some cases). I think they are cool and have shared a few, but they do tend to get lumped in with "ai slop" and some people take offense.
I think a lot of people assume that problems like this are fixed-size; that by making getting a song easier, that's the end of the line.
In my mind the better mindset is to think that the problems are not fixed size, and instead these tools can allow for bigger and cooler projects, and/or projects that wouldn't be possible (or at least would be infeasible) without some kind of technological assistance.
AI tools can be used to create slop that is either "bad" or extremely bland at an effectively-infinite speed. It could also be used to make some really cool and interesting stuff if a person is really willing to spend time and effort to make it cool. Usually this requires more than just "prompting" though.
So we'll be going back to publishers as curators. Good for the publishers, I guess.
On a similar note I recently deleted a whole bunch of automated tests because if the AI is going to write most of the code then I should test it to make sure it's good! This won't work for all projects, but for my indie games it's a good idea.
> I recently deleted a whole bunch of automated tests because if the AI is going to write most of the code then I should test it to make sure it's good!
??
You say you deleted the tests, because you "should test it"? The logic seems inconsistent.
Sanity checking LLM-generated code with LLM-generated automated tests is low-cost and high-yield because LLMs are really good at writing tests.
I think LLMs are really bad at writing tests. In the good old days you invested in your test code to be structured and understandable. Now we all just say "test this thing you just generated".
I shipped a really embarrassing off-by-one error recently because some polygon representations repeat their first vertex at the end as a closing sentinel (WKT and KML do this). When I checked the "tests", there was a generated test asserting that a square has 5 vertices.
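For the curious, the representation looks like this. A sketch of a vertex count that respects the closing sentinel (hypothetical helper, not the actual code that shipped):

```python
def ring_vertex_count(coords):
    """Distinct vertices in a ring that may repeat its first
    coordinate at the end as a closing sentinel (WKT/KML style)."""
    if len(coords) > 1 and coords[0] == coords[-1]:
        return len(coords) - 1   # drop the closing sentinel
    return len(coords)

# A WKT-style square: 5 coordinate pairs on the wire, 4 real vertices.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
```

The generated test had baked the 5-pair wire format into the assertion, which is exactly the kind of "inputs equal outputs" test an LLM happily writes without understanding the representation.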
> ...because LLMs are really good at writing tests.
No, they're absolutely shit at writing tests. Writing tests is mostly about risk and threat analysis, which LLMs can't do.
(This is why LLMs write "tests" that check if inputs are equal to outputs or flip `==` to `!=`, etc.)
As a user I wouldn't mind, as long as it's attributed and I can skip it.
It pisses me off on YouTube: it's really hard to find something genuine in the sea of the AI-written, AI-subbed, AI-generated, and AI-published. It's a scourge not because it's there, but because the channels are lying about it AND because 99.99999% of what I've encountered isn't worth the waste heat of processing "publish 100 catchy videos about current affairs".
this seems like a pattern seen across industries when it comes to AI
even more consolidation and lock in
Why? HN isn’t “curating” the wave of AI-written tech article slop. Unclear if they should, readers here love it!
Hard to believe these models won’t get better and better at producing music that humans want to listen to.
The problem has never been that AI music doesn't sound good.
From the press release, it's not all that clear what Deezer is doing about it. 44% of uploads getting less than 1% of non-fraudulent streams seems like a pretty strong reason to outright ban AI generated submissions.
For the non-fraudulent listens, I'm very curious how many of these are part of auto-generated playlists. Are people just being served this music as part of a feed, or are they actually seeking it out? I'd be very surprised if it was the latter.
Tangent:
I assume this “AI-generated” music is created the same way an LLM generates text: use samples from a corpus strung together into a new [derivative] output.
But it seems plausible that algorithmic generation can be used at any stage of the process. How much disclosure do we (listeners) require? At what point is it unacceptable “AI-generated” music?
The answers are going to be subjective. And human. And dealing with this, I think, is going to take a direction like the “typewriters in college” headline from a few days ago - human involvement, low automation … things that don’t scale.
> use samples from a corpus strung together into a new [derivative] output.
That’s kind of how the music industry produces music these days. There are a few song writers that write for most artists, music producers who sample other music to string together songs for most artists etc. That’s why most music sounds the same and why AI generated music can be indistinguishable from mainstream music.
I mean, it's how Mozart's musical dice game worked, too. This is just much quicker and more comprehensive.
My understanding is music generation is more like Stable Diffusion. It generates a spectrogram as an image, then turns it into an audio file.
They do use diffusion models, but I don't think they would make a detour via images. They can just generate audio directly with audio diffusion rather than image diffusion.
There technically was one experiment early on to trick Stable Diffusion into generating spectrograms that could be converted into audio. And, it worked surprisingly well.
https://web.archive.org/web/20230314190913/https://www.riffu...
https://huggingface.co/riffusion/riffusion-model-v1
But, I'd expect everything in the past 3 years to diffuse the audio waveform directly.
I wonder if this will lead to a sort of "open sourcing" of music, where the reputation of what one produces will be improved by releasing the raw DAW files/tracks/etc. Even if AI is used to generate the constituent parts of a manually-assembled track, it would still demonstrate to listeners that there was significant human involvement in the process.
Touring, merch, etc will also serve as good "proof of give-a-shit".
Youtube got hit by massive downfall in quality by this as well. It's absurd.
Similar stats for podcasts - https://www.listennotes.com/podcast-stats/#growth
Deez what?
I wonder how much of this even matters. Sounds like it doesn't (aside from taking up space on Deezer's drives).
> The consumption of AI-generated music on the platform is still very low, at 1-3% of total streams, and 85% of these streams are detected as fraudulent and demonetized by the company.
Even pre-AI, music has always been a winners-take-most business. Per an article from 2022, the vast majority of artists have fewer than 50 monthly listeners[0], which I suspect is far lower now due to the flood of AI.
Not sure about Deezer, but for Spotify there is some kind of minimum to get you into any algorithmic rotation. People try to game this with bots, i.e. botted streams, but the problem with bots is that the accounts are bots, so the recommendations just become music for other bots, hence the part where 85% of the streams are botted. So it doesn't actually work, and you have to rely on old-fashioned promotion to get into any algorithmic playlists.
So 44% of uploads being AI-generated sounds bad, but it's extremely unlikely anyone will ever encounter them naturally, the same way that people don't naturally discover random, non-AI artists with 10 monthly listeners and tracks with less than 1000 plays. This isn't a defense of AI music slop, by the way; it's more pointing out that the "making a song" part only takes you about 20% of the way to becoming an artist people want to listen to. A harsh lesson our friends in /r/SunoAI are learning.
[0] https://www.musicbusinessworldwide.com/over-75-of-artists-on...
> …it's extremely unlikely anyone will ever encounter them naturally…
"Extremely unlikely", you say? https://www.theguardian.com/technology/2025/nov/13/ai-music-...
This is the very definition of unnaturally: the creators of those AI songs spent a ton of money promoting them, for whatever reason.
If that were true, then I agree that, "it's extremely unlikely anyone will ever encounter them naturally unless the creators spent a ton of money promoting them" would be true. But if you're on MusicTok or using any other popular music discovery channels today, you are encountering at least some AI-created music naturally even if you don't realize it.
44% of uploads are probably not created by 44% of "artists". The core of people who are looking to exploit the system are going to be good at gaming the recommendation algorithm — they're specialists in it solely for the money who don't need to trouble themselves with artistic concerns.
I'm not saying it's impossible, but at a minimum it's extremely hard to game the recommendation algorithm (primarily talking about Spotify, maybe Deezer's is less sophisticated). The best way to "game" the recommendation algorithm, to kickstart a new/less-established artist profile, is to get onto popular playlists. However these playlists either have actual quality barriers (so they won't put AI slop music on) or they take $$ (so this doesn't really work with the "mass generated AI slop" approach).
however you might feel about AI generated media, flooding platforms with unlabeled slop is nothing but scammer behavior and we should take serious measures to disincentivize it for both the uploaders and service providers.
I do suspect we are in for a lot of verified-human platforms where your fee goes to supporting establishing an artist or author's humanity beyond a reasonable doubt.
Is it free to upload these files? Maybe a 1¢ fee would be enough to kill a majority of the spam.
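The break-even arithmetic can be sketched quickly. This is a toy Python model with entirely hypothetical numbers (the per-stream payout and stream counts are illustrative assumptions, not platform figures):

```python
# Hypothetical back-of-envelope: does a per-upload fee kill spam?
# All numbers below are illustrative assumptions, not real platform data.

PAYOUT_PER_STREAM = 0.003   # assumed gross royalty per stream (USD)
UPLOAD_FEE = 0.01           # the proposed 1-cent fee per track

def expected_profit(tracks, avg_monetized_streams_per_track):
    """Net profit (USD) for a spammer uploading `tracks` tracks."""
    revenue = tracks * avg_monetized_streams_per_track * PAYOUT_PER_STREAM
    cost = tracks * UPLOAD_FEE
    return revenue - cost

# Under these assumptions the fee only matters if most spam tracks earn
# less than UPLOAD_FEE / PAYOUT_PER_STREAM (~3.3) monetized streams each:
print(expected_profit(100_000, 1))   # ~1 stream per track: spammer loses money
print(expected_profit(100_000, 10))  # ~10 streams per track: still profitable
```

So under these assumed numbers a 1¢ fee wipes out the long tail of tracks that almost nobody streams, but not spam that successfully bots its way past a few streams per track.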
Effectively yes. There are plenty of music distributors that have “no fee” distribution where they simply take a percentage of any royalties.
I suspect we are going to see that model quickly go out of favour though.
really? there are ways of putting music on Spotify that don't involve paying a fee upfront at any time?
Yes, last I checked at least LANDR, Amuse and Tunecore all had plans where you release for free and they take a % of royalties.
Yes, lots.
What does human verification look like when you grant that it’s impossible to tell?
I don't grant it. if you mean it is impossible to tell from the music itself, perhaps. but there are other means of verification.
A human can still upload tons of AI generated music though?
I don’t see how verifying that the author is a human helps in any way.
I also don’t think it’s a big problem but that’s another discussion
sounds like you don't really care about this honestly, so i'll reflect your apathy
What are any reasonable examples of how you can verify a song wasn't AI-generated?
e.g. Game speedrunners film the whole process to prove they did it themselves.
Presumably you had some ideas when you envisioned "human-verified platforms".
Following musicians and bands that perform live would be my choice. If they write their music with AI and I still like it, then that's OK by me. Obviously this doesn't scale if you are a platform operator, but that's not my situation.
music can be performed live and in person. many musicians work with other musicians, labels, studios etc. a web of trust can be built and verification via performance is a compelling option. not complete but it's certainly an option.
would you as a label sign an artist you'd never seen perform? maybe there is value in a platform working under similar constraints.
With the internet and modern platforms, we democratized music so that you can make music in your bedroom and publish it without collaborating with a record label, producers, etc. So you'd have to put some of that cat back into the bag, but for what?
I guess there could exist a Spotify that is limited to music performed live for people who like that. Or simpler: a checkbox you can click to filter it to music known to have been performed live.
But that doesn't sound like something I'd want imposed on all music on a platform. Scrolling through my SoundCloud favorites right now, less than half of them perform live at all, and a lot of it is remixes that are never performed live. And most of them are pseudonymous. I'd lose more than half of my music if the platform required music to have been performed live. A lot of music isn't even performable live.
>But that doesn't sound like something I'd want imposed on all music on a platform.
that's fine. there's room for multiple platforms. personally I would pay for the thing I describe, sounds like you wouldn't. but the question is not whether you or i would, but whether enough people would to make it a viable business - whether it's the platform, or the method, or a label that licenses its music in a certain way, or what.
This shit is so dark. I mean, popular music has always been pretty formulaic, and prone to imitation and trite bullshit, but at least when humans were making it, you'd occasionally get some spark of genius, real originality, even in the most mundane forms.
I use LLMs for code every day, but if I could flip a switch to turn it all off and prevent this shit from happening to the arts, I probably would.
"Probably"? I'd hit that a thousand times just to be sure.
Don't understand how one can experience anything but infinite dread when confronted with the effects of these models on the arts.
Maybe I am getting old. But I don't think so...
If I had to guess, I'd say that it's actually more of a young person thing to want to get rid of all AI. I've only ever seen older people wearing a shirt with an AI generated image on it.
I would absolutely push that button a thousand times as well.
I mean, there are some positive uses for the technology, some will likely save lives and advance the frontier in medicine and science. The ways it's able to automate research tasks is a pretty big deal. And, even though I know that, I think with the harm it's doing to our humanity with all this slop overwhelming everything (the web is now more slop than human, YouTube and every music streaming app soon will be), it's maybe not worth it. I don't know how to balance those two things. And, I don't know how you'd regulate it to make it safe, even if we had politicians anywhere who wanted to.
Butlerian Jihad vibes are building.
Do any of the major streaming platforms have a stance against AI generated music?
No, not really. Spotify is trialling a voluntary “AI Credits” thing where people can highlight use of AI when they release music.
https://support.spotify.com/lc/artists/article/ai-credits/
The problem is that it's difficult for streaming platforms to make subjective judgements about where the AI line should be drawn in music production.
If you human-write a song but use AI to produce a synth stem or bass stem and then mix it down and use AI mastering is that better or worse than if you use AI to help you write something but record with human musicians and a bit of AI assist?
And what if you use AI entirely to write and compose but use human performers to record?
And what if the AI is trained only on licensed content?
There's a whole spectrum from sfw to nsfw but we don't give up and allow porn on every platform because drawing the line is "difficult". We can use common sense and taste, with all their flaws.
I wouldn't say that porn is not allowed on every platform. basically every mainstream "content posting" platform (fb, ig, tw, tiktok, etc) allows softcore porn, and in fact pushes it on users, both in content and in advertising. if the same were true of AI music I wouldn't bother with the platform
Honestly, debating these corner cases feels like a distraction tactic. The reality is that the bulk of that 44% is total AI slop: one-sentence prompts entered into Suno to generate 1,000 tracks and extract money from subscribers who stream in the background.
It's the same thing with writing. No one cares that you asked a chatbot to help you reword a paragraph in your essay. The problem is zero-effort slop delivered by the truckload to your social media feed.
Of course ~nobody wants low-effort "I pasted a one-line prompt into Suno and got this out" in their feed. If they did they'd be listening on Suno and not Spotify. The problem is there's no objective, let alone automated, way to tell the difference between that and the corner cases. Artistic quality is an inherently subjective metric, not something that can be enforced via rules.
It’s not a corner case when you have to enforce it.
Someone will end up in the middle and then you’ll be responsible for accepting or rejecting it.
The bulk is obvious but the debate isn’t for the obvious.
But it doesn't. We have a problem. We can focus on addressing the problem without pre-adjudicating every hypothetical corner case.
If your "work" is mostly AI, and if you don't disclose it, it goes to /dev/null. And yeah, you can get into a debate that it's unfair to reject 51% but allow 49%, but that's how the real world works - otherwise, nothing would ever get done. You also get a DUI for BAC of 0.08% but not 0.07%. That's not an argument for putting DUI laws on hold until we can figure out a more perfect approach.
What is “mostly” AI in the context of music?
I can assure you it’s not a corner case: this is one of the things that a lot of creators are concerned about. If a major streaming platform decides your music is not acceptable because you used some AI as part of your production process and blocks your song as a result that has pretty big consequences.
Spotify, for example, already said that any track that gets under 1000 streams will not get any money. What if it says “any track that uses more than a proportion of AI will not make any money” - but refuses to say how it makes those decisions so that people can’t game the system.
Who is listening to that crap anyway when you've got literal decades' worth of great music to choose from?
Lots of people are listening to it. There’s an AI artist named “Eddie Dalton” on Spotify right now with 589k monthly listeners and a couple of million streams on its top track. This is one of many.
Lots of people don’t care about whether the music they listen to is human created or not - just as lots of people don’t care about lots of other AI slop so long as they are entertained by it.
Great new music is being released every day. What should I do, arbitrarily decide to never consume any music made past today?
The same people who read AI-generated stories about AI. Which is, roughly, most of us. There are AI-generated blog posts on the front page of HN multiple times a day. Right now, I see "I prompted ChatGPT, Claude, Perplexity, and Gemini and watched my Nginx logs", which is AI slop. I'm sure there's more.
Well, even if you're as deliberately anti-AI-slop as I am, you might still fall asleep listening to an ambient album by your top-rated human musician and wake up to AI slop an hour or two later, during which time your subscription money has been paying for those fuckers' instant ramen.
But this can be easily fixed by turning the autoplay, the slop's best friend, off.
Personally, I sniff out AI on Spotify by looking for empty "about" sections. Which is sad, as I've always held dear that it's the music that must speak for the author, not vice versa.
If... If you're pulling from something called a feed ... Are you really surprised to get slop in it?
You're thinking of a feed trough, like for pigs. This use of feed comes from news services.
I remain happy with my decision to leave streaming behind and curate my own listening around artists I know, recommendations from people I trust and a complete absence of any and all of this worthless slop.
What a coincidence. Just today, someone in my high school alumni group posted an album they "made", which is 100% AI-generated music. They claim authorship because they wrote the prompts for the AI.
My feeling is that if the AI is this good, the audience will just prompt the AI themselves and cut out the middleman.
> My feeling is that if the AI is this good, the audience will just prompt the AI themselves and cut out the middleman.
I call this the instant imitator trap. If anything AI generates stands out from the slop, the slop generators will just imitate it, thus quickly making whatever standout quality from the "original" work also slop.
I wrote about it here: https://tombedor.dev/creativity/
I've had to change my video and music consumption habits, because I fall asleep fairly often with either music or videos playing in the background (bad habit, I know). I'm always sure to switch to a playlist running locally when I get tired, because I'll be damned if someone's slop is going to get monetized while I sleep and the algorithm starts sneaking that crap in.
I can see this being useful for solo game devs
Maybe useful for ensuring their reputation gets tarnished.
Ah yes, vibe coded games with AI slop soundtrack. The future of gaming couldn't look any brighter.
That might be true, but watch some of the YouTube videos from solo game devs who spend 5 years making a game and come out with $28k in sales. Anything that brings a game concept to market faster is, I think, a win for everyone.
Great job guys! Almost halfway there! Keep working hard and we can make it to 100%!
Remember: AI use is mandatory and non-negotiable. Hopefully the Trump administration will be rolling out AI-use metrics for the whole population, so we can track progress against our goals.
This is incentivized by how streaming compensates artists. If these folks can also bot a bunch of "listens" to this slop they get paid out of everyone's monthly subscription payment. I want a streaming service where my money only goes to the artists I listen to - not to Taylor Swift and Suno artist #3141592.