Well, actually no, and that is where a very common misunderstanding of how LLMs work comes in. I am a software engineer, and while I don't have to worry too much about the future (I will probably be dead within the decade), you can't just generate code at the touch of a button (and actually there is no button, it is just autocomplete).
An LLM is very good at 'boilerplate' code, the stuff you do over and over again. There is a lot of it, and it is good that it farms that out, so it does save time. However, an LLM as discussed here has no intelligence; it just has things it has copied from somewhere else. Its job isn't to solve a problem, its job is to show you 'what a solution to this problem would look like', and that is a huge difference. It is a language process, not a technical process. It doesn't understand the problem, just the overall look of the problem, which is why it is good at language and music. "What would a country song about a clam sound like" is an appearance issue: it doesn't have to know what a clam is, or how it feels about anything, or why it cares about its truck breaking down or its dog dying.
When I first used it, it produced a complex function which seemed to do exactly what I asked. When I looked closer I realised it would come out with the wrong results, but that kind of thing is very hard to spot, and the AI can't fix it because it doesn't understand how the code works, just how it should look.
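To give a flavour of the kind of thing I mean (this is a made-up illustration, not my actual function), here is the sort of bug that looks completely plausible at a glance and even passes a casual test, but quietly gives wrong answers:

```python
# Hypothetical example of a plausible-looking but subtly wrong function.

def median(values):
    """Return the median of a list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    # Bug: for an even number of values this should average the two middle
    # elements, but it silently returns just the upper-middle one instead.
    return ordered[mid]

print(median([1, 3, 5]))     # 3   -- correct, so a quick check looks fine
print(median([1, 3, 5, 7]))  # 5   -- wrong, the median is actually 4.0
```

It looks like a solution to the problem, which is exactly what it was asked for.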
It's shown really clearly in the 'how many r's are in raspberry' problem that ChatGPT had.
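The contrast is stark, because if you actually operate on the text itself rather than on what an answer 'looks like', the question is a one-liner:

```python
# Counting letters is trivial when you work on the actual characters.
print("raspberry".count("r"))  # 3
```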
AI isn't writing about something; it is writing something that it thinks a song should sound like, and for 95% of music that is enough. It probably will kill a lot of music, simply because people won't be able to make an income from it, because for a lot of people that sort of music is enough. Ultimately music will go back to being a niche hobby, like it was in the past, something people did for themselves rather than for profit, like the guy on the piano in a pub.