The horse has already bolted when it comes to AI in music. Tools like Suno can generate full songs (backing, melody, vocals, even the separated stems) from a short text prompt, and they’re already in the hands of bedroom producers and ad agencies.
Using The Beatles as an example, because why not:
Imagine a system trained only on music up to 1966. Feed it the Beatles’ catalogue up to that point and say, “Write the next Beatles song.” What you’d get would sound far closer to something from the Red Album era (1962–1966) than anything on the Blue Album (1967–1970).
That’s because these models learn patterns from existing material and recombine them in plausible ways. They’re excellent at imitation, pastiche, and interpolation, but they don’t experience the cultural shocks, new instruments, studio breakthroughs, or interpersonal dynamics that pushed the Beatles from early singles into the Sgt. Pepper/Abbey Road period.
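To make the “recombine” point concrete, here’s a deliberately tiny sketch: a first-order Markov chain over chord progressions. Real systems like Suno use vastly more sophisticated generative models, and the training data below is invented for illustration, but the limitation is the same in kind; a pattern model can only walk paths its training data contains.

```python
import random

# Toy illustration (not how Suno or any production model works):
# a first-order Markov chain "trained" on a handful of chord progressions.
training_progressions = [
    ["C", "Am", "F", "G"],   # classic early-'60s progression
    ["C", "F", "G", "C"],
    ["Am", "F", "C", "G"],
]

# Count observed chord-to-chord transitions.
transitions = {}
for prog in training_progressions:
    for a, b in zip(prog, prog[1:]):
        transitions.setdefault(a, []).append(b)

def generate(start="C", length=8):
    """Sample a 'new' progression by walking observed transitions."""
    out = [start]
    for _ in range(length - 1):
        out.append(random.choice(transitions[out[-1]]))
    return out

print(generate())  # e.g. ['C', 'Am', 'F', 'C', 'G', 'C', 'F', 'G']
# However many times you sample, you will never see a chord or a transition
# that wasn't in the training data: interpolation, not evolution.
```

The output is always plausible and always familiar, which is exactly the argument: novelty of combination, not novelty of material.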
From a business perspective, that’s not necessarily a problem. Plenty of genres run on “don’t scare the fans,” and production music for TV, film, and ads often just needs to hit a familiar brief. For that world, a machine that can churn out convincing, on‑brand material forever is close to ideal. AI is here to stay, and it will dominate the “we need something that sounds like X” space.
The real question is whether that’s enough. AI can remix what it has seen into novel combinations, but that’s not the same as being part of a scene, reacting to new technology, or four humans in a room pushing each other somewhere unexpected. Will these systems ever produce the equivalent of the Blue years, those left‑turns where a band invents a new sound rather than iterating on the old one? Imitation is easy. Evolution is the hard part.
TL;DR: current AI excels at stylistic imitation rather than genuine artistic evolution.