Embedding the features of one image into another to create an illusion is a task I’d consider AI for, IF the artist performing that task can be propelled forward by using the output as a base. If it takes so much manual correction by the artist that the finished piece takes longer to make, or if the time spent enjoying the process is diminished, it’s no longer worth it.
AI in art should be about automating the tasks that require scale or repetition, like how 3D graphics took much of the mathematical work from artists, letting them focus on sculpting their forms precisely.
The time freed up by automating one task should be spent by the artist on another, so that the work is finished faster AND the improvement is clear and obvious.
The most “creative” use I’ve seen so far is writing separate prompts for different 2D elements of a still image, which appears to take longer and produce less consistent results.
It feels like prompters rely on the divided tastes of the internet to convince people that their art looks good to someone, just not the person currently viewing it.
Generating an image does not take much power, though training the models does; I should have been clearer.
When I say “massive amounts,” I mean a company like Microsoft opening large data centers that require enough energy and water to disrupt local communities. Obviously this isn’t strictly an AI issue, and Microsoft doesn’t train image-generation models AFAIK, but the fact remains that training an AI model requires an order of magnitude more resources than most consumer or corporate applications.
If AI models were only getting more efficient, I wouldn’t worry about this, but companies tend to scale up and use more resources to make larger models.