Sunday, 13 November 2022

AI Art: What I Think

 

One of these images was generated by Midjourney's version 4 technology and, while visually impressive in and of itself, has essentially no value whatsoever compared to the other three. But there's more to it than that.

 So-called AI art ... is it good? bad? is it even art? Perhaps it's the next big thing, or perhaps it's just theft.

 Either way, if you're using it to make money right now, were I you, I'd be really, really careful.

Yes, yes, I admit I've dabbled. It's interesting, actually kind of fascinating, and it's fun to mess around with. For me, it's an intriguing way to brainstorm and noodle around with ideas rattling around in my head. Whether you like it or not, using artificial intelligence to create imagery, writing, and music is erupting onto our cultural landscape like a volcano, but is it a Vesuvius or a Mauna Loa? Is there a pyroclastic flow barrelling towards the art industry, ready to bury it in a choking avalanche of molten rock and ash, or will this technology help artists to create a Hawaiian paradise? To cut a long story short, I think the way in which many people are using this new technology at the moment is at best misguided, even disingenuous, and at worst downright immoral. We could intellectualise as much as we want about AI art's merits, or lack thereof, but ultimately I think it comes down to something quite simple. Let me explain:

It's trendy to say that we live in a world shaded with grey ambiguities and murky moral conundra which cannot be easily solved. However, that is not an excuse to avoid attempting an answer. Simply shrugging our shoulders and hoping that other forces will resolve the issue in a way that favours us is utterly naïve. We have to think extremely carefully about how any new technology will affect our lives, and even more carefully about the legalities of using it. Just because there is no legislation against using something in a particular way doesn't mean it's free from ethical concerns. Often, the law just hasn't caught up yet.

As far as my research has informed me, current AI image generation software works by taking source images with their applied labels or tags and 'diffusing' them, combining each image with successive layers of Gaussian noise until it's just fuzz. This establishes a pattern of diffusion from concrete, pixellated image to random noise. To create something new, the AI then reverse-diffuses Gaussian noise, guided by a modifiable string of inputs (or prompts, as they're commonly known) that correspond to the tags applied to the original images, filling in the details by running the diffusion pattern it established earlier in reverse. I assume that most contemporary AI image generation software works in a similar way. The software doesn't literally cut and paste bits of existing artwork into a new image. It does something infinitely more sophisticated: it tries to establish more general rules which conform to the patterns in 'an oil painting' or 'a painting in the style of Rembrandt', for example. This is me trying to make sense of the model in my own words. I'm not a software engineer and I know almost nothing about machine learning, so please, if you have a more nuanced understanding, join the conversation.
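 For anyone who'd rather see the shape of the idea than take my word for it, here's a minimal sketch of the forward 'noising' half of that process, as I understand it, in plain Python with numpy. To be clear about what's mine and what's real: the step count, the variance schedule, and the toy 'image' below are placeholders I've made up, not the internals of Midjourney or any actual system, and the trained network that undoes the noise during generation (steered by your prompt) isn't implemented at all, only described in the final comment.

import numpy as np

rng = np.random.default_rng(0)

T = 1000                              # number of diffusion steps (toy value)
betas = np.linspace(1e-4, 0.02, T)    # toy variance schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)       # cumulative products used below

def forward_diffuse(x0, t):
    """Jump from a clean image x0 straight to its noised version at step t.

    Standard closed form: x_t = sqrt(alpha_bar_t) * x0
    + sqrt(1 - alpha_bar_t) * noise, where noise is Gaussian fuzz.
    """
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

# Toy "image": a 64x64 gradient standing in for a real piece of artwork.
x0 = np.linspace(-1.0, 1.0, 64 * 64).reshape(64, 64)

# By the final step the image is essentially pure fuzz.
x_T = forward_diffuse(x0, T - 1)
print("correlation with the original at the final step:",
      round(float(np.corrcoef(x0.ravel(), x_T.ravel())[0, 1]), 3))

# Generation runs this in reverse: start from pure Gaussian noise and, at each
# step, use a trained network (conditioned on the text prompt) to predict and
# subtract the noise. That network is the part learned from the tagged source
# images, and it's the bit this sketch deliberately leaves out.

 The number it prints is tiny, which is rather the point: after enough noising steps nothing recognisable of the original remains, and it's the learned reverse process, not any stored copy of the picture, that has to put the detail back.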

 

To see a world in a grain of sand, or in this case a random collection of pixels...

 

Technicalities aside, the crux of the matter stems from the developers of the software training their AI systems using content that exists on the internet. This seems like a very neat and tidy way of sourcing practically limitless material, but the really big problem comes when you realise that a lot of this stuff actually belongs to people (living people who depend on it for their livelihood and their career) and is not free for general use. Online portfolios where artists display their work can be exploited like this all too easily, especially when the source material for the generated images can't be traced back with a reverse image search. In my opinion, it's theft of intellectual property, and without the owners of the original images giving consent for their material to be used, I don't think it can be argued that it isn't theft, especially when people are using these generated images for commercial purposes. Furthermore, there's currently no way to give credit to the artists whose work has been used to train the AI, because the systems are trained on so many images that it simply isn't possible to pin down which specific ones have been used for any given image (unless you're like this guy, who shamelessly created AI knockoffs of a recently deceased artist's work and tried to claim they were the result of his own hard work). That way lies a whole lot of awful.

Using the technology in this way, it's possible to imagine a world where industry leaders in media, film, games, and TV steal their artists' work, train an AI to reproduce similar pieces, and then dispense with those artists entirely without paying them a single penny. Taken to an extreme, you could see a future in which large bodies (corporations with the resources to pour into legal teams) harvest the internet looking for images to train their AIs on, and then create images, even entire films (it can already be done, albeit shakily for now), which they then protect behind iron-shod copyright. Small artists lose out and the massive corporations make even more money because they no longer even have to employ artists. Using AI image generation in this way would destroy the art industry as it currently is, put thousands of people out of work, and devalue their skills in a very real way. And I don't think the argument that AI isn't good enough to do that holds any water at all in the long run. All it requires is the addendum of a perspicacious 'yet'. The tech only ever gets better, especially when there's money to be made.

Because I am slow, I started writing this article in early November, when an artist friend of mine urged me to think more carefully about my use of Midjourney for creating images to illustrate my last few articles. Honestly, I hadn't given it all that much thought before that moment. I thought it was cool, it was fun, and I could create bespoke pictures to make my writing less boring to wade through. However, in the last couple of days, almost as if to prove my point, I've become aware of a class action lawsuit that has recently been filed, citing copyright violation and lack of attribution for creators. The lawsuit pertains to an AI trained to write code, but it looks likely to set a pretty groundbreaking precedent for the future of AI and machine learning.

In my view, there is one more core argument against wholeheartedly supporting AI-generated imagery in the art industry, and it comes from economics. On Twitter recently, I read a very interesting thread likening the use of AI to create art, quite convincingly, to the Grossman-Stiglitz paradox in finance. In this model, it is assumed that the most efficient way to invest is to put your money only into index funds, because they are the most reliable, easiest, and cheapest way to make returns. They are a nice, passive way to make money in the long term; you don't really have to know about the markets, or buy and sell your own stocks. Moreover, in a perfectly efficient market, no one would ever want to conduct research into the market's movements (whether it was performing well, whether indexes are correctly priced, and so on) because that research confers no trading advantage. Just invest in the index funds and you're fine. That's the paradox: the passive strategy only works because someone else is still doing the costly, active work of finding out what things are actually worth. In the thread, original market research is likened to human artists producing work: there is labour involved, it is more costly, and it confers no inherent advantage. However, the index funds depend on that original research to function correctly. If no one produced it, if no one did the actual labour of creating it, there would be nothing to suggest that these index funds have worth, that the values the index fund models show reflect the real conditions of the market. Similarly, it is only because of human artists that AI image generation has any ability to imitate human art. If we look at the art industry from an economic perspective (and I argue that doing so is both helpful and necessary), AI image generation is a parasitic way of producing cheap art. And in terms of longevity, further down the line, if everyone decided that AI art was the way to go and stopped manually producing art altogether, we would end up training AI models to produce art using AI-generated art. It's a spiral which only goes down.
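 To make that spiral a bit more concrete, here's a toy illustration, again in plain Python with numpy, and again entirely my own construction rather than anything a real image model does. The 'model' here is nothing more than the empirical distribution of its training set, so each generation training on the previous generation's output amounts to resampling with replacement, and variety can only ever be lost, never recovered.

import numpy as np

rng = np.random.default_rng(0)

# Generation 0: a stand-in for human-made art, 200 works spread evenly
# across 100 distinct styles.
dataset = np.repeat(np.arange(100), 2)
print(f"generation  0: {np.unique(dataset).size} distinct styles")

for generation in range(1, 21):
    # "Train" on the previous generation's output and produce a new dataset
    # by sampling from it. Any style that happens not to be sampled this
    # round is gone for good.
    dataset = rng.choice(dataset, size=dataset.size, replace=True)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: "
              f"{np.unique(dataset).size} distinct styles remain")

 Real generative models are of course vastly more sophisticated than a bag of samples, but the feedback loop has the same shape: once the training data is mostly the model's own output, each round narrows what the next round can ever produce.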


Some Peripheral Points

I do think that there are some good things about this technology. In many ways, it could make the art industry a more accessible place; it could help artists generate ideas, brainstorm, and iterate on their work, which would ultimately be good for productivity and imagination. I know I've found it very useful for solidifying certain ideas which have been floating around in my head rather vaguely. And I think I'll still use it personally, but I can't imagine ever using it commercially. It feels wrong to do that.

Plus, it's really interesting. I think the argument over whether AI-generated imagery counts as art is really, really large, and not something I want to get into now. We could talk about Dada and Marcel Duchamp's LHOOQ, or his found pieces. We could talk about Damien Hirst and how he doesn't actually make any of his own art; he just employs others to make it for him. We could talk about how AI art could really empower people with disabilities to become creators in ways they never could have been before. We could talk about intentionality, the Chinese Room, the Turing test. We could talk about intrinsic and extrinsic value, about different types of abstraction. We could talk about a piece's cultural history, its place in time and society, the sociological value of human-created art vs. the mere simulacrum, the outward appearance which a computer creates. There are too many angles, too many ways to argue for and against. My main point is about the ethics of using it in particular ways, and I think I've said my piece.

 

That's all for now,

O

If you're interested in reading more on this, see below for what I think are some interesting bits within the cultural milieu. There is some really interesting stuff out there, much more interesting than mine:

https://www.gutenberg.org/files/64908/64908-h/64908-h.htm 

https://erikhoel.substack.com/p/ai-art-isnt-art?s=r
 
https://www.youtube.com/watch?v=6w43_WxH3tU
 
https://www.youtube.com/watch?v=x4Fzqvx1jxI
 
https://www.engadget.com/dall-e-generative-ai-tracking-data-privacy-160034656.html
 
https://twitter.com/Thuminnoo/status/1580311826352738304
 
https://www.artstation.com/blogs/stijn/B276/ai-sketches-with-vqgan-and-clip-for-concept-art
 
https://www.wired.com/story/when-ai-makes-art/
 
https://www.dexerto.com/entertainment/ai-art-vtubers-unclear-ethics-worry-artists-1952140/
 
https://www.techtarget.com/searchenterpriseai/feature/The-creative-thief-AI-tools-creating-generated-art