There’s an opinion piece in the Guardian by Naomi Klein which is absolutely correct in so many ways about Artificial Intelligence, its social impact and the reckless, unregulated pace at which it is being developed – overseen, such as it is, by a group of people who have routinely been shown to be, for the most part, a bunch of libertarian sociopaths.
Take, for example, this quote:
“A world without crappy jobs means that rent has to be free, and healthcare has to be free, and every person has to have inalienable economic rights. And then suddenly we aren’t talking about AI at all – we’re talking about socialism.”
Exactly. If you’re going to fundamentally change society, and you’re going to do it rapidly, you need to do it intentionally and design it in such a way that you don’t ruin it for everyone. In fact, while you’re at it, why not actually address some of the problems that were already there?
Redistribution of wealth isn’t going to solve everything, but in the long list of things you could legislate to fix that would make a positive difference in the world, it’s in the top one.
However, I’d take issue with one aspect of the piece, and it’s this:
In the case of copyrighted material that we now know trained the models (including this newspaper), various lawsuits have been filed that will argue this was clearly illegal. Why, for instance, should a for-profit company be permitted to feed the paintings, drawings and photographs of living artists into a program like Stable Diffusion or Dall-E 2 so it can then be used to generate doppelganger versions of those very artists’ work, with the benefits flowing to everyone but the artists themselves?
And, of course, the same arguments are being made about music. But here’s the thing: that’s not how training an AI works. It isn’t making replicas any more than a country singer-songwriter, who’s clearly listened to a lot of Johnny Cash, Dolly Parton and Merle Haggard, is in the business of generating duplicates.
The AI is, as best we can ascertain, looking at images, listening to music, reading text and then creating something new.
Now, the words ‘looking’, ‘reading’, ‘listening’ and ‘creating’ are all doing a lot of heavy lifting here, but those are more accurate analogies than, say, forgery.
And while I share Klein’s disdain for the motivations behind the wholesale swallowing of all scrapeable material off the internet for the economic benefit of a handful of billionaires-in-waiting, I would go so far as to suggest that until or unless the AI generates a ‘copy’, no infringement has taken place.
Yes, it’s learning from your hard work and everyone else’s. And yes, it seems unfair that the people who are going to make money off that aren’t you – but the same principle applies as it did in the recent Ed Sheeran court case.
Obviously, Ed Sheeran has heard the music of Marvin Gaye. He’s probably ingested every piece of scrapeable content that Marvin Gaye ever produced (and isn’t that a lovely turn of phrase, ‘scrapeable content’?) – and a lot more besides. But it wasn’t copyright infringement. It certainly wasn’t theft.
He came up with something new.
Forget ‘originality’ because, like authenticity, it’s a myth, and nobody has a decent working definition for it other than that it’s something apparently magical that humans do. Ed Sheeran didn’t copy Marvin Gaye or replicate his work. He made something that was not the work of Marvin Gaye, but had very clearly been influenced by it. As certified by law.
And that’s the problem. This is not and should not be the battleground. This is not where we’re going to achieve ‘fairness’.
To do that, we need to look back at that first quote. The issue is not that the robots are stealing our work and taking our jobs. The problem is simply how society has organised itself with respect to money, who gets it, and for what purpose.
More to the point, the problems with AI (and they are many and vast) primarily lie elsewhere. Which is, I think, another conversation for another day – or you could just read the rest of the Klein article.
But just so we know where we’re at – the image at the top of this post is (obviously) AI-generated. It’s based on the prompt “Renaissance painting of a group of robots playing a rock concert to a large audience”. I’m not sure which Renaissance artist the AI could be said to have plagiarised here. If the picture resembles anything, it resembles itself: an AI-generated image, clearly derivative of other AI images.
It’s very literally painting by numbers.