A.I. images – a storm in a political teacup?
Have you read about the storm in the political teacup this week? The New Zealand National Party used A.I. to generate an image for a set of social media posts (example pictured), and now they are being called ‘irresponsible’ and ‘confusing fact with fiction’. Apparently, the “majority of political parties currently in Parliament have vowed not to use A.I.-generated fake images in their social media campaigns.”
A recent Stuff article read, “Chris Hipkins [Labour Party Leader] has been clear he wants Labour to run an honest and upfront campaign New Zealanders can trust.”
National bit back with, “Yes, we have used A.I. to create some stock images. It’s an innovative way to drive our social media. As with all social media, we are committed to using it responsibly.” (Bravo, National!)
Frankly, I’m gobsmacked that this is even an issue, though it may be a simple case of throwing shade at the opposition in the lead-up to the general election later this year.
I’d understand the commotion if the National Party had created deep fake images to deceive the public or imply that someone had done or said something they had not. However, their social media images were not remotely in that category.
If you want to define deception and irresponsibility, remember the Adnan Hajj photographs controversy (also called Reutersgate) during the 2006 Lebanon War. Photographs taken by Adnan Hajj, a Lebanese freelance photographer, were digitally manipulated and then presented as part of Reuters’ news coverage of the war. Reuters admitted that at least two were significantly altered before being published, most likely with Adobe Photoshop. Photoshop – and its ability to mystify, alter and deceive – has been around for decades and has been used liberally worldwide.
If people want an ‘honest and upfront campaign’, then Photoshop should also be banned. But that, as we all know, is laughable. Why? Because Photoshop is a fantastic tool.
Throughout my design career, I used to joke that you should never believe anything you see in print... ever. Every piece of media, journalism or marketing will have been retouched somehow, even if it’s just to colour balance or remove imperfections. So, if the world agrees to keep Photoshop (and it will), then I see no issue with A.I.-generated images if – as National puts it – the technology is “used responsibly”.
With the image above, would Labour have felt better if National had spent great wads of their marketing cash to hire real-life models, create that scene, and photograph it in the traditional way (and then retouch it with Photoshop, of course)? Or would they have felt better if National had used a CCTV shot of an actual burglary taking place – maybe the public would have felt less scared that way? I think not.
A.I. is an extremely powerful tool, and there are many issues surrounding it – security, privacy and responsibility being some of the big ones. While A.I. makes it easier and quicker to generate images that are insensitive, hyper-realistic or involve cultural appropriation, advertisers, agencies, political parties and the like should already be acutely aware not to do these things – with or without A.I. (but that’s a different conversation). A.I. is a tool, and it is the person who wields the tool who ultimately decides whether to use it for good… or for evil.