This Magazine

Progressive politics, ideas & culture

July/August 2023

Drawing a line

AI art can be fun to generate, but that doesn’t mean it’s ethical

Sarah Samuel

Illustration by Talaj: a white robotic hand with visible joints holds a pencil and draws a straight line

A picture may not be worth a thousand words anymore. Generative AI art tools like DALL-E, Midjourney and Stable Diffusion, which rely on artists’ existing work to generate images through textual prompts, became available to the public last year. Since then, conversations about how Artificial Intelligence (AI) will render creative jobs obsolete have gripped many North American writers (myself included).

I used to agonize over securing a well-paying, non-precarious job in a creative industry. Now, I must compete for jobs with professionals from non-art backgrounds, like coders, software developers and engineers, who may have little grounding in ethics or art. Despite what they may think, though, what they’re creating with generative AI is far from original.

All existing generative AI art platforms are built on deep neural networks, a learning technology modelled loosely on the human brain that can recognize patterns. To build software that imitates existing artwork, engineers first select and scrape large datasets of images, code, text and music from the internet. The next stage involves feeding this ‘training data’ through the neural networks; during this stage, the algorithms identify and extract specific features, including shapes and colours. Finally, once trained, the generators are ready to imitate art, sometimes even in the style of specific artists. In short, engineers and software developers train these generators on creatives’ artwork found online, often without their consent or knowledge.
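To make that pipeline concrete, here is a deliberately simplified sketch in Python. It is not how Stable Diffusion, Midjourney or DALL-E are actually built (those systems use diffusion models trained on billions of images); the folder name, toy autoencoder and settings below are illustrative assumptions only.

```python
# Purely illustrative sketch of the three stages described above:
# (1) gather scraped images, (2) train a network that learns visual
# features from them, (3) sample a new image from the trained model.
# The folder name, model size and settings are hypothetical; real
# generators use diffusion models and vastly larger datasets.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Stage 1: the "training data" -- images scraped from the internet,
# assumed here to sit in sub-folders under scraped_images/.
dataset = datasets.ImageFolder(
    "scraped_images/",
    transform=transforms.Compose([transforms.Resize((64, 64)),
                                  transforms.ToTensor()]))
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Stage 2: a tiny autoencoder that learns to reconstruct the images,
# extracting features such as shapes and colours along the way.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256), nn.ReLU(),    # encoder: compress to features
    nn.Linear(256, 3 * 64 * 64), nn.Sigmoid()  # decoder: rebuild the image
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    for images, _ in loader:
        reconstruction = model(images).view_as(images)
        loss = loss_fn(reconstruction, images)  # reward faithful imitation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Stage 3: once trained, the model emits images that recombine the
# statistical patterns it absorbed from other people's work.
with torch.no_grad():
    new_image = model(torch.rand(1, 3, 64, 64)).view(3, 64, 64)
```

Even in this toy form, the point stands: nothing in the loop creates anything. It only learns to reproduce statistical patterns from the work it was fed.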

I’m not against using AI tools. I edit my writing with an AI-based word editor, Grammarly, and use Adobe Lightroom to organize my photography. Recently, these companies introduced generative AI features: GrammarlyGO lets users prompt Grammarly’s AI assistant to draft entire documents in a personalized voice and tone, while Lightroom’s Denoise aims to reduce photographic noise and enhance detail. And I will likely try these tools to assess their creative might.

Nevertheless, I am critical of the AI arms race helmed by tech leaders who are hell-bent on enhancing the creative arts but forget that these AI generators can erode trust in photography, a medium long treated as a vehicle for truth-telling. Creating AI-generated images may seem entertaining, but plagiarizing artists’ work without consent or due compensation is the furthest thing from art or creativity.

Chantal Rodier, STEAM projects coordinator and artist-in-residence at the University of Ottawa, says that while AI has the potential to inspire artists, giving it too much credit is dangerous. “[AI] can coherently present data, but it’s not reflective or creative. It is statistically based. It can’t distinguish between mis/disinformation. So what it presents can be garbage,” she says.

Historically, art has been the epitome of human originality, creativity, expression and refuge. While some proponents of AI art tools try their hardest to pit neural networks against the human brain, these models can only ever be our second best: they interpret data fed to them by humans in order to mimic human-made art. For some, incorporating AI into their work or art can be an emotionally fulfilling experience. But idolizing algorithms just to satisfy our techno-fetishist itch is unwise, and the AI art-generating process is riddled with complex ethical issues.

First, the pro-generative-AI crowd doesn’t necessarily regard the scraping of the datasets on which software like Stable Diffusion, Midjourney and DALL-E were trained as bootlegging. Many say that the images are simply content scraped from the internet and amalgamated, and since there is no sole owner, there’s no infringement. Proponents of AI art tools often argue that training this software on copyrighted data is, in the U.S., covered by the fair use doctrine, which permits limited use of copyright-protected work to promote freedom of expression. While generative AI art users and developers often make this claim, the argument amounts to professional gaslighting. Canada’s Copyright Act is a bit more strict, but without any responsible oversight of AI, it doesn’t functionally stop people from using others’ work.

The term fair use is “dubious,” says Naimul Khan, a professor in Toronto Metropolitan University’s engineering department. “The fair use [doctrine] allows developers to use data for personal and non-profit reasons. But tech companies are making for-profit software off of artists’ intellectual property and discrediting them simultaneously,” he says.

“Intellectual property does not cease to exist just because it is alongside 100 million other pieces,” says Blair Attard-Frost, an AI ethics and governance researcher and PhD candidate in the Faculty of Information at the University of Toronto. “I find the argument that AI models are not taking people’s intellectual property hard to take seriously when you can go into the data sets and see it is copyrighted material,” they say. Worse, Attard-Frost says, in North America there are no governance or regulatory bodies preventing that scale of potential intellectual property theft from happening. They say we need design requirements specifying how to build these applications, how to ensure data is being ethically sourced and attributed, and that no unauthorized data is being swept up in training data grabs.

Second, many may use generative AI tools for seemingly harmless purposes like satire and fun. But a report by the U.S.-based geopolitical risk firm Eurasia Group notes that as AI technology advances, the possibilities for using it to spread misinformation increase in equal measure.

Moreover, not everyone is falling head over heels for generative AI technologies. Maybe it’s because they realize that systems like Stable Diffusion and NightCafe are complete failures in the racial and gender representation arena. I purposely prompted the two models with an unsophisticated prompt like “Arab belly dancer.” All the outputs I received were pictures of either white women dressed as dancers or disfigured, scary faces with hyper-sexualized bodies. Attard-Frost had a similar experience. They prompted DALL-E for non-binary and trans outputs only to be disappointed. They say that the system produced “freakish” looking renders as if the tool did not know what to make of trans or non-binary representation.
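For readers curious how low the barrier to such a probe is, the sketch below shows one common way to run a text prompt through an open-source Stable Diffusion model locally, using Hugging Face’s diffusers library. The model checkpoint and settings are examples for illustration, not the exact configuration used for the probe described above.

```python
# Illustrative only: prompting an open-source Stable Diffusion model
# locally with the diffusers library. The checkpoint name and settings
# are assumptions, not the exact setup used in the article's probe.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # assumes a GPU; CPU works too but is much slower

# A single short prompt is all it takes to generate several images.
images = pipe("Arab belly dancer", num_images_per_prompt=4).images
for i, image in enumerate(images):
    image.save(f"output_{i}.png")
```

Anyone with a consumer graphics card can repeat this kind of probe in minutes, which makes the biased results hard to dismiss as edge cases.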

AI systems appear to be reinforcing normative cultural dogmas by othering anyone who is not a cisgender white man. In the past, Attard-Frost says, tech companies lacked diverse representation. But now, with far more knowledge about bias in data sets, there’s no longer any excuse. Tech companies are well aware of these issues. Mistakes happen; however, if software developers don’t take measures to correct them and they keep recurring, then it’s negligence, they say.

But then again, this kind of ignorance is to be expected when tech developers fire their “responsible AI” researchers, whose job is to advise on ethical oversight, while listening only to the pioneers of AI. In 2020, Timnit Gebru, one of the lead researchers on Google’s ethical AI team, was let go after releasing a paper that explored the racial and gender biases and environmental risks that AI poses. Meanwhile, when Geoffrey Hinton, the so-called “Godfather of AI,” decided at 75 that it was time to get cautious about these technologies, he quit Google. It’s worth remembering that in 2015, when another researcher asked Hinton about furthering AI technologies that could be abused for political gain, he said, “I could give you the usual arguments, but the truth is that the prospect of discovery is too sweet.” His answer makes me doubt his apparent newfound ethical realization.

Khan says there’s no easy solution, especially not a technical one, to AI-related ethical issues. But making ethics a core part of engineering education would be a start. “It has to be a collaborative effort between software developers, artists, engineers, and regulatory bodies,” he says. Khan thinks this would usher in a generation of engineers more aware of how their code and algorithms affect non-STEM professions. However, he notes two barriers. The first is that the tech developers who hire engineers work for for-profit companies and will chase opportunities to make money. The second is the ethics knowledge gap: he says it usually doesn’t occur to engineers that their data might have been taken from someone unfairly. “But we can teach them better,” he says.

While I point my critical pitchfork at engineers and tech developers, I show no mercy to corporate media either, for sidelining marginalized voices in the conversation about AI ethics. The news media hypes up AI and under-reports the power dynamics behind it, which means coverage too often reflects only business and government interests. As a journalist, I expect reporters to do better. Media must diversify their sources by including the voices of marginalized tech experts, who recently penned an open letter about the lack of inclusivity.

Instead of hailing the “Godfather of AI” for resigning so he could speak freely about AI risks, journalists should be questioning why he didn’t speak sooner or show solidarity with his marginalized counterparts. I’m not minimizing Hinton’s concerns. But women and non-binary researchers from social sciences and STEM backgrounds who are critical of AI technologies should be equally centred.

By giving hegemonic voices centre stage in journalism, by using anthropomorphized language like “artificial intelligence” instead of “algorithms” or “machine-learning technologies,” and by glorifying AI’s capabilities, we are priming people to adapt to these tools. It ought to be the opposite: we must adapt and regulate the tools to meet human needs.
