People who know more about artificial intelligence think it is less ethical

As people come to understand the systems and processes behind AI art, its moral implications become harder to accept

Humanoid robot painting identical artworks, a conceptual illustration. Credit: Malte Mueller/Getty Images

A year ago the auction house Christie’s in New York City sold an unusual collection of artworks: surreal portraits, photorealistic images and cartoon-inspired creations, all created by artificial intelligence. The first event of its kind triggered a backlash: more than 6,000 artists protested that the AI models used to create these works had been trained on copyrighted images without their creators’ consent. While the auction house claimed the works demonstrated “human agency in the age of AI,” critics saw the event as an example of an industry rushing to commercialize technology built on uncompensated creative labor.

Other artistic and professional circles have raised similar concerns. A report published last November found that more than half of writers surveyed in the U.K. believed AI could end their careers. The public seems to have complicated feelings about the technology, too: one survey found that many Americans are comfortable with AI as a tool for creative professionals but not as a replacement for their work.

However, a viewer’s comfort with AI art may depend on how much they know about how it is made. I study neuroaesthetics, a field that draws on neuroscience and psychology to understand how we perceive beauty and art. My colleagues and I have found that the more people learn about what happens behind the scenes of AI (the datasets, the training process, the prompts), the less comfortable they are with the moral implications of these creations and the value assigned to AI-generated pieces.


I became curious about AI because its rapid spread into the art world has begun to reveal a gap between what the technology is and what people know about it. Previous research has shown that people tend to give AI art lower ratings for creativity, value and emotional depth. And in my own work I had studied how knowledge of art changes the way we look at it. This made me wonder whether knowledge of AI shapes people’s judgments of AI-generated art and might help explain the often observed bias against it.

To investigate, my colleagues and I conducted three experiments, each with 100 participants. We started by presenting people with AI-generated art images and asking questions about their morality and aesthetic value. For example, participants in two of these experiments had to assess how morally acceptable it was to use AI to produce such art, to earn money or prestige from these works, and to present them as conventional art. People also had to rate how much they aesthetically appreciated the images we showed them.

In the first experiment, we showed our participants 20 landscapes and 20 portraits generated with DALL-E 3, using prompts based on the impressionist art of the Spanish painter Joaquín Sorolla. Half of the participants viewed this AI art without additional context. The other half received a short explanatory text. It read:

“This image was generated by an AI algorithm that produces images from textual descriptions. To achieve this, several steps are required. First, the AI algorithm is trained on a large dataset of art images and associated textual descriptions, such as the artist’s name. Then the AI algorithm is able to generate new images based on different textual descriptions and artist names (e.g., a seascape, a landscape or people).”
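For readers curious what that generation step looks like in practice, here is a minimal sketch using OpenAI’s Python client. The prompt wording and settings are illustrative assumptions on my part, not the study’s actual stimulus-generation code:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical prompt in the spirit of the study's Sorolla-based stimuli.
response = client.images.generate(
    model="dall-e-3",
    prompt="A sunlit seascape in the impressionist style of Joaquín Sorolla",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # URL of the generated image
```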

The additional information made a difference. When people knew how the AI system worked, they perceived the AI art images as less morally acceptable, especially when the creation of these images involved financial gain or artistic recognition. But the aesthetic appeal of the images did not change, suggesting that learning how AI works made people reflect on ethics, not aesthetics.

Psychologists have found that people’s judgments about what is good or valuable can change when they learn that something has received awards or praise from experts. The authority bias, for example, makes us more inclined to agree with people who appear to be in charge or in the know. In addition, signals of success or prestige can make people see something as more morally good. In our second experiment, we told a group of participants that some of the AI art images had been exhibited, sold or praised. But we were surprised to find that sharing a work’s success did not improve the moral acceptability of those images in the eyes of people who had learned how the works were created.

In a final experiment, we tested people’s automatic judgments of AI-made versus human-made art. We used a tool from psychology called a go/no-go association task, in which people must very quickly pair one kind of stimulus, such as a picture, with another, such as the word “good” or “bad.” We showed participants images (either AI-generated or human-made impressionist paintings) along with category labels on the left (“AI art” or “human art”) and attribute labels on the right (such as “good” or “bad”). Participants had to press a button when the image and labels matched and withhold a response when they did not. The task had to be done quickly and over many trials as a way of capturing people’s most immediate associations. We worked with people who had not received any additional information about AI to try to get a sense of what the average person might think.
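To make the trial logic concrete, here is a minimal sketch of how such a block can be structured and scored. The stimulus names are hypothetical and the participant is simulated; the actual experiment used timed image presentation in dedicated reaction-time software:

```python
import random

# Hypothetical stimulus set: each image is either AI-generated or human-made.
STIMULI = [
    {"name": "ai_portrait_01", "source": "AI art"},
    {"name": "ai_landscape_02", "source": "AI art"},
    {"name": "human_portrait_03", "source": "human art"},
    {"name": "human_landscape_04", "source": "human art"},
]

def run_block(paired_source, attribute, n_trials=20, seed=0):
    """Score one block in which `paired_source` (e.g., 'AI art') is paired
    with `attribute` (e.g., 'good'). A trial is a 'go' trial when the shown
    image's source matches the paired label; the participant should respond
    on go trials and withhold a response otherwise."""
    rng = random.Random(seed)
    hits = false_alarms = go_trials = nogo_trials = 0
    for _ in range(n_trials):
        stim = rng.choice(STIMULI)
        is_go = stim["source"] == paired_source
        # Simulated participant: responds correctly 90% of the time.
        responded = is_go if rng.random() < 0.9 else not is_go
        if is_go:
            go_trials += 1
            hits += responded
        else:
            nogo_trials += 1
            false_alarms += responded
    return {
        "block": f"{paired_source} + {attribute}",
        "hit_rate": hits / max(go_trials, 1),
        "false_alarm_rate": false_alarms / max(nogo_trials, 1),
    }

# Comparing accuracy between congruent blocks ('AI art' + 'good') and
# incongruent ones ('AI art' + 'bad') is what reveals automatic associations.
print(run_block("AI art", "good"))
print(run_block("AI art", "bad", seed=1))
```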

We found no strong automatic tendency to see AI or human art as inherently better or worse. This finding tells us that people do not yet hold knee-jerk, deeply rooted attitudes about AI art as opposed to human art. It also suggests that, as our previous experiments indicated, moral opposition to AI art is something people learn over time.

Overall, when people know how AI works, they become more cautious about accepting it as morally legitimate. This suggests that educating the public, artists, curators and policymakers about how the technology works can shape its future in the art world. Artists working with AI tools can help in this effort by sharing information about the models, data or prompts they used and by clarifying where their own human hand guided the process. While such openness can invite criticism, it can also build credibility and equip people with the tools to think critically about the technology.
