Berghs' design students trained AI to be more inclusive

The course Visual Culture for second-year students in the Communication Design program has just wrapped up. Course director Daniela Juvall aims, among other things, to give students a deeper understanding of images and practical tools for interpreting and analysing them, since working with images will be a major part of their future careers as visual creators. The rise of AI in visual creation has sparked discussions about everything from rights and norms to energy consumption.
Hi Daniela! In this course, you had a workshop in collaboration with Florida Atlantic University (FAU) that explored AI bias, led by designer and assistant professor Mehrdad Sedaghat Bahgbani. Can you tell us a bit about the starting point?
The aim of the course is for students to develop their ability to understand, analyse, reason about, and discuss visual culture, images, and image use. A core part of the course focuses on the power and potential of images to influence, and as a consequence, the responsibility we have as image creators. The development of generative AI models introduces a new and very complex dimension to this. Whether or not the students will work directly with generative AI, they need a deeper understanding of what the technology does to our perception of images and visual culture at large. This workshop was a great way to approach the issues of power, representation, and technology.
Mehrdad Sedaghat Bahgbani is an Iranian-American designer who works with art, design, and technology to explore cultural identity and migration. He has observed that AI image generators are relatively good at recreating Western culture, but have significantly poorer knowledge of the rest of the world. This Eurocentric perspective became evident when students asked ChatGPT to generate photorealistic images of a farmer. Nearly all the images showed a Western man in a hat and overalls in front of a wheat field, despite the fact that most farmers live in countries like India, China, Ethiopia, Indonesia, and Pakistan.
Similarly, AI is poor at reproducing visual culture that isn't Western. This blind spot is a kind of digital colonialism that renders large parts of the world's visual history invisible. One way to counterbalance this is to train the algorithms on other types of images. This is where we as designers play an important role. As visual experts, we can be active co-creators and help broaden the visual perspective so that the images being generated better reflect the whole world.
How did the students work to improve the AI algorithms’ visual competence?
The students worked in groups together with the American design students. They analysed historical and contemporary photographs, calligraphy, graphic design, architecture, and typography from the Middle East and North Africa, writing short descriptions of what they saw. In this way, they enhanced the AI algorithms’ ability to interpret visual culture from the region. When the class later tested generating images, it became clear how much better the algorithms had become at understanding design and typography from the Middle East and North Africa. It was a vivid example of how culture can be preserved, represented, and transformed through its relationship with AI.
In addition to practicing how to observe and describe visual artefacts from other cultures, the collaboration was interesting because the cultural differences between the Swedish and American students also became apparent. How we see and talk about images is closely tied to the culture we live in.
This was an incredibly rewarding workshop. It broadened perspectives on AI technology and provided deeper insight into graphic design, typography, and calligraphy from the Middle East and North Africa. It also made clear that it’s more important than ever to protect a diverse and rich visual culture. This is where a designer’s perspective is very valuable.