Artistic License Gives Way to Artistic Bias

January 5, 2021

It is well documented that AI facial recognition technology skews toward white males: the algorithms misidentify non-white people and women at higher rates. The technology is biased in part because its developers are disproportionately white men who use images of people like themselves to train and test the algorithms. A newer trend in AI is generating art from algorithms, but VentureBeat reports that there are unintended biases in art too: “Researchers Find Race, Gender, And Style Biases In Art-Generating AI Systems.”

Fujitsu researchers discovered that AI art algorithms carry clear prejudices with socioeconomic consequences. The researchers covered a lot of ground in their study:

“In their work, the researchers surveyed academic papers, online platforms, and apps that generate art using AI, selecting examples that focused on simulating established art schools and styles. To investigate biases, they considered state-of-the-art AI systems trained on movements (e.g., Renaissance art, cubism, futurism, impressionism, expressionism, post-impressionism, and romanticism), genres (landscapes, portraits, battle paintings, sketches, and illustrations), materials (woodblock prints, engravings, paint), and artists (Clementine Hunter, Mary Cassatt, Vincent van Gogh, Gustave Doré, Gino Severini).”

The researchers discovered that when artworks were translated into different styles or otherwise altered, dark skin color was not preserved and long-haired men were mistaken for women. Artwork from one stylistic era also did not translate well into another. The problem stems from the same issues that plague facial recognition technology: a lack of diverse data and inconsistent labeling:

“The researchers peg the blame on imbalances in the datasets used to train generative AI models, which they note might be influenced by dataset curators’ preferences. One app referenced in the study, AI Portraits, was trained using 45,000 Renaissance portraits of mostly white people, for example. Another potential source of bias could be inconsistencies in the labeling process, or the process of annotating the datasets with labels from which the models learn, according to the researchers. Different annotators have different preferences, cultures, and beliefs that might be reflected in the labels that they create.”
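The imbalance the researchers describe is the kind of thing a simple dataset audit can surface before training. Below is a minimal sketch, not the researchers' actual method: the `annotations` list and its label names are hypothetical, standing in for demographic metadata attached to a portrait dataset like the mostly white Renaissance collection mentioned above.

```python
from collections import Counter

def audit_label_balance(labels):
    """Return each label's share of the dataset, largest first."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.most_common()}

# Hypothetical annotations echoing the skew described in the study:
# a portrait dataset dominated by one demographic group.
annotations = ["white"] * 450 + ["asian"] * 30 + ["black"] * 20

shares = audit_label_balance(annotations)
for label, share in shares.items():
    print(f"{label}: {share:.0%}")
# white: 90%
# asian: 6%
# black: 4%
```

A report like this does not fix the bias, but it makes the curators' choices visible, which is exactly what the researchers argue is missing.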

The article ends with a warning that AI-generated art could lead to “false perceptions about social, cultural, and political aspects of past times and hinder awareness about important historical events. For this reason, they urge AI researchers and practitioners to inspect the design choices and systems and the sociopolitical contexts that shape their use.”

There is no argument that diverse data is needed to improve AI-generated art. The trouble is that such data will remain scarce for the simple reason that it does not exist. These art movements, especially the European ones, will never be ethnically or gender diverse. They will be more diverse in gender than in ethnicity, because there were female artists and women appear in many paintings. Ethnically, however, these movements originated in Europe, whose populations were overwhelmingly white, so the data will be skewed in that direction.

Modern artists of all ethnicities and genders can imitate these art styles, but that does not make their work authentic to the era. There are exceptions to this rule, of course, but they are limited. It is akin to asking for a Chinese eyewitness account of New World colonization or wanting to know how Beethoven was influenced by African music. They simply do not exist.

Instead of concentrating solely on European art movements, why not incorporate African, Asian, and aboriginal art from the same eras? That would provide diverse data from real, era-appropriate art in many different styles.

Whitney Grace, January 5, 2021
