Art Shaped AI: Value Creation in the Digital Era

Introduction

The arts are instrumental in the future of artificial intelligence (AI), as a tool for digital and scientific literacy, as a means of civic engagement in a digital democracy, and as part of emerging interdisciplinary machine learning design methods. While there is already substantial literature covering the role that the arts can play in illustrating complex notions, in depicting realities that are silenced for various reasons, or in opposing the status quo, the literature on the role of art in the development and governance of AI is still emerging. The intersection of new scientific directions in machine learning design with inter-arts curatorial practices in AI ethics leads the reader to imagine a creative, sustainable and inclusive AI.

1. Some obstacles to ethical and responsible AI

1.1 A gendered digital divide

“Socioeconomic (income and other) inequalities are closely associated with digital inequalities, as the former typically shapes the latter, which in turn reinforces existing inequalities, creating a vicious circle. Tackling socio-economic inequality through digital technologies can therefore only address the symptoms, but not the root causes of inequalities. Policies to reduce the digital divide must be multidimensional: technological, economic, social and educational (awareness raising) and should address both socioeconomic and digital inequalities simultaneously.” (UNDESA)

1.2 Future citizenship: lack of capacity to make informed political choices

Experts agree that ethical guidelines for the development and governance of artificial intelligence require accountability, fairness, and transparency. However, the definition of what these terms entail can differ quite significantly. We assume that transparency goes beyond the ability to explain the results of algorithms (a concept called “explainability” or “interpretability”) and is not just about being able to explain an algorithmic decision to disgruntled customers/investors/judges. It is fundamentally about enabling citizens to make informed decisions about the use of their data in algorithms. Yoshua Bengio, a renowned AI expert and researcher, is adamant: “We have a responsibility not to leave (these decisions) in the hands of a few people because the impacts (of AI) will affect everyone. There are political choices to be made and the ordinary citizen needs to understand them.” Niskar et al. found that in order to design legitimate policies, policymakers must ensure that a large number of citizens with diverse perspectives understand the implications of new technologies or scientific applications, and their research shows that the arts are among the most effective tools for achieving these goals. Recent United Nations policy recommendations emphasize the important role of civil society and the arts in sustainable and ethical digital governance (UNDESA); however, on the ground, a better understanding of the implications of AI remains a goal to be achieved.

1.3 Controlling the narrative around new AI technologies creates a lack of trust

A new technology understood by a limited number of experts (Gagné) and investors (Brandusescu) is fertile ground for restricting its benefits to this set of players. The history of the regulation of new technologies shows that it is strongly influenced by powerful consortia of private interests. In Electric Sounds, Technological Change and the Rise of Corporate Mass Media, Steve J. Wurtzler explains how corporations built strategic alliances to control both the narrative of the new technology and its ownership through the creation of patent pools, defined as agreements between patent owners to share the profits. Innovation in acoustics thus exacerbated an increasing concentration of ownership and power within the U.S. mass media. During this same period, acoustic innovation was promoted as a “tool of public necessity” when in fact its independent and educational uses were pushed aside by these strategies (Wurtzler).

2. Sociotechnical pipeline: issues and intervention methods

2.1 Gender bias in the socio-technical pipeline, from input data to algorithmic output

The term “sociotechnical pipeline” should be read in the context of this paper as a space of intervention intended to reduce the harm that algorithms might cause, or increase their benefits (Suresh and Guttag). The pipeline starts from the design of the questions asked/solutions sought, and includes data collection, data preparation (annotation, labeling), data architecture design, algorithmic model development, and its governance (ethical and normative frameworks). It is from the beginning of the data-algorithm pipeline to its end, and ideally in a continuous loop, that an inter-arts transdisciplinary approach can intervene to foster an ethical and responsible development and governance of AI.
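Viewed schematically, the pipeline described above can be sketched as an ordered list of stages, each a potential point of intervention. This is a purely illustrative sketch: the stage names paraphrase the text, and the audit hook stands in for any inter-arts or ethics intervention attached to a stage.

```python
# Illustrative sketch of the sociotechnical pipeline described above.
# Stage names paraphrase the text; "audit" represents an intervention
# (ethical review, inter-arts practice) that can attach to any stage,
# ideally in a continuous loop.

PIPELINE_STAGES = [
    "problem framing",           # design of the questions asked / solutions sought
    "data collection",
    "data preparation",          # annotation, labeling
    "data architecture design",
    "model development",
    "governance",                # ethical and normative frameworks
]

def run_pipeline(audit):
    """Apply an intervention (audit) at every stage and collect its findings."""
    findings = {}
    for stage in PIPELINE_STAGES:
        findings[stage] = audit(stage)
    return findings

# Example: flag the stage where annotation subjectivity matters most.
findings = run_pipeline(
    lambda stage: "review annotator diversity" if "preparation" in stage else "ok"
)
```

The point of the sketch is simply that an intervention is not a single checkpoint but something that can, and should, touch every stage from problem framing to governance.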

2.2 AI generates images based on the words of humans

The range of algorithmic applications and models in AI is vast, and the author chose a case study that specifically focuses on deep learning models used to automate image generation. Automated image generation uses deep neural networks trained on large amounts of data consisting of images and corresponding written descriptions (Xu et al., 2018, and references in Goddard et al., 2021). These models, along with the data collection and annotation processes, replicate existing systemic discrimination in society, and in this case, discrimination against women or people who identify as women.
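The training data for such models can be pictured as paired records, each an image plus its caption. The following minimal sketch (with hypothetical file names and captions) shows why any skew in the captions is inherited by the generator: the model learns only the statistical association between words and images that the corpus contains.

```python
from dataclasses import dataclass

@dataclass
class TrainingPair:
    """One record in a text-to-image training set: an image and its caption."""
    image_path: str
    caption: str

# Toy corpus (hypothetical). Models like those cited learn the statistical
# association between caption words and image content, so whatever biases
# the captions carry become part of what the generator can produce.
corpus = [
    TrainingPair("img_001.jpg", "a doctor in a white coat"),
    TrainingPair("img_002.jpg", "a nurse smiling at a patient"),
]

# The model's "world" is bounded by the vocabulary of its captions.
vocabulary = sorted({word for pair in corpus for word in pair.caption.split()})
```

If, across millions of such pairs, “doctor” co-occurs overwhelmingly with images of men, that regularity is what the model reproduces when asked to generate a doctor.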

2.3 Interdisciplinarity to improve the quality of AI algorithms and technologies

First, we note the extensive research of Scheuerman et al., who analyzed 113 machine vision (computer vision) datasets to identify the values that framed the choice of data collected or rejected. Four dominant values were identified: efficiency, fairness, universality, and “algorithmic model improvement”. Other values, by contrast, were neglected or implicitly devalued in favour of the selected ones.

2.4 Inter-arts transdisciplinary research in AI ethics meets emerging scientific directions in machine learning design

This chapter focuses on “inter-arts” practice, an artistic discipline recognized by the Canada Council for the Arts (CCA). The CCA defines inter-arts practice as the exploration or integration of multiple traditional and/or contemporary artistic disciplines, merged in such a way that no single artistic discipline dominates the final result. These transdisciplinary methods intersect the arts with other non-arts disciplines to explore a theme or issue. The author, a legal scholar and inter-arts curator, promotes iterative and participatory research into the social, legal, economic, political and ethical implications of AI, including algorithmic art as a tool. Inter-arts interventions focus on specific issues such as social justice or climate change.

2.5 Pear AI.Art: data collection and participatory rehabilitation of algorithms

Biases are not always reflected in numbers; they can also be reflected in the words we use to describe the world around us (Luccioni and Bengio). In their study, D. Smith et al. concluded that different words are used to describe male and female leaders, and that women are given significantly more negative attributes. For this reason, diversity of perspective in image labeling is essential, because both data collection and annotation are highly subjective processes (Haralabopoulos et al.).
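The kind of skew D. Smith et al. describe can be made concrete with a few lines of counting. The annotations below are invented for illustration, not drawn from their study; the point is only that tallying attributes by group is a simple, auditable first check on an annotation set.

```python
from collections import Counter

# Hypothetical annotations of a set of leader portraits, invented to
# illustrate the finding that women are given more negative attributes.
annotations = [
    ("woman", "abrasive"), ("woman", "emotional"), ("woman", "competent"),
    ("man", "assertive"), ("man", "confident"), ("man", "competent"),
]

# A (toy) lexicon of negative attributes.
NEGATIVE = {"abrasive", "emotional", "arrogant"}

# Count how many negative attributes each group received.
negative_by_group = Counter(
    group for group, attribute in annotations if attribute in NEGATIVE
)
# With this toy data, "woman" receives negative attributes and "man" none.
```

An audit like this does not remove the subjectivity of annotation, but it makes the distribution of that subjectivity visible before the data reaches a model.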

Figure 1-Screenshot of the application PearAI.Art. Illustrations by Audrey Desaulniers. App by Applied Perception Lab team under the direction of Marta Kersten-Oertel, AI Impact Alliance under the supervision of Valentine Goddard. Team members at the end of this text.

3. Best practices in projects integrating art and ethics of AI

Part of the history of generative art lies in the desire to escape the darker side of humanity, rooted in its subjective nature, and in the aspiration to find objective ways to support democratic, transparent and participatory processes of collective communication. Part of the thinking was that if machines could strip art and aesthetic judgment of their subjectivity and imbue them with the transparency and clarity of science, we could achieve clearer communication (Caplan). Contrary to these hopes, in 2021, one need only read the technology news feed to recognize that AI systems are neither neutral nor unbiased, and that machines alone cannot provide the hoped-for impartial communication tool.

3.1 Algorithmic art must be political and contribute to the evolution of AI governance

3.2 Avoiding dystopia and fostering a sense of “agency”

Sommer and Klöckner’s research, based on environmental psychology theory, identified the mechanisms by which engaged art affects the audience. They concluded that artists who care about the impact of their work should move away from depicting issues such as climate change or the impact of AI on human rights in a dystopian way, and instead favour designs that offer the audience solutions. The artworks that most engaged participants highlighted the personal consequences for participants and their own role in the situation. Their research recommends fostering a sense of “empowerment” in the audience.

3.3 We learn best together

An exploratory study on the impact of group immersive learning concluded that immersive art installations and environments promote learning, but that participants learn best when they are in the environment with others (Du Vignaux et al.).

3.4 Inclusion in Design

Good curatorial practice in the design of games, or other forms of artistic intervention, that explore the ethical implications of AI should include the (paid) participation of people underrepresented in AI. For example, the Art Impact AI games (Goddard) were designed by a team of artists from communities underrepresented in AI and allowed for an open dialogue about the implications of facial recognition, recommendation, and decision support systems.

3.5 Get out of institutions, favour public places

The same research concludes that it is best to take art out of institutions and into public spaces, not only to reach a wider audience, but also because it avoids the connotation that art is reserved for a certain elite population (Sommer and Klöckner). Jer Thorp’s book, Living in Data, invites citizens to collect data about themselves, and to allow artists to use that data to, in turn, engage citizens on important social issues. He says that while data visualization can be a powerful tool, the tools and knowledge to use it effectively are not always accessible. For this reason, analog art forms, as well as simple tools like cardboard boxes, can be very effective in expressing the meaning of data.

3.6 Recognize the plurality of knowledge sources in co-construction processes

Capturing data is a way to document our perceptions of a facet of a reality. The results, rendered by a traditional medium or a new AI technology, are a way of co-constructing a documentary. Assuming that the goal of this process is beneficial social change (human rights, sustainable development goals), the recommendations of authors and experts in emerging media emphasize the importance of highlighting and appreciating this plurality of knowledge sources (Auguiste et al., 2020). It is one person’s questions, another’s wonder, an author’s research, a chance lecture, or a painting from another era that inform best practices and ethical frameworks, which evolve through an equitable, iterative process that promotes greater inclusion and diversity of perspectives.

3.7 Authenticity and concrete objectives

Algorithmic art, within a framework of engaged inter-arts practice, is an important tool for challenging societal and automated systems that promote gender, racial, and cultural biases and subsequent systemic discrimination. Therefore, it must foster “a climate in which there is genuine concern for (and a concrete commitment to achieving) full equal rights,” and avoid the “danger that using the law to achieve change” will “focus too much on the (minimal) changes deemed necessary” (H. Smith et al.).

Conclusion

The author hopes to have demonstrated the importance of the arts, not only as a fundamental literacy and civic engagement tool in a digital democracy, but also in data curation and machine learning design, particularly through an inter-arts practice via algorithmic arts. This meeting between an inter-arts practice of AI ethics and the emerging scientific orientations of machine learning design leads us to a transdisciplinary approach that transcends the traditional boundaries and definitions of each of the disciplines involved, and aims beyond the interdisciplinarity between two disciplines, becoming a discipline in itself (Choi and Pak). Let us call this one the inter-arts design of AI for the purposes of this conclusion.

Addendum

These digital art prints created from selected words in the PearAI dataset have allowed for preliminary observations. These words, subjected to the image generation model, in turn raise questions about existing algorithmic models and the datasets on which they still rely.

Figure 2-Some of the words collected by the PearAI.Art application. Cloud by Marta Kersten-Oertel
Figure 3-Image generated by the AttnGAN model using Runway ML when fed the word Vagina. Digital editing, Valentine Goddard.
Figure 4-Image generated by the AttnGAN model on Runway ML using the phrase “Imperfectly beautiful woman”; Digital editing, Valentine Goddard.



Advisory Council of Canada/United Nations expert on AI & Data Policy & Governance; Lawyer/Mediator/Curator; Socioeconomic, legal, political implications of AI.
