Diverse Voices and Ownership in AI Are Vital for the Arts

08.11.23

John Holmes was previously Marketing Director for the Orchestra of the Age of Enlightenment and Joint CEO of inclusive theatre company Spare Tyre. He is now a freelance consultant helping organisations rejuvenate their audience development strategies.

This post is the second in a series on AI in the arts. Check out the first, third and fourth posts! 

Sam Altman, Geoffrey Hinton, Elon Musk talking to the Prime Minister about the end of work… If you hear someone talking about AI in the media, it’s usually a rich white man. That’s not a coincidence. Silicon Valley is notorious for its lack of diversity. A review of 177 large tech companies by the Center for Employment Equity found that only 20.8% of executives were women, and that global majority women are “vanishingly rare in the executive ranks”.

This lack of diversity matters. It’s making a real-world difference to how generative AI platforms (ChatGPT, DALL-E and the like) affect society because - as with social media before it - profit is prioritised over impact on people, and the livelihoods of artists and creative practitioners are put at risk.

It takes vast resources to create and run a generative AI system such as ChatGPT. The water consumption alone is staggering - a University of California study estimates that every ChatGPT ‘conversation’ of 20-50 prompts consumes the equivalent of a 500ml bottle of water. The resources required mean very few can raise the capital to build a large language model of GPT’s size. Who can? Tech companies with access to venture capital - usually run, of course, by rich white men. And it’s reasonable to deduce from the behaviour of Facebook, Amazon, Google, Uber et al that concern for the consequences of their actions isn’t the top priority when it comes to decision making.

Various choices made in generative AI rollouts reflect Big Tech’s famous ‘move fast and break things’ mentality. These choices are the antithesis of the ideas around inclusion, anti-racism, anti-ableism and climate justice that are important to us in the arts, as well as of much AI research practice away from Silicon Valley.

Here are some examples. Firstly, the training data for generative AI includes the works of human writers and artists without their permission, and without credit or recompense. Authors including George R.R. Martin, Jodi Picoult and John Grisham are now suing ChatGPT’s creator, OpenAI, for this very reason. Secondly, because the training data is basically ‘the internet’, generative AI is prone to producing text and imagery that reflect stereotypes and biases. Thirdly, the very human job of filtering toxic inputs, such as violent language, was outsourced by OpenAI to low-wage workers in the global south. None of this was inevitable. These choices were made because of the specific corporate culture and ideology that pervades Silicon Valley.

Workers, scientists and scholars calling for a more ethical rollout of AI are often ignored by Big Tech, or sometimes deliberately silenced - especially women and people from the global majority. Computational linguistics expert Professor Emily Bender created a bingo card to mark every time ‘Godfather of AI’ Geoffrey Hinton cited anyone other than a man in a recent keynote speech; by the end of the speech it was still blank. The culture at Google is well documented - scientists Timnit Gebru and Margaret Mitchell were fired after advocating for more caution regarding large language models, particularly around their potential impact on minority communities.

What can the arts do about all this, especially as most of us don’t have Sam Altman on speed dial? A simple first step is to be wary of statements by Silicon Valley CEOs and to seek out views on AI from a variety of sources.

A good place to start is 404 Media, a new independent media company launched by respected tech journalists. You could also explore the work of Timnit Gebru’s Distributed AI Research Institute, sustainability expert Sasha Luccioni and Melanie Mitchell (author of Artificial Intelligence: A Guide for Thinking Humans) for a counterpoint to Silicon Valley boosterism.

Just as many have tried Mastodon as an alternative to Twitter, you might try generative AI tools developed along different principles. Hugging Face led the development of BLOOM, a large language model built on principles including data transparency and clear recommendations for avoiding misuse. As a decentralised collaboration among more than 1,000 researchers, its ethos is the very opposite of ‘move fast and break things’.
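For the technically curious, BLOOM is openly available to experiment with. Here is a minimal sketch of what that might look like using Hugging Face’s open-source transformers library. It uses the small, publicly released 560M-parameter checkpoint (bigscience/bloom-560m), since the full model is far too large for most personal machines, and the prompt is just an illustration:

```python
# A minimal sketch: generating text with a small BLOOM checkpoint via the
# Hugging Face `transformers` library. The model is downloaded from the
# Hugging Face Hub on first run (roughly 1GB for this variant).
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")

result = generator(
    "The role of artists in shaping responsible AI is",
    max_new_tokens=40,   # cap the length of the continuation
    do_sample=True,      # sample rather than always pick the likeliest token
    temperature=0.8,     # a little randomness for more varied output
)
print(result[0]["generated_text"])
```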

Our creative industries’ bodies and unions should lobby for legislation so that AI enhances - rather than threatens - artists’ livelihoods: for example, requiring that training data be published openly and licensed, and that artists receive fair pay when their work is used in this way.

BRAID - a new research programme to connect the arts and humanities with Responsible AI - is an exciting development right here in the UK. In their first blog, Co-Directors Shannon Vallor and Ewa Luger argue artists (alongside historians and philosophers) have an important role "to develop the unique forms of knowledge we need to make AI safe and beneficial". One of BRAID’s aims is to connect diverse knowledge with industry and policy-makers, so that “responsibility is not the last, but the first thought in AI innovation.”

This spirit - where people with diverse experience collaborate on AI projects - can have brilliant, inclusive outcomes. For example, Sinfonia Viva and the University of Nottingham have collaborated on Jess+, a project using AI to support disabled and non-disabled musicians to improvise together.

When I talk with colleagues in the arts about AI, there’s usually a sense of resignation. “We need to be ready for AI to take over”, “It’s not going to take our jobs - yet”. This fatalism shows just how well the big tech PR machine works. It’s as if it’s entirely natural that the elite with the resources to build new technologies should get to dictate how they are deployed. Let’s not let this be the end of the conversation.

There is hope too. In September, the Writers Guild of America (WGA) ended a 148-day strike with a landmark deal described in the Guardian as "a ‘smart’ deal that allows for artificial intelligence as a tool, not a replacement – and could be a model for other industries". This success shows the role alternative and collective voices have to play in making sure that AI technologies are ultimately deployed for wide benefit, not just the profit of a wealthy few.

Want to know how AI could help your marketing campaigns? Contact us at Mobius!

If you'd like to keep up to date with all our blog posts, important and interesting stories in the worlds of theatre, arts and media, plus job ads and opportunities from our industry friends, sign up to our daily media briefing at this link.
