
From left to right: Rachel Coldicutt, David Leslie, Rumman Chowdhury, Noura Al Moubayed and Wendy Hall
Royal Society/Debbie Rowe
It’s day two of the Women and the Future of Science conference at the Royal Society in London, but I’m finding it increasingly difficult to concentrate on the speakers because my AI transcription software – which is supposed to make my life easier – keeps misspelling someone’s name. For every mention of a Juliet, it prints Julian. The irony is not lost on me: this is the session about artificial intelligence, and specifically about how women are being erased from the latest AI technologies.
This is much larger than the now-familiar idea that AI algorithms carry the biases of the data sets they are trained on, including gender bias.
Instead, the session, led by computer scientist Wendy Hall, addresses a more fundamental problem: new AI technologies, which will have a transformative effect on the whole of society, are being designed almost exclusively by men.
Technology has always been an overwhelmingly male sector. In the UK, only 25 percent of those studying computer science are women. But in recent years – as generative artificial intelligence has flourished – Silicon Valley has become increasingly misogynistic.
“In the last two years there has been a regression,” says David Leslie, who is responsible for ethics and responsible innovation research at the Alan Turing Institute. That the Trump administration has caused intergenerational harm to women in the sciences is, in his view, undeniable. “We are living through a time of backward thinking.”
Last year, US President Donald Trump issued an executive order targeting so-called woke AI, recommending that the US National Institute of Standards and Technology revise its AI risk management framework to “eliminate references to misinformation, diversity, equity and inclusion and climate change”.
One panelist, Rumman Chowdhury, a computer scientist and former US science envoy for artificial intelligence, led ethics and accountability at Twitter before Elon Musk took over and fired her team. She points out that the concept of woke AI was born of misogynistic attitudes in Silicon Valley well before Trump’s order.
Asked by Hall to describe AI without women, several panelists argue that we’re already there. “I’m in the world of frontier AI, and it’s the world of AI without women,” says Chowdhury. This is a sentiment echoed by Rachel Coldicutt, who researches the social consequences of new and emerging technologies. “If we think about what the world looks like without women in AI, I think that’s what we have at the moment. It’s not fantasy at all.”
It should go without saying: this matters. There is a long history of technologies being developed around men’s bodies and needs, from crash test dummies and office air conditioning to astronauts’ spacesuits and the vast majority of medical research. This is known as the gender data gap, and its consequences range from the annoying to the life-threatening.
AI will affect everything from the jobs we do to the way we educate our children and the diseases we can treat. Yet currently only 2 percent of venture capital funding goes to women, Chowdhury points out, while less than 1 percent of health research and innovation funding goes to women’s health. “We need to make technology work for 8 billion people, not eight billionaires,” says Coldicutt.
What should be done? With hundreds of years of biased data baked into today’s AI models, Coldicutt doesn’t think it will be possible to correct them. “We need alternative models,” she says. This is also a chance to shift the focus to what these models do. “It’s about cultivating models … that prioritize caring for people, for the planet.”
Chowdhury, who founded Humane Intelligence, a nonprofit that helps companies make AI systems more accountable and fair, believes part of the problem is that many of today’s AI developments are built around a false sense of urgency, fixated on the existential risk AI poses to jobs or even humanity. If the narrative is that your house is on fire, “you’re not like, ‘What happened to your mom’s jewelry?’” she says. When people feel they don’t have time, they drop anything that feels peripheral, including diversity.
As for the next generation, we need to address the economic and political framework through which AI is developed if we are going to encourage young people to develop AI for the social good, says Leslie: “We need to start with the basics, start with transforming the incentives.”
Ultimately, we may need to rethink the very definition of intelligence in the context of AI to include broader, more diverse ways of thinking. Much of the original thinking about AI, including how to define it, came out of an influential 1956 meeting at Dartmouth College in New Hampshire. “This definition of intelligence comes out of the Dartmouth conference,” says Hall. “Which, by the way, were all men.”