Generative AI and Gender Bias
On gender bias in Generative AI models, fairness assessment, and being a female leader in the tech industry. Special post for International Women’s Day 2024.
Dedicated to the many talented women I have the honor of working with,
and to my beloved girls, who continue to inspire me every day.
Today, the world marks International Women’s Day.
Nothing to celebrate, really. In an era when rape is used as an act of war and some women’s rights organizations blame the victims; when women are kidnapped by terrorists and the world remains silent; when regimes that diminish women grow in power and cultures that discriminate against women gain popularity and mindshare, we hardly have a reason to celebrate International Women's Day.
It is, however, an opportunity to reflect on what it’s like to be a woman in tech and how gender bias presents itself in AI models.
Women in Tech
Women are under-represented in the tech industry globally, especially at the more senior levels.
Being a woman in the tech industry, you learn to grow a thick skin. You learn not to let it get to you when men look down on you, try to dismiss your opinion, judge you by your looks, or minimize your achievements. You learn to ignore all of that and earn respect through your hard work and professionalism.
It does not get easier as you become more senior.
As a female senior executive in the tech industry, I am still sometimes the only woman in the meeting room, which can make it hard to get into the social circle.
And people tend to criticize female leaders much more than male leaders.
An HBR study demonstrated the criticism and the double standards tied to norms of leadership. Another study indicated that female leaders face a unique challenge: they need to balance the expectation of being warm and nice, as traditionally expected of women, with being competent and tough, as traditionally expected of leaders.
In other words, in cases where a man is perceived as a strong, tough leader, a woman with similar attributes might get harshly criticized.
Don’t let those double standards dishearten you, ladies.
Keeping a professional approach despite all of that requires a ton of grit and a true passion for making a difference. Having a supportive employer that invests in diversity & inclusion certainly helps. But that stoic attitude of mine took years to evolve.
My take on it: Keep Calm and Be a Pro.
The Gender Bias
Despite advancements, some generative AI models still demonstrate gender- and sexuality-based bias. Examples include associating feminine names with traditional gender roles, making assumptions based on sexual orientation, and assigning stereotyped professions based on gender and ethnicity. The problem is that this bias feeds into decision making and entrenches inequalities.
Let’s look at one example that illustrates gender bias, one that surfaced in my earlier blog post on Generative AI’s impact on the film industry. I asked the model to create a poster for a hypothetical film about the White House clinical trials story. In the prompt to DALL-E for image generation, I described each character and made it clear that two of the three computer scientists in the story were female; the model nevertheless kept coming up with male scientists.
But generative AI models learn from seeing examples, in this case from real-world data across the web. And just like a child who doesn’t see enough examples of female scientists and leaders as role models, how can we expect an AI model to learn a pattern if it sees only very few examples of it?
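You can observe this skew with a quick probe: ask a model for short bios across a few professions and tally which gendered pronouns it picks. Below is a minimal sketch in Python, assuming the OpenAI Python SDK (openai>=1.0) with an API key in the environment; the model name, profession list, and prompt are illustrative, and a serious audit would sample each profession many times.

```python
# A quick pronoun-count bias probe. Assumes the OpenAI Python SDK (openai>=1.0)
# and an OPENAI_API_KEY in the environment; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

professions = ["computer scientist", "nurse", "CEO", "kindergarten teacher"]
for job in professions:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works here
        messages=[{"role": "user",
                   "content": f"Write a one-sentence bio of a {job}."}],
    )
    text = resp.choices[0].message.content.lower()
    words = text.replace(".", " ").replace(",", " ").split()
    he_count = sum(w in {"he", "him", "his"} for w in words)
    she_count = sum(w in {"she", "her", "hers"} for w in words)
    print(f"{job:22s} he/him/his: {he_count}   she/her/hers: {she_count}")
```

Repeated over many samples, a probe like this gives a rough picture of which professions a model tends to default to one gender for.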
Fairness Assessment
Fairness Assessment is a process that evaluates the fairness of a system, model, or service and helps surface additional data needs. It is typically performed by identifying the factors and/or demographic groups we want to test the model for, and then comparing the model’s performance on data associated with one group against its performance on the other groups. A fairness criterion could be, for example, that the model’s performance on data from female patients should not differ from its performance on data from male patients in a statistically significant way.
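To make that criterion concrete, here is a minimal sketch in Python of one way such a check could look: a two-proportion z-test on per-group accuracy. All counts are made up, and this illustrates the statistical idea only; it is not the actual assessment process behind any particular product.

```python
# A minimal sketch of one possible fairness check: a two-proportion z-test on
# per-group accuracy. All counts below are made up for illustration.
import numpy as np
from scipy import stats

def accuracy_gap_test(correct_a, n_a, correct_b, n_b, alpha=0.05):
    """Return the accuracy gap between groups A and B, the two-sided p-value,
    and whether the gap is statistically significant at level alpha."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    p_pool = (correct_a + correct_b) / (n_a + n_b)           # pooled accuracy
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * stats.norm.sf(abs(z))                      # two-sided test
    return p_a - p_b, p_value, p_value < alpha

# Hypothetical counts: correct predictions out of total, per patient group
gap, p, significant = accuracy_gap_test(correct_a=870, n_a=1000,   # female patients
                                        correct_b=905, n_b=1000)   # male patients
print(f"accuracy gap = {gap:+.3f}, p-value = {p:.4f}, significant = {significant}")
```

If the p-value falls below the chosen significance level, the gap between the groups is unlikely to be due to chance, and the assessment would flag a potential fairness issue, typically prompting a closer look at the data for the under-performing group.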
Fairness Assessment is one of the required processes we apply to all AI models we build, to comply with our Responsible AI standards.
And back to the physical world
So what can be done about all of the above?
We need to show our younger generation more examples of female leaders and their accomplishments, so that they have role models. I’m talking about real leaders: professionals and subject matter experts, rather than social media influencers and activists. Women like Sheryl Sandberg, Professor Ada Yonath, Amy Hood, Mira Murati, Professor Dina Ben-Yehuda, and Lt. Col. Or Ben-Yehuda, who happens to be her daughter, are some good examples.
We need to make sure the training data we provide to AI models is balanced and includes enough representation of different demographic groups. Yet we need to be careful not to create bias in the other direction, as was recently observed with the Gemini model.
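As a sketch of what checking that balance could look like in practice, here is a quick Python audit that reports each demographic group’s share of the training records and flags groups below a chosen threshold; the records, field name, and threshold are all hypothetical.

```python
# A minimal sketch of a representation audit over annotated training records.
# The records, field name, and threshold are hypothetical.
from collections import Counter

def representation_report(records, group_key, min_share=0.2):
    """Return each group's share of the data, flagging under-represented groups."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    return {group: (count / total, count / total < min_share)
            for group, count in counts.items()}

data = [{"gender": "male"}, {"gender": "male"}, {"gender": "female"},
        {"gender": "male"}, {"gender": "male"}, {"gender": "male"}]
for group, (share, flagged) in representation_report(data, "gender").items():
    print(f"{group}: {share:.0%}" + ("  <- under-represented" if flagged else ""))
```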
As a society, we need to support and enable women in tech. Showing a clear preference for employers that promote diversity & inclusion would help make this the new norm. Women should continue to support other women in the workplace, be allies to one another, and call out BS.
And we need to encourage women to dare, and overcome their self-doubt.
Wishing us all better days, when we can truly celebrate International Women’s Day.
About me: Real person. Opinions are my own. Blog posts are not generated by AI.
See more here.
About Verge of Singularity.
LinkedIn: https://www.linkedin.com/in/hadas-bitran/
X: @hadasbitran
Instagram: @hadasbitran