It’s no secret that women have been historically neglected and undermined in the workplace. They often have to work twice as hard to get paid less, while men can take credit for their work. Women have never had it easy. And while strides have been made in closing these gaps, the emergence of artificial intelligence has created a new gap altogether.
As working women watch AI adoption accelerate across sectors, many remain far more skeptical than men about using it. Though understandable, this hesitancy may come at the cost of widening the existing gender gap. As industries like engineering, banking, and insurance begin prioritizing AI literacy as a key skill, their female employees risk being left behind.
On average, women are 25% less likely to use AI than men. This is hardly surprising when we consider the harsh scrutiny they already experience compared to their male counterparts. A study published by the Harvard Business Review in 2025 examined how perceptions of competence changed when engineers had used AI. When an engineer was believed to have used AI for their code, they were judged to be less competent. Moreover, this penalty was twice as harsh for female engineers as for male ones. These results make it clear that relying on AI tools can call one's professional ability into question, especially for women in the workforce.
Generative AI also has a disproportionately harmful impact on women outside of the workplace. They are far more likely than men to be victims of AI-enabled abuse, defamatory image generation, and mishandling of private information. Tools like Grok, as well as “nudification” apps, have often been used to generate naked images of women without their knowledge. The use of deepfakes is overwhelmingly at the expense of women, with an estimated 90% of nonconsensual deepfake victims being female. Women tend to have greater overall concerns about the ethical issues with AI and are more likely to consider it useless or even harmful in their public, work, and personal lives.
Women have always been underrepresented in the samples used to develop new products, often without regard for their safety, as is the case with seatbelts. Because many are designed for the average male build and sitting posture, seatbelts are often less effective for female drivers and increase their risk of death on the roads. Generative AI training functions much the same way as the development of any other product: it relies on its input data to produce its output. When that input data skews towards men—as it seems to—the output will simply mirror their preferences and perspectives. It doesn’t matter how much we praise AI tools for their objectivity when their training data is already poisoned by biased social norms.
And this is hardly a hypothetical problem: Generative AI tools have already been shown to reproduce social biases, often at the expense of women’s safety and comfort. In an article published by the MIT Technology Review, senior reporter Melissa Heikkilä recounted her experiences using Lensa AI to create an avatar for herself. As an Asian woman, she was subjected to far more explicit images of herself compared to the “realistic yet flattering avatars” of her colleagues. Lensa relies on a “massive open-source data set” which compiles prompts and images gathered from all over the internet. Naturally, this data set is overpopulated with hypersexualized depictions of Asian women. AI is hardly the objective, intelligent technology that it was promised to be.
Image generation is not the only issue. Aleksandra Sorokovikova and other scholars from universities across Europe reported similar results for large language models. When the researchers gave ChatGPT a user’s race, gender, and age and asked it to produce salary negotiation strategies, the model suggested lower salaries for women. Given that LLM adoption is rising rapidly in industry today, this isn’t just a harmless mistake. It reveals how AI can perpetuate the very biases it is supposed to be immune to.
As companies continue to incorporate AI into their recruitment processes, the impact of these biases becomes even more apparent and detrimental. In 2018, Amazon’s AI recruitment system was revealed to penalize resumes that used the word “women’s” and to generally characterize male candidates as more desirable. While the penalizing of specific words was eventually resolved, it’s hard to ignore that implicit biases ingrained in the system created the problem in the first place.
Closing the gender gap is a two-way street when it comes to companies and their employees. If AI is going to shape the workforce, half of our workers cannot be neglected. Women will have to push past their fears to contribute their perspectives so these tools can be further developed with them in mind. It’s also the responsibility of developers to confront these biases at the source. By diversifying training data, they can ensure AI tools support the experiences of all users rather than a select few.
Meanwhile, companies must support their female employees by addressing their concerns head-on. Replacing vague policies surrounding AI with clear ones would ease many women’s fears of being penalized for using it. Companies can also address safety and bias concerns by increasing transparency between the user and the system, keeping women informed and reassured that their personal information is being used properly.
AI certainly isn’t going anywhere, which makes addressing its fundamental flaws all the more pressing. It is easy to understand why women haven’t embraced these tools as readily as their male counterparts. However, it may only be a matter of time before they are left behind in their respective industries for lacking AI literacy, handing male workers an even bigger advantage than they already have. All parties can make strides to close the gap and ensure that both companies and their female employees reap the benefits of AI.