Algorithmic Suppression: Is LinkedIn's Algorithm Biased Against Women? A Legal and Technical Analysis
- Martyn Redstone
- Oct 28
- 4 min read
Updated: Oct 29
A few weeks ago, a simple experiment on LinkedIn sparked a critical conversation.
It began with a post by Dorothy Dalton, building on an experiment by Jane Evans and Cindy Gallop. They tested what would happen if men and women posted identical content at the same time. The results were jarring: in one test, two male participants with a combined following of ~9,400 saw their posts achieve significantly more reach than two female participants with a combined following of over 154,000.
This raised a question that goes far beyond just "culture or code." Is the platform algorithmically suppressing women's voices?
The answer is that the claim is highly plausible. The cause is likely not direct, intentional discrimination, but a more insidious problem: proxy bias. And this isn't just a technical flaw; it's a significant liability risk under new EU regulations and existing UK law.
The "How": From "Gender Bias" to "Proxy Bias"
The algorithm isn't coded to IF (gender == 'female') THEN (demote_post). That's not how this works.
Instead, the algorithm is coded to IF (content == 'high-quality professional') THEN (promote_post).
The problem is how the machine learned to define "high-quality professional." It learned from historical data, and that data reflects a world of existing, systemic, and often unconscious biases. The algorithm has, in effect, learned a narrow, historically male-centric model of what "professional" looks like.
This is proxy bias: the algorithm isn't penalizing the gender; it's penalizing neutral characteristics that are correlated with gender.
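To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (synthetic data, scikit-learn, every number invented). Gender is never given to the model, yet because the "neutral" features it learns from are correlated with gender in the historical engagement data, a gendered gap in scores emerges anyway.

```python
# Hypothetical illustration of proxy bias. Synthetic data only --
# this is NOT LinkedIn's actual model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (never shown to the model): 1 = woman, 0 = man.
is_woman = rng.integers(0, 2, n)

# "Neutral" proxy features that happen to correlate with gender in the data,
# e.g. posting about DEI/culture topics or writing in a communal style.
soft_topic = rng.binomial(1, np.where(is_woman == 1, 0.6, 0.3))
communal_style = rng.binomial(1, np.where(is_woman == 1, 0.7, 0.4))

# Historical engagement labels reflect past audience bias: posts with these
# characteristics received less engagement, whoever wrote them.
logit = 0.8 - 1.2 * soft_topic - 0.9 * communal_style
engaged = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Train the "promote this post?" model. Gender is NOT a feature.
X = np.column_stack([soft_topic, communal_style])
model = LogisticRegression().fit(X, engaged)

scores = model.predict_proba(X)[:, 1]
print("Mean promote score, men:  ", round(scores[is_woman == 0].mean(), 3))
print("Mean promote score, women:", round(scores[is_woman == 1].mean(), 3))
# A clear gap appears even though the model was never told anyone's gender:
# it penalises the proxies, and the proxies track gender.
```

The toy model does exactly what it was asked to do, predict historical engagement, and in doing so reproduces the historical bias baked into that engagement.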
My research identified three likely proxies for this bias:
Topic Bias: The algorithm may be trained to favor "hard" business topics (e.g., tech, finance, sales) over "soft" topics (e.g., Diversity, Equity, and Inclusion, workplace culture, harassment, or burnout) that, while critical, are more frequently discussed by women.
Language Bias: Research shows that professional language is heavily gendered. Men are often described with "agentic" words ("driven," "strategic," "leader"), while women are described with "communal" words ("collaborative," "supportive," "helpful"). If the algorithm has learned that "agentic" language equals "authority," it will systematically down-rank content that uses a "communal" style.
Data Bias: The algorithm may penalize career patterns that don't fit a traditional, linear model. This includes career breaks, which are taken far more frequently by women, often for caregiving.
The outcome is discriminatory, even if the intent is neutral.
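The language proxy is the easiest to make concrete. Below is a deliberately crude, hypothetical illustration: the word lists are invented for this post, not drawn from any real system, but they show how a learned "authority" feature built on agentic vocabulary would quietly down-rank a communal writing style.

```python
import re

# Toy illustration of a language proxy. The word lists and feature are
# invented for illustration; they are not from any real ranking system.
AGENTIC = {"driven", "strategic", "leader", "decisive", "results"}
COMMUNAL = {"collaborative", "supportive", "helpful", "team", "together"}

def authority_feature(post: str) -> int:
    """Hypothetical learned feature: agentic word count minus communal word count."""
    words = re.findall(r"[a-z]+", post.lower())
    return sum(w in AGENTIC for w in words) - sum(w in COMMUNAL for w in words)

post_a = "Driven, strategic leader sharing decisive results"
post_b = "Supportive, collaborative reflections on growing the team together"

print(authority_feature(post_a))  # 5  -> treated as "authoritative"
print(authority_feature(post_b))  # -4 -> quietly down-ranked, despite equal substance
```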
The Law: Where "Bias" Becomes a Legal Liability
The original conversation correctly identified that this has legal implications. Our research found that the legal frameworks in the UK and EU are robust and applicable.
1. The UK's Equality Act 2010
One commenter, Susannah Walker, was spot on. The UK Equality Act 2010 is immediately applicable. The law is "technology-neutral"—it doesn't care if a human or an algorithm made the decision.
The relevant concept is "indirect discrimination." This is when a seemingly neutral policy (the algorithm's ranking logic) is applied to everyone but puts a group with a protected characteristic (sex) at a particular disadvantage.
A claimant wouldn't need to prove how the algorithm works, only that it produces a discriminatory outcome. The burden of proof would then shift to the platform to prove its algorithm is a "proportionate means of achieving a legitimate aim."
2. The EU's Dual Approach: The AI Act vs. The Digital Services Act
This is the most critical finding. Dorothy's question about the EU AI Act was exactly right in spirit, but it led to a more nuanced discovery. The EU is tackling this from two different angles.
The EU AI Act (For Hiring): The AI Act applies to "high-risk" systems. As Dorothy suspected, this absolutely covers LinkedIn's hiring tools: the applicant-tracking ("ATS") functions, like LinkedIn Recruiter, that sort, rank, and recommend candidates. For these tools, LinkedIn must demonstrate that its training data is non-discriminatory and that the system operates under robust human oversight.
The Digital Services Act, or DSA (For the Content Feed): This is the law that actually governs the newsfeed. The "suppression" problem falls directly under the DSA. LinkedIn is a "Very Large Online Platform" (VLOP) and is legally required to assess and mitigate "systemic risks to fundamental rights." Gender discrimination is a textbook example of such a risk.
This means LinkedIn is facing legal requirements for fairness on two fronts: the AI Act for its paid recruiting products and the DSA for its public content feed.
The "Smoking Gun": A Problem of Priority, Not Capability
This may not even be an unsolved problem for LinkedIn. The most powerful finding from our research is the stark contrast in the company's own actions.
For its Recruiter Tool (the "AI Act" problem): Our research found that LinkedIn has published detailed, peer-reviewed, academic-level papers on how it measures and fixes gender bias. It has developed "fairness-aware re-ranking" frameworks (a simplified sketch of the idea appears below). This proves it has the technical capability to solve this.
For its Content Feed (the "DSA" problem): In contrast, LinkedIn's public statements about the feed (like its "Mythbusting" series) are high-level, non-technical, and offer no such evidence of a fairness framework being applied.
This disparity is the story. The issue doesn't appear to be a lack of capability on LinkedIn's part—they are an industry leader in this. It appears to be a lack of priority and transparency in applying those same fairness principles to the public content feed.
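For context, here is a heavily simplified sketch of what a fairness-aware re-ranking step can look like: a greedy pass that keeps each group's share of the results close to a target distribution. It is written in the spirit of the published approaches referenced above, not as LinkedIn's production code, and every name and number in it is illustrative.

```python
# Simplified, hypothetical sketch of fairness-aware re-ranking: greedily pick
# the best-scored candidate from whichever group is furthest below its target
# share of the results so far. Not LinkedIn's production code.
from collections import defaultdict

def fair_rerank(items, target_share, k):
    """items: list of (score, group); target_share: {group: desired fraction}."""
    pools = defaultdict(list)
    for score, group in sorted(items, reverse=True):
        pools[group].append((score, group))       # each pool stays score-sorted

    ranked, counts = [], defaultdict(int)
    while len(ranked) < k and any(pools.values()):
        # Deficit = how far each group sits below its target share so far.
        def deficit(g):
            return target_share.get(g, 0) * (len(ranked) + 1) - counts[g]
        # Among groups with remaining candidates, favour the biggest deficit,
        # breaking ties by the candidate's relevance score.
        g = max((g for g in pools if pools[g]),
                key=lambda g: (deficit(g), pools[g][0][0]))
        item = pools[g].pop(0)
        ranked.append(item)
        counts[g] += 1
    return ranked

# Usage: biased scores, but a 50/50 target share in the top 4.
candidates = [(0.95, "m"), (0.93, "m"), (0.91, "m"), (0.90, "f"),
              (0.88, "f"), (0.85, "m"), (0.80, "f")]
print(fair_rerank(candidates, {"m": 0.5, "f": 0.5}, k=4))
```

On the toy candidates, the re-ranker returns a 50/50 top four where a pure score sort would return three men and one woman. The point is simply that this kind of correction is well understood and, by LinkedIn's own account, already applied in recruiting; extending it to the feed looks like a question of priority rather than research.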
From Conversation to Accountability
The anecdotes that started this conversation are not just "feelings"; they are plausible indicators of a real, systemic problem.
The original question was "culture or code." The answer, it seems, is that the code has learned our biased culture.
Now, new laws like the DSA in the EU and established laws like the UK's Equality Act provide the legal power to demand better. This "algorithmic suppression" is no longer just a community concern. It is a core compliance issue.
The question for LinkedIn is no longer if they can address this, but when they will—and whether regulators will use their new powers to make them.
