A recent social experiment has sparked serious questions about LinkedIn’s content algorithm. Women on the platform temporarily changed their profile gender to male and saw dramatic spikes in post visibility. The test, called #WearthePants, suggests a potential bias in the AI systems that decide what content users see. According to a report by TechCrunch, one founder saw her impressions jump 238% within a single day of making the switch.
The experiment began after users had complained for months about falling engagement. Many suspected LinkedIn’s recently deployed large language model (LLM) systems were at fault. Professional women with large followings reported their posts were being seen by far fewer people than identical content posted by men with smaller networks.
Testing the “Bro-Coded” Algorithm
Entrepreneurs Cindy Gallop and Jane Evans initiated the #WearthePants test. They asked male colleagues to post the same content they did. The results were stark. Gallop’s post reached only 801 people. The identical post by a man reached over 10,400 people, exceeding his entire follower count. This prompted dozens of other women to conduct their own tests.
One strategist, identified as Michelle, told TechCrunch she changed her profile to “Michael” for a week. She also adopted a more direct writing style. Her impressions surged 200% and engagements rose 27%. She concluded the algorithm seemed to devalue communication styles often associated with women. Data ethics consultant Brandeis Marshall explained that AI models can inherit societal biases from their training data.
LinkedIn firmly denies using demographic data to rank content. A company spokesperson stated its systems do not use age, race, or gender as signals for visibility. The company attributes changes in reach to increased competition, with posting and comment activity rising significantly year-over-year.
Widespread Confusion Over AI-Driven Changes
The controversy points to a broader issue: user distrust of opaque algorithmic systems. Many professionals, regardless of gender, report confusion and frustration with LinkedIn’s latest updates. A data scientist mentioned her consistent posting now yields only a few hundred impressions, down from thousands. This drop is demotivating for creators who built loyal audiences.
However, some users report success. One man said his reach increased over 100% by writing on specific topics for targeted audiences. LinkedIn confirms the algorithm now prioritizes professional insights, industry analysis, and educational content. The shift appears to reward clarity and value over frequent posting or perfect timing.
Experts say the issue is complex: algorithmic bias is often implicit rather than explicit. Instead of using gender directly, the AI may amplify existing engagement patterns on the platform, unintentionally sidelining certain voices or topics. The core demand from users is greater transparency. Yet platforms like LinkedIn are unlikely to reveal their ranking formulas, fearing people will game the system.
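For readers curious how such implicit amplification can happen, the toy Python simulation below sketches the feedback loop experts describe. It is not LinkedIn’s system: the group labels, numbers, and “winner-take-more” boost rule are invented for illustration. Because the ranker learns only from the engagement its own choices generate, a small historical skew between two equally good groups of posts widens with each round.

```python
# Purely hypothetical sketch of an engagement feedback loop. A ranker scores
# posts only by each group's share of past engagement, and the feed gives
# more impressions to higher-scored posts. Both groups produce equally good
# content, yet a small historical skew grows over time.
# All numbers and the boost rule are invented; this is not LinkedIn's system.

TRUE_QUALITY = {"A": 0.5, "B": 0.5}      # both groups' posts are equally engaging
history_share = {"A": 0.55, "B": 0.45}   # but past engagement data is slightly skewed

def impressions_per_post(group):
    """Winner-take-more distribution: impressions rise faster than the score."""
    return int(200 * history_share[group] ** 2)

def simulate(rounds=5, posts_per_group=1000):
    for r in range(1, rounds + 1):
        shown, clicks = {}, {}
        for group in ("A", "B"):
            shown[group] = impressions_per_post(group) * posts_per_group
            # Expected clicks depend only on true quality, not on the group label.
            clicks[group] = shown[group] * TRUE_QUALITY[group]
        # The ranker retrains on the engagement it just produced, so the
        # initial skew feeds back into the next round's scores.
        total = clicks["A"] + clicks["B"]
        for group in ("A", "B"):
            history_share[group] = clicks[group] / total
        print(f"round {r}: impressions A={shown['A']}, B={shown['B']}, "
              f"learned share A={history_share['A']:.2f}, B={history_share['B']:.2f}")

simulate()
```

Run as written, the gap between the two groups grows every round even though no group label ever enters the scoring function, which is the shape of the implicit bias researchers warn about.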
The ongoing debate over the LinkedIn algorithm bias highlights a critical tension in professional networking. As AI takes a greater role in curating our feeds, understanding its unseen influence becomes essential for equity.
Frequently Asked Questions
What is the #WearthePants experiment?
It was a social test where women on LinkedIn changed their profile gender to male. They wanted to see if the platform’s algorithm was suppressing content from female users. Many participants reported a large increase in post views after making the change.
How has LinkedIn responded to the bias allegations?
LinkedIn states its algorithm does not use gender, age, or race to rank content. The company says it tests millions of posts to ensure fairness. It attributes changes in reach to more users posting, creating greater competition for attention in feeds.
What kind of content does LinkedIn’s new algorithm favor?
According to LinkedIn, posts about professional insights and career lessons perform well. Industry news analysis and educational content about work and the economy are also prioritized. The system aims to surface the most relevant and valuable career-focused material.
Could writing style be a factor instead of gender?
Some experts and users believe so. The AI may reward concise, direct communication styles historically associated with male professionals. If a model is trained on data reflecting this bias, it could inadvertently disadvantage other writing styles.
Are men also experiencing drops in engagement?
Yes. Reports indicate many users, regardless of gender, have seen lower post visibility recently. This suggests the algorithm change is affecting a wide swath of users, though the #WearthePants test indicates the impact may not be evenly distributed.