On January 10th, Elon Musk announced that X's algorithm will be open-sourced within seven days. All the code that determines which posts and ads you see. Updates every four weeks with developer notes. Full transparency.

My first reaction was a tired "mhm". We've heard this before. This exact thing, actually. Almost word for word.
But then I started thinking about why he's saying it right now. And the context shifted my cynicism a little. Not completely. But a little.
We have, in fact, heard this before
When Musk bought Twitter in 2022, "open-sourcing the algorithm" was one of his main promises. Transparency. Trust. No secret manipulation.
In March 2023, they actually released code on GitHub. The ranking logic behind the "For You" feed went public. Musk tweeted that this was "the most transparent system on the internet".
The problem? The code told us almost nothing.
They released the structure, but not the weights. You could see that the algorithm had a factor for "engagement", but not how much it mattered. Like getting a cake recipe without proportions: "flour, eggs, sugar" is technically correct but practically useless.
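To make the cake-recipe problem concrete, here's a minimal sketch of what "structure without weights" looks like. The factor names and the scoring function are my invention, not X's actual code:

```python
from dataclasses import dataclass

@dataclass
class Post:
    engagement_signal: float
    recency_signal: float
    author_affinity: float

def score_post(post: Post, weights: dict[str, float]) -> float:
    # The "structure": you can see which factors exist and how they combine.
    return (
        weights["engagement"] * post.engagement_signal
        + weights["recency"] * post.recency_signal
        + weights["author_affinity"] * post.author_affinity
    )

# The part the 2023 release left out: the actual numbers. Without them
# you can't tell whether engagement counts ten times more than recency
# or ten times less. These placeholders stand in for unpublished values.
weights = {"engagement": 1.0, "recency": 1.0, "author_affinity": 1.0}

post = Post(engagement_signal=0.8, recency_signal=0.3, author_affinity=0.5)
print(score_post(post, weights))  # 1.6, but only because the weights are made up
```

You could read every line of something like this and still know nothing about what actually gets promoted.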
Worse: the trust and safety classifiers were missing entirely. The machine-learning models that decide what counts as "borderline content", the stuff that gets quietly downranked instead of removed? No code. No insight. The stated explanation was concern about people "gaming the system".
And then the code was never updated. It sat on GitHub getting stale while the real algorithm kept evolving in private.
So yes. We've heard this.
But the context is different now
Here's where it gets interesting. Look at what's happened recently:
In July 2025, French prosecutors opened an investigation into X for suspected algorithmic bias and unauthorized data extraction. Musk called it a "politically motivated criminal investigation" and refused to cooperate.
The EU fined X 120 million euros for violating the Digital Services Act. Specifically: lack of transparency in the ad repository, the misleading blue checkmark system, and refusal to give researchers access to platform data.
In early January, the European Commission extended a "retention order" requiring X to preserve data related to algorithms and the spread of illegal content. The extension runs through the end of 2026.
And then, three days later: "We're making the algorithm open source within seven days."
The timing is... interesting.
Cynical reading vs generous reading
The cynical interpretation is obvious: this is damage control. The EU is pushing. France is investigating. Fines are accumulating. Suddenly transparency matters again.
If X releases the code voluntarily, they can argue they're already transparent. Why do we need regulations when we're showing everything openly? It's a chess move, not a values decision.
But here's the generous interpretation: maybe the motive matters less than the outcome.
If the algorithm actually becomes public, if it actually gets updated every four weeks, if the weights are actually included this time, then we have something we've never had before: the ability to examine how one of the world's largest platforms decides what billions of people see.
Maybe it doesn't matter if Musk is doing it because he believes in transparency or because the EU is forcing his hand. The result is the same.
Or is it?
The Grok in the room
This is where I get skeptical again.
X has been working on integrating Grok, Musk's AI chatbot, into the recommendation algorithm. The goal: have Grok evaluate all 100+ million daily posts and surface the ones it judges most relevant to you.
Musk described it as something that will "profoundly improve the quality of your feed".
But what does "transparency" mean when the decisions are made by an AI model? You can release all the code in the world, but if that code says "ask Grok what the user wants to see", you haven't explained anything.
Modern AI models are effectively black boxes. You can publish the architecture, even the weights, and still not be able to trace why the model prefers one post over another. Not even the people who built it can do that.
So we might get to see the code that says "here we call Grok". But we still won't know why your feed looks the way it does. The transparency just moves one step, from "secret code" to "incomprehensible AI".
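A hypothetical sketch of the problem (Grok's real integration and API are unknown; nothing here is X's actual code):

```python
import random

class OpaqueModel:
    """Stand-in for an LLM-based relevance scorer. Even with this class's
    source fully public, the behavior lives in billions of trained
    parameters, not in anything you can read."""
    def predict_relevance(self, user: str, post: str) -> float:
        return random.random()  # the real model: not explainable from outside

def rank_feed(posts: list[str], user: str, model: OpaqueModel) -> list[str]:
    # Every line of this loop can be open-sourced and audited.
    # The actual decision happens inside the model call.
    return sorted(posts, key=lambda p: model.predict_relevance(user, p),
                  reverse=True)

feed = rank_feed(["post A", "post B", "post C"], "you", OpaqueModel())
```

The part you can audit is trivial. The part that matters is a function call into a model nobody can read.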
That's still a step forward. But it's not the full picture that "open-source algorithm" implies.
What I'm actually hoping for
Despite all of this, there's a part of me that hopes it works.
Not because I trust Musk. Not because I think X has suddenly become a champion of openness. But because the alternative is worse.
If X actually delivers on the promise, it puts pressure on other platforms. TikTok, Instagram, and YouTube all run recommendation algorithms that shape what billions of people see and think. None of them are open about how.
If X shows that it's possible to be transparent without the world ending, "we can't show that" becomes a weaker argument for everyone else.
And if X doesn't deliver? If the code is as incomplete as 2023? If the Grok integration makes the whole exercise meaningless?
Then at least we have proof that the promises were empty. That's worth something too.
Wait and see (but with open eyes)
I'm not celebrating yet. Seven days is nothing. Promises are easy. Execution is hard.
But I'm not dismissing it entirely either. The context is different. The pressure is higher. And sometimes, sometimes, people do the right thing for the wrong reasons.
What determines whether this means anything isn't the press release. It's what actually shows up on GitHub in a week. And whether it gets updated in four weeks. And whether the weights are included. And whether the Grok integration is documented in a way that's actually understandable.
Until then? Skeptical optimism. Or optimistic skepticism. I haven't decided which yet.
Ask me again in a week.
Sources: Bloomberg, Engadget, Gizmodo, The Business Standard. The EU fines and French investigation are documented by multiple news outlets. Musk's original 2022 promises and the 2023 GitHub release are archived. Analysis of what was missing from the 2023 release comes from the Knight First Amendment Institute at Columbia University.