There is a question that every working journalist should ask at the start of every shift: who decided this was the story?

Not which editor approved it. Not which source tipped it. Who – or what – decided that this particular piece of information would reach the public, and that other pieces would not?

Increasingly, the answer is not a person. It is a recommendation engine.

The Invisible Editor

We have spent decades fighting overt censorship. State-controlled media, editorial interference by owners, legal threats designed to kill stories before publication – these are enemies we understand. We have names for them. We know how to resist.

But the algorithmic curation of news is a different kind of threat, because it does not suppress stories outright. It simply makes some stories invisible and others inescapable, based not on their importance but on their capacity to generate engagement.

A well-reported investigation into municipal corruption will lose, every time, to a provocative headline designed to trigger outrage – not because the audience prefers outrage, but because the system that delivers content to them has been optimized for clicks, not for civic value.

The Numbers Lie

I have heard editors defend this arrangement with a familiar argument: “We are giving people what they want.”

This is a lie, and it is important to understand why.

What engagement metrics measure is not what people want. It is what people react to. These are fundamentally different things. A person who spends forty minutes reading a carefully sourced article on climate policy does not generate the same signal as a person who rage-clicks on a misleading headline, shares it with an angry comment, and moves on in thirty seconds.

The metrics see the second person. They do not see the first.

When you build an editorial strategy around engagement metrics, you are not serving your readers. You are training them to expect less.
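
To make the asymmetry concrete, here is a toy sketch in Python. The scoring formula is my own illustration, not any real platform's: it counts clicks and shares, and it never looks at reading time, sourcing, or accuracy.

    # Toy engagement model (illustrative weights, not a real platform's formula).
    # It counts reactions; reading depth and accuracy are invisible to it.
    def engagement_score(clicks: int, shares: int, seconds_read: float) -> float:
        return clicks * 1.0 + shares * 2.0  # seconds_read is never used

    # The forty-minute reader of a carefully sourced climate-policy article:
    careful_read = engagement_score(clicks=1, shares=0, seconds_read=2400)

    # The thirty-second rage-clicker who shares it with an angry comment:
    rage_click = engagement_score(clicks=1, shares=1, seconds_read=30)

    assert rage_click > careful_read  # the metric sees only the second person

Whatever the exact weights, an objective that counts reactions cannot tell these two readers apart on anything except reaction volume, so it will rank the outrage higher.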

What an Editor Does That an Algorithm Cannot

A human editor makes judgments that no engagement model can replicate:

  • This story matters even though it will not trend. A local zoning decision that will displace 200 families is not exciting content. It is essential information.
  • This story is popular but misleading. A viral claim that confirms existing biases may generate enormous engagement. A responsible editor kills it or corrects it. An algorithm promotes it.
  • This source is unreliable. Algorithms do not evaluate credibility. They evaluate performance. A propaganda outlet with high engagement will outperform an honest reporter with a small audience, every time.

The editor’s job is not to be popular. It is to be right. That distinction is the entire foundation of press freedom, and we are allowing it to be optimized away.

The Responsibility

I am not arguing against technology. I am arguing against abdication.

If your newsroom uses algorithmic tools to distribute content, those tools must be subordinate to editorial judgment, not the other way around. The algorithm should be a delivery mechanism, not a decision-maker.

And if you are a reader, the responsibility is yours as well. The stories that matter most are often the ones that no algorithm will ever show you. Seek them out. Subscribe to the reporters doing the work. Read past the headline.

The truth does not optimize well. That is precisely why it needs defenders.


This post reflects the editorial position of the Council of Twelve. The facts presented have been independently verified.