How Algorithmic Feeds Changed Political Discourse Without Anyone Deciding to Let Them

by Scott

There was no meeting at which the decision was made. No legislative body voted on it. No referendum was held. No manifesto was published explaining the intended consequences. The transformation of political discourse that has occurred over the past fifteen years, the fragmentation of shared reality, the intensification of partisan identity, the collapse of the conversational middle ground, the rise of outrage as a primary mode of political communication, happened through a sequence of engineering decisions made by small teams of product managers and data scientists at a handful of technology companies. Those decisions were evaluated against engagement metrics rather than civic health, and they reshaped the information environment of billions of people without anyone fully intending the outcome that resulted.

This is the most important and least acknowledged fact about the current state of political communication. The change was not driven by a shift in human nature. People are not fundamentally more tribal or more angry or more epistemically closed than they were twenty years ago. What changed is the information environment in which those human tendencies operate, and the change was engineered, technically and deliberately, in pursuit of objectives that had nothing to do with politics and everything to do with advertising revenue.

To understand how this happened requires going back to a specific inflection point in the history of social media, the transition from chronological feeds to algorithmic feeds. The earliest social networks displayed content in reverse chronological order. You saw the most recent posts from the people you followed, in the order they were posted, and then you reached the end of what was new since your last visit and the feed stopped. This model treated the social network as a communication channel, a way of staying current with the people and organisations you had chosen to follow. It was limited by the output of the people you followed, because if they did not post, there was nothing to see.

The chronological feed had a property that the algorithmic feed does not, which is that it was neutral with respect to content. A post from a close friend and a post from a distant acquaintance appeared on the same terms if they were published at the same time. A post expressing a measured opinion and a post expressing an outraged opinion had equal prominence if they were published simultaneously. The feed did not decide that one type of content was more valuable than another. It simply showed you what was posted, in the order it was posted.

The algorithmic feed was introduced as a solution to a genuine problem. As social networks grew and users followed more accounts, the chronological feed became less useful because the volume of content exceeded the amount any user could reasonably consume. Facebook’s News Feed had existed since 2006, but the company made an algorithmically ranked view the default in 2009 and was ranking content primarily by predicted relevance by 2011. Twitter began shifting to an algorithmic timeline in 2016, and Instagram, owned by Facebook, made the same transition that year. The stated rationale in each case was to show users the content most likely to be relevant and interesting to them, surfacing the posts they would most want to see from among the full volume of content available.

The engineers who built these algorithms were solving an information overload problem, and the metric they used to measure whether they were solving it correctly was engagement, specifically whether users clicked, liked, commented, shared, and spent time on the platform. Engagement was a reasonable proxy for relevance in the narrow sense that content that users engaged with was content they were responding to. But engagement turned out to be a very imperfect proxy for the kind of content that users would retrospectively say was good for them or that contributed to a healthy information diet, because engagement is strongly associated with emotional arousal and emotional arousal is not uniformly distributed across the range of human experience.

Content that makes people angry, frightened, disgusted, or tribally activated produces higher engagement than content that informs, challenges, or promotes nuanced understanding. This is not a hypothesis or a speculation. It is one of the most robust findings in the research literature on online behaviour, and it has been replicated across platforms, across cultures, and across political contexts. The mechanism is straightforward from an evolutionary perspective. Emotional arousal, particularly negative emotional arousal, evolved to capture and hold attention because threats required immediate response. Content that triggers these arousal states hijacks the same attentional mechanisms, holding the reader on the page and prompting the social responses, sharing and commenting, that signal high engagement to the algorithm.

The algorithm, optimising for engagement, learned to surface more of the content that produced engagement, which meant more of the content that produced emotional arousal, which meant more outrage, more fear, more tribal conflict, and more of the particular type of political content that frames every issue as an existential battle between a righteous in-group and a villainous out-group. This was not a deliberate editorial choice. It was an emergent property of an optimisation process applied to human psychology at scale. The algorithm had no model of civic health, no concept of the difference between productive disagreement and destructive tribalism, no way to distinguish between engagement that enriched the user’s understanding and engagement that merely activated their amygdala. It optimised for the signal it was given, and the signal rewarded the most emotionally activating content.
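
To make the mechanism concrete, here is a deliberately simplified sketch, in Python, of an engagement-ranked feed. It is a toy model, not any platform’s actual code: the post fields, the single predicted_engagement score, and the assumption that predicted engagement tracks emotional arousal are all illustrative. The point is only that a ranker which sorts by predicted engagement will, if engagement correlates with arousal, put the most emotionally activating content at the top without “outrage” appearing anywhere in its logic.

```python
from dataclasses import dataclass
import random

random.seed(0)

@dataclass
class Post:
    author: str
    text: str
    arousal: float  # latent emotional intensity in [0, 1]; the ranker never sees this field directly

def predicted_engagement(post: Post) -> float:
    """Stand-in for a learned engagement model.

    A real model predicts clicks, shares, and dwell time from thousands of
    features. Here that prediction is simulated as arousal plus noise, which
    is the assumption this sketch is built on: engagement tracks arousal.
    """
    score = 0.2 + 0.7 * post.arousal + random.gauss(0, 0.05)
    return max(0.0, min(1.0, score))

def rank_feed(posts: list[Post]) -> list[Post]:
    # The entire objective: sort by predicted engagement, highest first.
    return sorted(posts, key=predicted_engagement, reverse=True)

posts = [
    Post("wonk", "A measured look at the trade-offs in the new policy", arousal=0.2),
    Post("pundit", "THEY are doing this to YOU and nobody is stopping them", arousal=0.9),
    Post("reporter", "What the committee actually voted on yesterday", arousal=0.3),
    Post("activist", "Existential emergency, share this before it is too late", arousal=0.85),
]

for post in rank_feed(posts):
    print(f"{post.author}: {post.text}")
# The two high-arousal posts land at the top, not because anyone chose outrage,
# but because the ranking objective rewards what outrage reliably produces.
```

Nothing in that sketch encodes a political preference; the bias toward arousal enters entirely through the correlation the model has learned from user behaviour.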

The consequences compounded through several mechanisms that operated simultaneously and reinforced each other. The first was the filter bubble effect, in which the algorithm’s tendency to show users more of what they had previously engaged with created information environments that became progressively more homogeneous over time. A user who engaged with left-leaning political content was shown more left-leaning political content, which prompted more engagement, which prompted more such content, until the user’s feed was predominantly a single ideological perspective presented at increasing levels of intensity. The same process occurred in the other direction for right-leaning users. The two populations inhabited the same platform while occupying entirely different information environments, each of which felt complete and representative to the people inside it.
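
The feedback loop itself can also be sketched. The following is a stylised toy model, not a description of any real recommender: a single number stands for the share of left-leaning items in one user’s feed, the user is assumed to engage slightly more often with content matching their lean, and the feed nudges its mix toward whatever earned engagement. Under those assumptions, even a mild 55/45 asymmetry in engagement is enough to drive the mix to one pole.

```python
def simulate_feed(user_lean: float, steps: int = 500, learning_rate: float = 0.1) -> float:
    """Toy filter-bubble loop, tracked in expectation rather than by sampling.

    user_lean: probability the user engages with a left-leaning item
               (1 - user_lean for a right-leaning item).
    mix:       share of left-leaning items the feed currently shows.
    Each step, engagement with either side pulls the mix toward that side.
    """
    mix = 0.5  # start with a perfectly balanced feed
    for _ in range(steps):
        pull_left = mix * user_lean * (1.0 - mix)             # left item shown and engaged: mix pulled up
        pull_right = (1.0 - mix) * (1.0 - user_lean) * -mix   # right item shown and engaged: mix pulled down
        mix += learning_rate * (pull_left + pull_right)
    return mix

# A user only slightly more likely to engage with left-leaning content ends up
# with a feed that is almost entirely left-leaning, and symmetrically for the
# mirror-image user: roughly 0.99 and 0.01 respectively.
print(round(simulate_feed(user_lean=0.55), 2))
print(round(simulate_feed(user_lean=0.45), 2))
```

The model is crude, but the qualitative behaviour matches the description above: the drift does not require a strongly partisan user, only a recommender that keeps feeding back whatever last got a reaction.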

The second mechanism was the amplification of extreme voices within each ideological community. The algorithm’s preference for high-engagement content meant that the most emotionally activating voices within any community received disproportionate distribution. A measured commentator who accurately described a complex policy situation received less algorithmic promotion than one who described the same situation as an outrage and attributed it to the malicious intent of the opposing side. Over time, the voices that received the most algorithmic amplification, and therefore the most followers and the most influence, tended to be those that specialised in high-intensity tribal messaging. The moderate centre of each political community received less algorithmic support and gradually receded from prominence, not because its members stopped speaking but because the algorithm consistently preferred their more extreme neighbours.

The third mechanism was the weaponisation of outrage as a deliberate communication strategy. As political actors, both genuine political figures and the broader ecosystem of commentators, activists, and content creators who live in the attention economy, came to understand that emotionally activating content received more algorithmic distribution, they adapted their communication strategies accordingly. The incentive structure of the algorithmic feed rewarded content that provoked strong emotional responses, and so content creators who wanted to reach large audiences learned to produce such content. The political media environment that emerged from this adaptation was one in which provocation, outrage, and tribal activation were not merely natural tendencies but deliberate strategies optimised for algorithmic distribution.

The effects on political epistemology, the way people form beliefs about the political world, were profound. The information environment in which people formed their political beliefs shifted from one in which a variety of sources and perspectives competed for attention on roughly equal terms to one in which the most emotionally activating framing of every issue received disproportionate distribution. People who consumed news and political information primarily through social media feeds were not getting a representative sample of political discourse. They were getting a sample that had been filtered for emotional intensity, which meant a sample systematically biased toward the most alarming, the most outrageous, and the most tribally divisive interpretations of every political development.

Shared factual reality, the foundation on which democratic deliberation is supposed to rest, eroded as people in different filter bubbles encountered not just different opinions but different facts, different framings, and in some cases entirely different understandings of what had happened in a given political event. The mainstream media, which, whatever its flaws, at least provided a common set of facts that political argument could take as its starting point, was displaced by an ecosystem in which every faction had its own high-engagement information sources that provided framing consistent with its existing beliefs and systematically excluded information that might complicate those beliefs.

The role of misinformation in this ecosystem is frequently discussed but often misframed. The most important form of misinformation in the algorithmic feed era is not the outright fabrication that is sometimes called fake news. It is the accurate-but-misleading, the selectively emphasised, the decontextualised, the framed-to-provoke. These forms of misinformation are much harder to fact-check and much harder to counteract because they contain enough truth to be defensible while being structured to produce the maximum emotional activation and tribal response. The algorithm has no way to distinguish between content that is accurate and context-rich and content that is accurate and context-stripped, because it measures engagement rather than epistemic quality.

The internal research conducted by Facebook and documented in the disclosures made by whistleblower Frances Haugen in 2021 provided a rare window into what the platform’s own researchers understood about the relationship between its algorithms and political polarisation. Internal documents showed that Facebook researchers had found clear evidence that the platform’s recommendation algorithms were amplifying divisive content, that the company had identified specific algorithmic changes that would reduce polarisation at the cost of reducing engagement, and that these changes had been deprioritised because of their projected impact on engagement metrics. The platform knew what it was doing to political discourse and chose not to act, because acting would have cost engagement, and engagement was what the advertising model ran on.

This is the moment at which the absence of a decision becomes most visible. Nobody decided that political polarisation was an acceptable price for advertising revenue. The engineers who built the engagement algorithms were not making political choices. The product managers who evaluated changes against engagement metrics were not consciously prioritising tribalism over democracy. The executives who made the final decisions about which changes to implement were thinking about quarterly revenue and user growth. The political consequences of their choices were real, significant, and at least partially understood within the organisations, but they were externalised costs, borne by society rather than by the platforms, and therefore systematically underweighted in the decision-making process.

The regulatory response to this situation has been slow, fragmented, and largely inadequate relative to the scale of the problem. Legislators in most jurisdictions lack the technical understanding to design effective interventions, and the platforms have invested significantly in lobbying and in public relations that frame any regulatory intervention as a threat to free speech. The interventions that have been attempted, mostly focused on content moderation, address only the most extreme forms of harmful content and leave the fundamental architecture of engagement-optimised algorithmic amplification entirely intact.

The question of what a different architecture might look like has been explored by researchers and advocates but has not been seriously engaged by the major platforms. Algorithmic transparency, requiring platforms to disclose how their ranking systems work, would at least allow external researchers to audit the relationship between algorithmic design and political outcomes. Algorithmic choice, allowing users to select between different feed ranking approaches including chronological, would distribute the choice about how the information environment operates back to the users rather than concentrating it in the platform. These are not radical proposals. They are modest adjustments that would maintain the basic structure of social media while reducing the degree to which the platforms’ commercial incentives determine the character of public political discourse.
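
What algorithmic choice might look like at the interface level is easy to sketch. The following Python fragment is hypothetical, not any platform’s API: the same pool of posts is simply sorted by whichever key the user has selected, chronological or engagement-ranked.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    predicted_engagement: float  # whatever relevance score the platform computes

# Each ranking mode is just a different sort key over the same pool of posts.
RANKERS: dict[str, Callable[[Post], float]] = {
    "chronological": lambda p: p.posted_at.timestamp(),
    "engagement": lambda p: p.predicted_engagement,
}

def build_feed(posts: list[Post], mode: str) -> list[Post]:
    """Order the same posts by whichever ranker the user has selected."""
    return sorted(posts, key=RANKERS[mode], reverse=True)

now = datetime.now()
posts = [
    Post("friend", "Photos from the weekend", now - timedelta(hours=1), 0.31),
    Post("newspaper", "Explainer on the budget negotiations", now - timedelta(hours=3), 0.44),
    Post("pundit", "You will not believe what they just did", now - timedelta(hours=6), 0.92),
]

for mode in ("chronological", "engagement"):
    print(mode, [p.author for p in build_feed(posts, mode)])
# chronological ['friend', 'newspaper', 'pundit']
# engagement    ['pundit', 'newspaper', 'friend']
```

The substance of the proposal is not the sort itself but who controls the key: the user picks the mode, and the platform’s commercial preference for the engagement ranking stops being the only option on offer.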

What the history of algorithmic feeds and political discourse ultimately illustrates is a principle that applies to technology more broadly. Decisions about technical architecture are decisions about social outcomes, whether or not the people making them understand them as such. The choice to optimise a feed for engagement was a choice with political consequences, even though it was made entirely without reference to politics. The insistence that technology is neutral, that platforms are merely pipes, that engineering decisions are separate from their social effects, is not merely an intellectual error. It is a way of avoiding accountability for choices that have changed the world in ways that nobody voted for and nobody decided were acceptable.