Why It Matters How Algorithms Target Young Users

by Scott

There is a version of childhood that existed before the smartphone, before the recommendation engine, before the infinite scroll. Children in that version of the world were bored sometimes, and boredom turned out to be useful. It was the condition in which imagination developed, in which children learned to entertain themselves, in which the slower rhythms of play and reading and wandering produced a kind of inner life that required no external stimulus to sustain it. That version of childhood has not entirely disappeared, but for a significant and growing proportion of children in connected societies, it has been displaced by something qualitatively different: an environment of engineered engagement, designed specifically to capture and hold attention, and optimized with extraordinary technical sophistication for adults while being applied to children whose developing minds were never part of the design brief.

The algorithms that govern what young people see on social media platforms, video streaming services, and gaming environments are not neutral discovery tools. They are optimization systems, trained to maximize the metric that their operators care most about, which in most cases is time spent on the platform. The logic is straightforward as a business proposition. More time on the platform means more advertising inventory, more data collected, more habitual return visits, and more deeply embedded behavioral patterns that make the platform difficult to leave. The content recommendation systems that these platforms deploy are extraordinarily effective at achieving this goal for adult users, and they are even more effective when applied to children, for reasons that are rooted in developmental neuroscience and that should give anyone serious pause.
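To make that optimization concrete, here is a minimal sketch, in Python, of the core ranking step such a system performs. Every name, number, and the toy predictor below are invented for illustration rather than taken from any real platform, but the shape of the objective is the point: score each candidate by predicted time on the platform and serve the top scorers. Nothing in the objective mentions the viewer's age or wellbeing, because those are not inputs.

```python
# A minimal sketch of an engagement-maximizing ranker. All names,
# numbers, and the toy predictor are illustrative assumptions, not
# any real platform's code.

def toy_predict_watch_seconds(item, user_history):
    """Stand-in engagement model: predict longer watch time for
    topics the user has engaged with before."""
    boost = 1.5 if item["topic"] in user_history else 1.0
    return item["avg_watch_seconds"] * boost

def rank_for_user(candidates, user_history, k=3):
    """Serve the k candidates with the highest predicted watch time.

    The objective is time on platform, full stop: nothing in it
    encodes the viewer's age, developmental stage, or wellbeing.
    """
    return sorted(
        candidates,
        key=lambda item: toy_predict_watch_seconds(item, user_history),
        reverse=True,
    )[:k]

candidates = [
    {"topic": "pranks", "avg_watch_seconds": 60},
    {"topic": "science", "avg_watch_seconds": 70},
    {"topic": "diet_tips", "avg_watch_seconds": 80},
]
print(rank_for_user(candidates, user_history={"diet_tips"}))
```

Real systems replace the toy predictor with vastly more sophisticated models and thousands of features, but the loop they refine is this one.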

The prefrontal cortex, the region of the brain most associated with executive function, impulse control, long-term planning, and the evaluation of consequences, does not fully mature until the mid-twenties. In children and adolescents, the limbic system, which governs emotional responses and reward-seeking behavior, is highly active, while the prefrontal regulation of those impulses is still developing. This neurological architecture makes young people significantly more susceptible to content that triggers emotional responses, more sensitive to social validation signals like likes and comments, more vulnerable to the lure of what comes next in a sequence of content, and less able to voluntarily disengage from stimulation that their not-yet-mature rational faculties might recognize as unproductive or harmful. The recommendation algorithm did not create these vulnerabilities. But it was built by people who understood engagement psychology well enough to exploit them in adults, and its application to children produces effects that are disproportionate to anything that could have been anticipated from studying adult behavior alone.

The mechanics of how recommendation algorithms work in practice for young users illuminate why the concern is not theoretical. A child who watches one video about a particular topic, whether it is a game, a beauty tutorial, a prank, or a fitness routine, immediately receives recommendations for more content in the same vein. The algorithm has observed that the child engaged with the content and responds by surfacing more of it. Each subsequent piece of content is chosen partly on the basis of what other users who engaged with similar content went on to watch, which means the recommendations are drawn from the behavior of a population that includes many adults whose psychological responses to the content are different from those of a developing child. The algorithm has no model of the child’s age, developmental stage, or vulnerability. It has a model of engagement, and it optimizes for that.
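The population-level mechanism described above is easy to sketch. In the toy example below, the viewing histories are invented, and the approach, counting which videos co-occur in the same histories, is only a simple stand-in for the far more elaborate models platforms actually use. But it preserves the property that matters: the recommendation served to a child is computed from the trajectories of the whole population, and the data structure has no field for age.

```python
# Sketch of "viewers who watched this also watched" recommendation,
# built from co-occurrence counts across the whole user population.
# Histories are invented; note that nothing records who is an adult
# and who is a child -- the model has no age field at all.
from collections import Counter
from itertools import permutations

histories = [
    ["fitness_routine", "diet_tips", "extreme_fasting"],  # adult
    ["fitness_routine", "diet_tips"],                     # adult
    ["fitness_routine", "extreme_fasting"],               # teenager
]

# Count how often each ordered pair of videos shares a history.
co_watch = Counter()
for history in histories:
    for a, b in permutations(set(history), 2):
        co_watch[(a, b)] += 1

def recommend_after(video, n=2):
    """Videos most often watched by people who also watched `video`."""
    scores = {b: count for (a, b), count in co_watch.items() if a == video}
    return sorted(scores, key=scores.get, reverse=True)[:n]

# A child who watched one fitness video inherits the population's
# trajectory, which here includes extreme fasting content.
print(recommend_after("fitness_routine"))
```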

The consequence, documented extensively in internal research at major platforms and in academic studies, is that recommendation systems tend to lead young users toward increasingly extreme or emotionally intense content through a process of escalation. A child who watches videos about dieting may be recommended progressively more restrictive content about eating. A teenager who watches videos expressing dissatisfaction with their appearance may be recommended content that reinforces and amplifies those feelings. A young person exploring content about anxiety or depression may be directed toward communities where such feelings are normalized, celebrated, or intensified rather than toward resources that might offer perspective or support. Each step in this escalation is a rational optimization decision from the algorithm’s perspective, because more emotionally intense content tends to produce higher engagement metrics. The algorithm is not malicious. It is indifferent, which in some ways is worse, because indifference cannot be reasoned with.
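The ratchet can be reproduced with almost no machinery. The toy model below rests on two invented assumptions: recommendations must stay similar to what was just watched, and among similar candidates the optimizer prefers whatever is expected to engage most, which here means the more emotionally intense option. Given only that, each recommendation is a locally reasonable step, and the session still ends at the extreme edge of the catalog.

```python
# Toy escalation model; the catalog, similarity window, and engagement
# assumption are all invented for illustration.
catalog = list(range(11))   # intensity levels 0 (mild) .. 10 (extreme)

def next_recommendation(last_intensity, window=3):
    """Among items close enough to the last watch to feel relevant,
    serve the one with the highest expected engagement. Expected
    engagement is assumed (invented here) to rise with intensity."""
    relevant = [i for i in catalog if abs(i - last_intensity) <= window]
    return max(relevant, key=lambda i: 0.3 + 0.05 * i)

session = [1]               # the child starts on mild content
for _ in range(4):
    session.append(next_recommendation(session[-1]))

print(session)              # [1, 4, 7, 10, 10]: every step locally sensible
```

No single step looks alarming in isolation; the drift is a property of the sequence, which is exactly what makes it hard to see from inside.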

The social comparison mechanism that social media platforms activate in young users is particularly damaging precisely because adolescence is the developmental stage at which identity formation is most active and most sensitive. The adolescent brain is intensely focused on social positioning, on understanding how one fits into the social world, on calibrating self-image against external feedback. This is not a disorder or a weakness. It is a normal and necessary developmental process. The problem is that social media platforms have created an environment in which social comparison is continuous, algorithmically curated, and systematically biased toward presenting the most aspirational and least realistic images of other people’s lives, bodies, and experiences. The comparison that a teenager makes between their own life and the carefully filtered, professionally lit, algorithmically amplified content of influencers and peers is not a fair comparison in any meaningful sense. But the developing brain processing that comparison does not have the cognitive apparatus to fully discount the unreality of what it is seeing.

The research linking heavy social media use in adolescence to poor mental health outcomes has accumulated to the point where the general direction of the relationship is difficult to dispute, even if the precise magnitude and causal mechanisms remain subjects of active investigation. Studies across multiple countries and methodologies have found associations between heavy social media use and increased rates of depression, anxiety, loneliness, poor sleep, and reduced self-esteem in adolescent girls in particular, with the effects appearing most pronounced for the youngest adolescents and for those who are already psychologically vulnerable. Internal documents from major platforms, disclosed through legal proceedings and whistleblower accounts, have revealed that the companies’ own researchers identified these harms and in some cases found the results alarming, while the platforms continued to operate their recommendation systems without material modification.

The response from platform companies to this body of evidence has followed a predictable pattern. Initial denial that the platforms cause harm. Subsequent acknowledgment that the research is complex and that correlation does not establish causation. Introduction of cosmetic features such as screen time dashboards and content warning labels that have been consistently found in research to have minimal impact on actual behavior. Statements of commitment to child safety accompanied by continued optimization of the core recommendation systems for engagement. The gap between the public statements about child wellbeing and the internal documents about platform design has been visible enough that it has attracted regulatory attention and litigation in multiple jurisdictions.

Gaming environments present a related but distinct set of concerns. Modern online games are designed by teams of psychologists and behavioral economists as well as software engineers, and the techniques they use to maintain engagement draw on the same research base that casino designers have used for decades. Variable reward schedules, in which rewards are delivered unpredictably rather than at fixed intervals, produce more persistent engagement than predictable rewards, and this principle is embedded in the loot box mechanics, random drops, and daily login bonuses that are standard features of contemporary games marketed to children and teenagers. The social dynamics of online gaming communities, including the status hierarchies that develop around performance and the social pressure to maintain participation to avoid disappointing teammates, create additional retention mechanisms that can make disengagement feel socially costly as well as psychologically difficult.
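The variable-reward principle is compact enough to state in code. The sketch below, with an invented one-in-twenty drop rate, contrasts a fixed schedule with a variable-ratio one: the average payout is identical, but the variable schedule is unpredictable, with long droughts and clusters, and it is that unpredictability that the behavioral literature associates with the most persistent responding.

```python
# Fixed vs. variable-ratio reward schedules (drop rate invented).
# Same average payout; what differs is predictability -- the principle
# behind loot boxes, random drops, and similar mechanics.
import random

random.seed(1)

def fixed_schedule(pulls):
    """Reward arrives on exactly every 20th pull."""
    return [(i + 1) % 20 == 0 for i in range(pulls)]

def variable_schedule(pulls):
    """Each pull independently rewards with probability 1/20."""
    return [random.random() < 1 / 20 for _ in range(pulls)]

for name, schedule in [("fixed", fixed_schedule),
                       ("variable", variable_schedule)]:
    rewards = schedule(10_000)
    gaps, last = [], -1
    for i, hit in enumerate(rewards):
        if hit:
            gaps.append(i - last)
            last = i
    print(f"{name:8s}: {sum(rewards)} rewards, "
          f"longest drought {max(gaps)} pulls")
```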

The purchase mechanics embedded in games aimed at young audiences have attracted particular scrutiny from regulators and researchers. The use of in-game currencies that obscure the real-money cost of purchases, the placement of premium content in ways that make its absence visible and socially salient to other players, and the targeting of spending prompts at moments of heightened emotional engagement within the game are all techniques that have been identified as specifically problematic when applied to minors who have less developed understanding of financial consequences and less impulse control than adults. Several countries have classified certain loot box mechanics as gambling and regulated them accordingly, while others have required greater transparency about purchase mechanics and spending limits for minors.
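A worked example makes the obfuscation concrete. All the prices below are invented, but the structure is the standard one: items are priced in an intermediate currency, the currency is sold only in bundles at inconsistent exchange rates, and recovering the real-money cost of a purchase requires arithmetic the purchase screen never performs for you.

```python
# Worked example of currency obfuscation (all prices invented).
bundles = {500: 4.99, 1200: 9.99, 2600: 19.99}   # gems -> dollars
item_cost_gems = 340

for gems, dollars in bundles.items():
    rate = dollars / gems              # dollars per gem, varies by bundle
    real_cost = item_cost_gems * rate
    leftover = gems - item_cost_gems
    print(f"{gems}-gem bundle (${dollars}): item effectively costs "
          f"${real_cost:.2f}, leaving {leftover} gems stranded")
```

Note that the smallest bundle leaves 160 gems, too few to buy anything else in this invented shop, which is itself a nudge toward the next bundle.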

The data collection dimension of how algorithms engage with young users adds a layer to the concern that is distinct from the psychological impact of the content itself. Every interaction a young person has with a platform, every video watched, every search made, every post liked, every product considered, is data that contributes to a behavioral profile that will persist, in various forms and across various companies, for that person’s entire life. The data collected from a child’s years of social media use may inform the advertising they see, the credit they are offered, the prices they are shown, and the content they encounter for decades after the original collection. The child who accumulated this data had no meaningful understanding of what was being collected, no genuine ability to consent to its collection, and no legal or practical mechanism to reclaim or delete it in most jurisdictions.
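The shape of that accumulation is worth seeing, even in miniature. Every event, field, and inference below is invented, but the asymmetry is the real point: the child supplies only ordinary interactions, while the inferences are computed and retained elsewhere, on a timescale the child has no way to reason about.

```python
# Sketch of ordinary interactions accumulating into a profile.
# Every event, field, and inference here is invented.
from collections import Counter

events = [
    {"user": "u_4821", "action": "watch",  "topic": "diet_tips"},
    {"user": "u_4821", "action": "search", "topic": "am i overweight"},
    {"user": "u_4821", "action": "like",   "topic": "diet_tips"},
    {"user": "u_4821", "action": "watch",  "topic": "fitness_routine"},
]

profile = {
    "user": "u_4821",
    "event_count": len(events),
    "inferred_interests": Counter(e["topic"] for e in events).most_common(2),
}
print(profile)
# A record like this can outlive the session, the account, and the
# childhood that produced it, and can be joined with other datasets.
```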

The question of what effective regulatory response looks like is contested and genuinely difficult. Age verification, the most commonly proposed mechanism for restricting children’s access to platforms designed for adults, faces serious practical and civil liberties challenges. Requiring users to prove their age before accessing online services means collecting identity documents, which creates its own privacy risks and excludes users who lack the required documents. Technical circumvention of age verification is accessible to any motivated teenager, which means the systems primarily affect compliant young people while leaving determined ones unaffected. The alternative of designing platforms specifically for children, with recommendation systems that are genuinely optimized for age-appropriate content and wellbeing rather than engagement, requires a level of regulatory compulsion that most major platform companies have resisted strenuously.

Some jurisdictions have moved further than others. The United Kingdom’s Age Appropriate Design Code, which came into effect in 2021, requires online services likely to be accessed by children to configure their default settings to provide high levels of privacy protection, to consider the best interests of child users in their design decisions, and to avoid using techniques that exploit children’s vulnerabilities to encourage extended use. The code has influenced platform design decisions in meaningful ways, though its enforcement has been patchy and its effects on the recommendation systems that drive the most concern have been limited. Several American states have passed laws imposing various restrictions on how platforms can engage with minors, and a federal law specifically addressing children’s online safety has been debated but not enacted as of the time of writing.

The educational dimension of this issue receives less attention than regulatory approaches but may ultimately be more durable. Young people who understand how recommendation algorithms work, who can articulate what the platform is optimizing for and why, who have developed the critical vocabulary to analyze the content they are consuming rather than simply consuming it, are better equipped to navigate the algorithmic environment than those who encounter it without any conceptual framework. Media literacy education that includes specific, age-appropriate instruction in how recommendation systems work has been shown to improve young people’s ability to recognize and resist manipulation, though its implementation is inconsistent and often inadequate relative to the sophistication of the systems it seeks to explain.

The adults who designed the recommendation systems that now shape the information environment of billions of young people did not set out to harm children. Most of them were solving interesting technical problems in the context of building products that adults wanted to use, and the extension of those products to younger audiences happened through a combination of market pressure, regulatory vacuum, and genuine belief that the platforms offered value that outweighed the risks. The harm that has resulted is real and documented, and its continuation in the face of that documentation represents a choice, by platform companies, by regulators, and by societies that have not yet decided how seriously to take the question of what kind of information environment children deserve to grow up in.

That question is worth taking seriously precisely because the stakes are not abstract. The children navigating algorithmic environments today are developing the habits of mind, the emotional regulation capacities, the social comparison frameworks, and the relationship to information and entertainment that they will carry through their adult lives. What is being shaped in those years of algorithmic exposure is not just their media consumption habits but their understanding of themselves and of other people, their tolerance for complexity and boredom, their capacity for sustained attention, and their sense of what the world is like and what they are worth within it. The algorithm does not care about any of this. That is precisely why everyone else should.