The End of the Internet: Online Age Verification and Its Enemies
Photo by Alexander Shatov on Unsplash
By Sander McComiskey ‘26
The internet is ending, and a new age of tyranny is dawning. Or, at least, that’s the allegation hurled from the political fringes as nations across Europe and the Americas finally move to regulate the West’s last untamed frontier: the digital world.
In a Financial Times op-ed early this year, the libertarian tech magnate Peter Thiel pilloried Australian legislation that bars youth under sixteen from opening a social media account. He labeled the bill’s age verification mandate the “beginning of the end of internet anonymity” and the stuff of “Orwellian dictatorships in East Asia and Eurasia,” rather than Western liberal democracy. Unsurprisingly, Thiel’s views are emblematic: countless libertarian politicians, public intellectuals, and think tanks have skewered the recent surge of internet regulation on both sides of the Atlantic. They warn that the conquest of the digital realm will endanger the anonymous, autonomous conduct that it has long protected.
But surprisingly, strident criticisms of internet regulation also emanate from a second political group at odds with Thielian libertarians: the far left. Writing in the New York Times, Lux Alptraum cautions that age verification “laws could mean the beginning of the end of something truly precious: the internet as an uncensored place to explore human desire in a way that’s safe and private.” Alptraum criticizes these laws from a commitment to the individual right to explore and define one’s own identity, free from social or legal constraints enforced by people with “mores and tastes that might be more censorious, uptight or even bigoted” than one’s own. Rejection of internet regulation on these grounds is commonplace in progressive corners, allying leftists and the libertarian right in a motley pincer movement of sorts assailing age verification laws from both extremes.
Both critiques are ideological in nature, a fact which makes the third, and final, source of opposition to the burgeoning internet regulation movement unique. It is the mainstream core of tech policy experts and institutions, whose objections to age verification mandates center around practicality and functionality. These experts commend the movement’s intentions, but wince at the breadth of its aims and the technological means it adopts. Instead of sweeping regulation, they urge confining prohibitions to areas of clear, proven harm. Rather than enforcing them with age verification techniques that are accurate and difficult to circumvent, they recommend weaker mechanisms that pose fewer risks to privacy and anonymity. Given the role these experts play in shaping Western technology policy, it’s this third source of resistance that poses the most serious obstacle to maximalist internet regulation.
But somehow, maximalism is enjoying startling success. Comprehensive digital regulation currently boasts levels of public support unmatched by nearly any other live policy issue. Huge supermajorities—often over 80% of respondents—support potent restrictions on minors’ internet use, and that support persists across ideologies and nationalities. This sentiment isn’t just broad but deep, as evidenced by the proliferation of activist parental organizations that agitate for restrictions on digital technology use.
In response, democratic governments have ambitiously attempted to match policy to public opinion. Australia passed a state-enforced ban on social media account ownership for youth under 16. New York forbade digital platforms from exposing minors to algorithmic, addictive feeds. Utah ordered app stores to proactively enforce their own age restrictions for apps, rather than treating them as mere suggestions.
In Western nations where sclerotic inaction in the face of popular consensus is so often the rule, digital regulation has proven the exception. It has also engendered fierce opposition. This article explains why.
Why Regulate the Internet?
Some historians date the West’s original digital sin to the Clinton era, when political leaders chose not just to allow private industry to take the lead in commercializing the internet, but to abstain from a supervisory role as the digital ecosystem grew. These writers argue that by declining to actively structure markets for information technology, the government allowed firms to monopolize the digital commons, choking off innovation and disempowering citizens. Crucially, though, this narrative overlooks the fact that the state’s pullback extended not only to economic regulation—ensuring competitive, functional markets—but to social regulation: making the digital sphere conducive to the well-being of individuals, society, and democracy. As historian Matthew Crain writes, the strategy of “letting the private sector lead meant not only deregulation but also an abdication by the government of nearly all operational responsibility to see to it that [information and] communications systems served a public good.”
This pullback had multiple causes, ranging from Supreme Court decisions that foreclosed most regulation of online conduct to a cultural climate inhospitable to government intrusion into personal decision-making. Predictably, its result was a generation of digital products designed to fulfill a single goal—profit maximization—without concern for the personal and social consequences external to the profit function. While those digital technologies have produced an abundance of information and communication, they have also wrought serious harms that aren’t inevitable byproducts of technological progress.
The first set of harms are psychological—specifically, the dramatic decline in youth mental health that coincided with the integration of smartphones and social media into everyday life. Rates of suicide, anxiety, and depression among minors have skyrocketed since 2012, and while definitively proving a causal relationship to digital technology is by nature difficult, the weight of the academic literature—including correlational analyses, experimental studies, and qualitative analyses of teens’ experiences with social media—is persuasive. Notably, the idea that smartphones and social media are dangerous to minors’ mental health is entirely plausible to minors themselves: 48% of U.S. teens say that social media has a “mostly negative” effect on “people their age,” while only 11% say the effect is “mostly positive.” The consequences stretch far beyond headline rates of anxiety and depression: digital technology has also been connected to a spike in eating disorders and insidious social comparison, moderate to severe behavioral addiction, and feelings of loneliness and isolation.
The second set of harms that result from unregulated digital technology are “epistemic,” or knowledge-related. Over the past two decades, news consumption and political discussion have migrated from traditional venues—newspapers, broadcast television, radio—to digital platforms. Many argue, Peter Thiel among them in the op-ed mentioned above, that this disintermediation of epistemic gatekeepers has produced a more democratic information environment. There’s something to this point. But even those who welcome the demise of legacy media should recognize that digital platforms are, for structural reasons, drastically underperforming their potential to usher in a new age of open, participatory discourse.
That’s because these online forums for speech and socialization are not structured to facilitate productive dialogue or person-to-person connection. Instead, they aim simply to maximize engagement, a different and often competing goal. Algorithmic feeds reward outrageous posts over thoughtful ones; short posts promote snappy, reductive takes over messy nuance; monetization formulas prod users to create content that many will interact with rather than content that a few will value highly; unlimited information and nonexistent vetting place the burden on the individual to check the veracity of each post. Digital platforms shape discourse to achieve a particular end, and that end is not the health of individuals and democratic society.
Psychological and epistemic harms are both related to a third kind of concern about digital technology: attentional harms. A voluminous literature explains that the fundamental commodity with which digital platforms finance their business is human attention. Unsurprisingly, this has meant the advent of business models that treat human attention as if it were commensurable with other outputs and inputs, rather than a finite and essential mental capacity. For example, Meta, the world’s sixth-largest company, pulls in next to nothing in non-advertising revenue. Nearly all of its annual $165 billion comes from selling the opportunity to show ads to users, meaning that the platform’s revenue is monotonically linked to the amount of time it can keep users on the platform, or in other words, the number of hours it can seduce us into gluing our eyes to our screens rather than the world outside them. Of course, the firm’s incentives are in tension with those of its customers, who find fulfillment and meaning primarily in physical reality.
In markets with different business models—like payments for fixed amounts of a good, or subscription packages—platforms make money by building something that users value. But in attention markets funded solely by digital advertising, where users pay no monetary price, platforms make money by inducing users to spend as large a share of their finite time and attention on the product as possible. Here as well, digital markets are simply not structured in a way that aligns firms’ profit-seeking with individual and social benefit.
These three sets of harms—psychological, epistemic, attentional—are easy to discuss because they have an objective, and in some cases quantifiable, character. By contrast, it’s harder to debate the fuzzier consequences of digital technology, those that relate to aspects of life with immense moral importance even if we can’t mathematically calculate their abundance. Think, for instance, of the decline of in-person socialization caused by the abundance of digital content and virtual communication. Or the inequality between children whose parents have the time, resources, and knowledge to protect them from the adverse consequences of digital technology and those whose parents and schools are too overworked and under-resourced to do so. We can disagree over exactly how much these moral concerns matter, but they’re equally important to account for in examining the effects of the government’s unwillingness to regulate digital technology.
Digital Technology and Decision-Making
An exposition like this one that lays out the manifold harms of unregulated digital technology inevitably receives a single rebuttal, especially from the aforementioned libertarians: if these technologies are so harmful, then why are people, including children, freely choosing to use them? The answer is that they aren’t, really.
After a flurry of research from psychologists and behavioral scientists, it’s become crystal clear that many digital technologies target weaknesses in human decisional faculties to induce users to interact with technology in a manner, and for a duration, out of sync with their long-term values and goals. For example, variable reward schedules, which reward a certain action with a payoff only some of the time, have long been known to foster compulsive behavior. For that reason, they have become a centerpiece of digital technology design, the mechanism behind our urge to constantly check our inboxes and refresh our feeds. Similarly, “infinite scroll” and autoplay features exploit status quo bias to extract more of users’ time. Push notifications interrupt and intrude on analog activities; social engagement via likes and comments encourages users to invest in their online persona; engagement-maximizing algorithms find the content that users struggle to resist and bathe them in it. It is immensely difficult to exercise deliberate agency in the face of sophisticated digital technologies built by the world’s largest firms, some of humanity’s sharpest minds, and trillions of dollars. It’s tougher still for minors without fully developed decision-making faculties.
The aggregate effect of these decisional manipulations is gargantuan. Economists estimate that a third of social media usage stems entirely from self-control problems rather than a genuine preference to spend time online. Given that the average user spends roughly two and a quarter hours a day on these platforms, that amounts to about 45 minutes per day, for each of 4.5 billion global users, that are the product of compulsion rather than free choice. Studies have also found that up to a quarter of users display at least moderate symptoms of addiction to social media. And about half of teens say they spend “too much” time on social media, indicating a desire for a different relationship with digital technology than the one that it desires for them.
But psychological manipulation isn’t the only reason why engagement with digital technology isn’t freely chosen. The social pressure to buy a smartphone and join social media is likely an even stronger force. Researchers tested this theory by asking college students how much they’d need to be paid to unilaterally deactivate their social media accounts for a month. The average answer was about $50 per account, which would imply huge gains in social welfare from consuming these free services. But when the researchers asked those same subjects how their answer would change if most students at their university deactivated their accounts too, the subjects answered: we’d pay you for that. The value these platforms seem to provide is, in reality, a collective action trap: youth say they’d be better off if certain digital technologies didn’t exist at all, but given that they do, they’re forced to spend time on them to avoid being left out.
This problem of coercion is worsened by the monopolization of social media markets by a few companies. Consumers can’t choose between different firms that must compete to provide the best product; instead, they have to acquiesce to the terms offered by powerful corporations, or go without high-quality digital technology altogether. More generally, users can’t rely on free exchange to align firms’ incentives with those of consumers, due to the idiosyncratic defects of digital markets: imperfect competition, misaligned agency, collective action traps, failures of rational choice, and individual and social externalities.
Because of these psychological, social, and commercial pressures, users’ decisions to engage with digital technology don’t imply that those technologies are beneficial to individuals and society. But there’s another reason that implication doesn’t hold: digital technology doesn’t just affect users on an individual basis, it fundamentally restructures the society they live in. The explosion of free digital content shrinks offline social networks by disincentivizing in-person socializing, causes the atrophy of communal organizations and third spaces, and alters the social character of childhood. In short, the widespread diffusion of digital technology has made analog life a less appealing option; its aggregate harms are thus greater than the sum of those to each user in isolation. For all these reasons, we can’t in this case rely only on individual choice to produce optimal outcomes. Social problems demand social solutions.
That is the argument for internet regulation, and for age verification laws specifically. They target the users with the weakest decisional faculties who are most vulnerable to psychological, social, and commercial manipulation and who require experience of life without unfettered digital technology before they can freely choose it. Age-gating the internet is by no means the entirety of the tech reform agenda; governments must do far more to structure a system from which good technologies naturally emerge. But it is a crucial first step.
Framed this way, you might think online age verification to be a measure to which few would object. But you would be wrong.
The Ideological Cases Against Age Verification
To recapitulate, the two most fervent objections to age verification mandates are purely ideological. They emanate from opposite ends of the political spectrum: the libertarian right, which prizes individual autonomy and detests government overreach, and the progressive left, with its concern for the right to explore and create one’s own identity free of stigma or pressure.
To begin with the right, the allegation that age verification laws represent an abnormal expansion of state power and diminution of parental and child autonomy is simply false. Age-based restrictions on consumption and activity are the norm in the offline world. Minors face an array of restrictions on their ability to take actions that society thinks they shouldn’t—for instance, consume alcohol, use drugs, gamble, purchase analog pornography, consent to sex, marry, sign binding contracts, and much more. It’s only in the digital realm that they play by an unusual set of rules.
The argument that age verification laws unduly limit individual autonomy is stronger, as these mandates do limit minors’ choices. But they do so in order to foster a healthy capacity for agency. Cordoning off parts of the internet helps release minors from psychological and social pressure to frequent them. It also gives youth a baseline experience of a world that doesn’t revolve around digital technology, a prerequisite for rationally deciding whether to continue living in that world or not. And in addition to these agency-related benefits, age verification mandates help combat the psychological, attentional, epistemic, and moral harms of digital technology mentioned above.
The case from the left begins on firmer ground, as it seems true that some internet regulation has been motivated by the desire to give communities and parents greater control over the information their children absorb, including with respect to identity and sexuality. Of course, any laws that take aim at particular groups or identities deserve to be firmly rejected. But currently, children don’t have control over the information environment that shapes their identity; private companies do. As a general rule, it’s hard to say that digital platforms should decide what content children can and can’t see, rather than democratic governments and—through parental consent requirements—families. And again, even if you reject that argument, any limitations on children’s ability to explore their identity must be weighed against the many other benefits these laws bring.
Ultimately, to be swayed by one of these two critiques, one simply has to have an unusual moral worldview. Either formal, negative autonomy is valued over all else, or the capacity for unconstrained identity formation is. Neither of those views is very appealing or very popular. And in democracies, policy should track the moral vision of the public, not the commitments of idiosyncratic ideologues.
The Technocratic Case Against Age Verification
The most influential source of resistance to internet regulation has, by contrast, made every effort to shed its ideological baggage. An article published by The Verge, a leading tech news publication, titled “Welcome to the ‘papers, please’ internet,” typifies the arguments made by tech policy experts against age verification laws. The first misstep, they claim, is instituting overbroad restrictions that stretch beyond the few areas of clear, proven harm, such as online harassment or extortion. States are instead implementing prohibitions that don’t derive from a fully formed social scientific consensus (for instance, efforts to improve mental health by limiting social media use) or, even worse, moralistically flailing at age-old scarecrows (like online pornography).
The second misstep is enforcing those restrictions with ineffective, invasive age verification techniques. These methods are allegedly often “trivially simple” to circumvent—for instance, by snapping a photo of a video game avatar rather than one’s own face to submit for machine-learning age estimation, or by disguising one’s location with a VPN. These methods are also threats to online privacy and anonymity, and they herald an internet where users’ offline identity will be linked to their online activity and their data vulnerable to all who want it.
To begin with the first purported misstep, the assertion that digital technology’s negative effect on youth is a “complicated and unsettled question” is misleading at best. In addition to the literature cited above, consider a survey of over a hundred academic experts, who overwhelmingly assent to many of the allegations of harm made by leading psychologists. This question is only “unsettled” in the most academic sense of that term, meaning that the precise nature and magnitude of the causal relationship between, say, social media use and mental health outcomes is a live research topic rather than definitive science. But it would be absurd to hold public policy to the same empirical standards as claims made in a college psychology textbook. Governments should act when they have reason to believe they can make their citizens better off or when the preponderance of the evidence tilts in a certain direction, not only when they achieve perfect causal identification worthy of publication in a top journal.
More generally, this argument falls victim to the fallacy that states should only act to prevent harms that they can quantify. Internet regulation isn’t only motivated by the literature connecting social media use and negative psychological outcomes; it also aims to promote in-person socialization, an invaluable aspect of human life. And to instill agency, so that minors can eventually make reasoned decisions about digital activity. And to safeguard finite human attention. And to shield children from turbocharged social comparison. The list of unquantifiable justifications goes on and on.
Everyone recognizes that states can act on the basis of intangible ideals—for example, human dignity or equality—rather than needing a proven causal connection to a quantifiable outcome. That’s why the founders didn’t need to demonstrate that speech protections cause lower rates of anxiety and depression before ratifying the First Amendment. When democratic governments have reason to believe that legislation will improve lives, whether quantitatively or qualitatively, they need no additional justification to act.
The second alleged misstep—that the technical means by which states enforce age gates are ineffective and invasive—is just as far off base. Mostly, that’s because novel age verification techniques exist that are nearly perfect on every dimension. The foremost example is digital ID signaling technology, which allows users to download a virtual copy of a state ID into a mobile app and then use that app to send a binary signal to a website: yes, this user is over the age threshold, or no, they are not. Because the IDs are state-issued, they’re perfectly accurate. Because they reveal nothing but the user’s age status, they’re perfectly anonymous. And because they don’t track users’ activity, they’re perfectly private. Digital IDs are inexpensive for firms and free for users; they can serve all residents regardless of immigration status; and ID acquisition isn’t a significant barrier to access, since fewer adults lack a government ID than lack a smartphone.
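To make the binary-signal idea concrete, here is a minimal, hypothetical sketch of how such an age attestation could be checked. It is not any state’s actual protocol: the payload format, the function names, and the collapsing of the state issuer and the user’s wallet app into a single party are illustrative assumptions. The point is only that the website verifies a signed one-bit claim and never sees a name, a birthdate, or an ID number.

```python
# Illustrative sketch only; not any state's real digital ID protocol.
# Requires the third-party `cryptography` package for Ed25519 signatures.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- Issuer side (standing in for the state ID system and the wallet app) ---
issuer_key = Ed25519PrivateKey.generate()      # state signing key (illustrative)
issuer_public_key = issuer_key.public_key()    # published so websites can verify

def issue_age_signal(over_16: bool) -> bytes:
    """Sign a one-bit claim; the payload contains no identity fields at all."""
    payload = json.dumps({"over_16": over_16}).encode()
    return payload + b"." + issuer_key.sign(payload)

# --- Relying-party side (the age-gated website) ------------------------------
def accept_visitor(signal: bytes) -> bool:
    """Admit the visitor only if the signal is validly signed and says over_16."""
    payload, _, signature = signal.partition(b".")
    try:
        issuer_public_key.verify(signature, payload)
    except InvalidSignature:
        return False
    return json.loads(payload).get("over_16", False)

print(accept_visitor(issue_age_signal(True)))   # True  -> admit
print(accept_visitor(issue_age_signal(False)))  # False -> block
```

A real deployment would add credential issuance to the user’s phone, key rotation, and revocation, but the privacy property this sketch illustrates is the same: the site learns a single bit of information.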
The use of digital IDs for online age verification is still nascent, but they are available in a quarter of U.S. states and under development in the E.U. When they are widely adopted, online age verification will become a solved problem. And supplementary techniques are more than sufficient to handle the few edge cases in which digital IDs can’t be applied. Of course, given the ability to evade age gates through technical means such as VPNs, age verification will never be one hundred percent effective. But that isn’t the goal. The goal is to maximize the friction of accessing age-gated sites and to disestablish their use as a social norm.
Ultimately, while this third source of opposition avoids the objectionable ideological stances of the first two, it fails to support its factual contentions. In that sense, the Thielian and leftist objections are the more coherent: they follow straightforwardly from their unpopular premises, while the technocratic view collapses along with its mistaken ones.
The Digital Upswing
The point of this point-by-point rebuttal is to show that none of these objections undermines the case for online age verification. But it also helps explain why the internet regulation movement has enjoyed such success: the public is rejecting the unpopular moral ideals that have until now guided digital regulation in the West. Obviously, that includes the rejection of worldviews that value autonomy or unfettered exploration of identity above all else. But it also includes the rejection of the ideal of technocracy, the idea that the exercise of democratic power over the digital sphere should be limited by experts’ sense of an effort’s propriety or by the statistical validity of the tests on which the argument rests.
The West is waking up to the fact that the digital technologies on which we spend nearly half of our conscious lives profoundly influence our experience, actions, and identities. In a democracy, the public—not firms nor experts nor the market—should have final say over the nature of that influence. Citizens should reason morally about our technologies and act collectively on that basis. The burgeoning movement of commonsense internet regulation, from New York to Louisiana, from the UK to Brazil, shows that we finally are.
The author has participated in several legislative and regulatory projects across the U.S. related to internet regulation and age verification, including as an advocate, expert witness, and government employee.