How Facebook's 'Like' Button Transformed the Internet and Why Algorithms Must Go

The Facebook 'Like' Button That Reshaped the Digital World

On 9 February 2009, a simple feature appeared that would fundamentally alter the internet's trajectory. The Facebook 'Like' button, initially described as an "easy way to let people know that you enjoy it," marked the end of chronological timelines and ushered in the algorithm-driven era of social media.

The Algorithmic Shift

This innovation replaced digital scrapbooks with a system prioritizing popularity over recency. Users' feeds transformed from personal updates about friends, family, and pets to curated content from celebrities, brands, and topic pages. This shift introduced now-commonplace terms like 'viral,' 'content creator,' and 'influencer,' effectively ending the era of social networks and beginning the age of social media.
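The shift the paragraph describes can be sketched in a few lines: a chronological feed sorts purely by timestamp, while an engagement-ranked feed sorts by a popularity signal such as likes. This is an illustrative toy model, not any platform's actual ranking code; the `Post` fields and sample data are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float  # seconds since some epoch; larger = newer
    likes: int

posts = [
    Post("friend", timestamp=100.0, likes=3),
    Post("brand", timestamp=50.0, likes=5_000),
    Post("celebrity", timestamp=10.0, likes=90_000),
]

# Chronological feed: newest first, regardless of popularity.
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# Engagement-ranked feed: most-liked first, regardless of recency.
ranked = sorted(posts, key=lambda p: p.likes, reverse=True)

print([p.author for p in chronological])  # friend's recent post leads
print([p.author for p in ranked])         # celebrity content leads
```

The same three posts produce opposite orderings: the friend's fresh update tops the chronological feed, while the celebrity's heavily liked post tops the ranked one, which is the "social network to social media" shift in miniature.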

Other platforms, including Instagram and Twitter, adopted similar approaches, but TikTok dramatically intensified the paradigm with its 'For You' feed, widely regarded as the most aggressively engagement-optimized system yet built.


Devastating Consequences

The algorithm's destructive potential became evident in 2016 when military-linked accounts in Myanmar exploited Facebook to spread hate speech against the Rohingya minority. A United Nations investigator later described the company's algorithm as a "beast" that fueled ethnic cleansing on a massive scale.

These systems have become tools for monetization, division, and radicalization. A leaked internal Facebook study revealed that more than half of users who joined extremist groups on the platform did so because the algorithm recommended them. Movements including QAnon and anti-vaccination campaigns thrived on algorithms designed to maximize clicks, while vulnerable individuals have descended into despair-filled feeds with tragic consequences.

The Impact on Mental Health

Amnesty International's 2025 report, titled 'Dragged into the Rabbit Hole,' documented how children as young as 13 were plunged into a "toxic cycle" of mental health-related content within just five minutes of joining TikTok. The investigation revealed that relatively innocuous content—such as breakup posts or sad background music—could trigger algorithmic responses that rapidly escalate to dangerous material.

The system monitors every interaction: whether users swipe past, linger, replay, or 'like' content. Even spending a few extra seconds on particular videos results in more similar content appearing. Within 45 minutes, accounts were shown suicide-related material, according to Amnesty's findings.
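The feedback loop Amnesty describes, in which implicit signals like lingering or replaying steadily reweight what a feed serves next, can be sketched as a toy model. The signal names and weights below are purely illustrative assumptions, not TikTok's actual system; the point is only that passive attention, not just explicit likes, steers the recommendations.

```python
from collections import defaultdict

# Illustrative weights: even passive attention (lingering, replaying)
# pushes a topic's score up; only an active swipe-past pushes it down.
SIGNAL_WEIGHTS = {"swipe_past": -0.5, "linger": 1.0, "replay": 2.0, "like": 3.0}

class FeedModel:
    """Toy engagement model: per-topic scores updated by interaction signals."""

    def __init__(self) -> None:
        self.topic_scores: defaultdict[str, float] = defaultdict(float)

    def observe(self, topic: str, signal: str) -> None:
        self.topic_scores[topic] += SIGNAL_WEIGHTS[signal]

    def next_topics(self, n: int = 3) -> list[str]:
        # Recommend the highest-scoring topics so far.
        return sorted(self.topic_scores, key=self.topic_scores.get, reverse=True)[:n]

model = FeedModel()
model.observe("comedy", "swipe_past")
model.observe("sad_music", "linger")    # a few extra seconds counts
model.observe("sad_music", "replay")
model.observe("breakup_posts", "like")

print(model.next_topics())
```

After just four interactions, the model already ranks melancholy content above comedy, showing how quickly a feed can drift toward whatever holds attention.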

One case study highlighted a teenager named Maelle, whose feed became dominated by videos normalizing and encouraging self-harm within weeks of joining TikTok. "At first, I used to go on there to have fun," she told researchers. "And then there was a song that came back lots of times... And gradually, I became interested in the lyrics... And bit by bit, it became something darker and darker, like death maybe isn't such a bad idea."

Tragic Outcomes and Legal Responses

While Maelle received help after her parents discovered her situation, others have not been so fortunate. Amnesty documented the case of Marie Le Tiec, who died by suicide after spiraling into an algorithm-fueled mental health crisis. "For these platforms, our children become products instead of human beings," her parents stated.

This case, along with the widely reported death of British teenager Molly Russell and others mentioned in the Amnesty report, has prompted class action lawsuits from families seeking to hold social media companies accountable for their children's deteriorating mental and physical health.

Global Regulatory Responses

In response to these tragedies, countries worldwide are considering social media bans for children. Australia became the first nation to implement a total ban for users under 16 last year, with the United Kingdom, France, and others potentially following soon.


TikTok has stated that it maintains more than 50 pre-set features designed to support the "safety and well-being of teens," and that it "invests heavily in safe and age-appropriate teen experiences." Meta, which owns Facebook and Instagram, claims to share the goal of protecting young people but argues blanket bans are counterproductive, instead promoting its 'Teen Accounts' with automatic parental supervision—though regulators remain skeptical of these measures.

The Push for Broader Algorithm Reform

Despite corporate pledges, advocates hope bans on engagement-based algorithms will extend to all age groups. Organizations like the Center for Humane Technology support "resetting tech" to allow platforms to continue operating while outlawing exploitative business models.

These models have grown even more dangerous with generative artificial intelligence. AI-generated short videos now populate users' timelines, meaning computers not only select what people watch but also create the content they consume.

The 'Algo Brain' Phenomenon

Popular YouTuber PewDiePie, who earned millions through platform algorithms, warns these tools create "algo brain"—depleting attention spans and destroying self-agency. In a recent video urging viewers to break free from algorithmic feeds, he said: "The key is intent. If you go around your life not making your own choices, then who the heck are you?"

Historical Resistance and Current Reality

When Facebook introduced its algorithmic feed in 2009, boycott calls emerged. Similar uproar followed when Twitter and Instagram adopted comparable systems—yet platform popularity continued growing. As former Facebook chief technology officer Bret Taylor observed: "It was always the thing that people said that they didn't want, but demonstrated that they did by every conceivable metric."

Eliminating algorithms will never serve corporate interests, making regulatory action and user responsibility essential to restoring genuine social networks that prioritize human connection over optimized content.