In mid-September, The Wall Street Journal published leaked internal Facebook documents detailing Instagram’s harmful effects on teenage girls. According to the internal research, the app increases the prevalence of body image issues and suicidal thoughts among adolescents. The company was even planning to launch an Instagram for kids to attract more users, a plan that was scrapped in light of the scandal. What a shock.
Social media platforms have become so dangerous in large part because of the algorithms that determine what content users see. Facebook’s algorithms prioritize content that engages people, and engagement is most often driven by outrage. Provocative posts inspire users to comment and interact, which leads to the prioritization of dangerous content.
These algorithms radicalize people toward extremism and contribute to the spread of disinformation. They fueled QAnon, the anti-vax movement, and the electoral plots that led to the January 6 insurrection. Facebook has also been used as a tool by authoritarian governments around the world: strengthening far-right campaigns in the United States and Brazil, stoking genocidal violence in Ethiopia and Myanmar, enabling potential spying and surveillance by China and Iran, and serving as a platform for Russian influence operations and recruitment by terrorist groups.
Facebook claims that it would be irrational to use this type of algorithm since advertisers avoid associating with harmful content. But the proof is in the pudding. Facebook is full of harmful content and advertisers keep pouring in.
Facebook not only has its own platform, but also Instagram and WhatsApp, and the company has a habit of using predatory practices to capitalize on the social media market.
Since the leak, former Facebook product manager Frances Haugen has come forward as the whistleblower in a 60 Minutes interview and has since testified before Congress on how the company has chosen profit over the well-being of its users and deliberately hidden the damage caused by its platforms.
Facebook, unfortunately, has no incentive to choose its users over profits. The company is protected from repercussions by Section 230 of the Communications Decency Act of 1996, which states that online intermediaries cannot be held responsible for information posted on their platforms. This means that Facebook can self-regulate. Obviously, the social network is not doing enough to handle the rampant human and drug trafficking on the platform, and it leaves much of the misinformation, cyberbullying and violence accessible.
On October 4, the day after Haugen’s 60 Minutes interview and a day before her testimony to Congress, Facebook and all of its subsidiary platforms suffered a major outage lasting about six hours. There was another blackout a few days later.
At times this story felt like a well-choreographed soap opera script. Haugen timed her interview perfectly to make sure people were familiar with the documents. Then Facebook went dark, highlighting society’s dependence on its apps, and its flaws were laid bare before the government the next day.
Nevertheless, the global blackout demonstrated Facebook’s role as a public utility in many countries. WhatsApp, Facebook’s encrypted messaging platform, is essential for communication in Brazil, India and Indonesia and is crucial in war-torn Afghanistan and Syria. Small businesses around the world rely on Facebook and its associated apps, and many reported lost revenue due to the outage. For some in developing countries, Facebook is their only link to the Internet.
Facebook’s platforms are unquestionably crucial around the world. According to Haugen’s testimony, the company and its CEO and chairman Mark Zuckerberg have the power and authority to perpetuate and abuse people’s dependence on those services.
Controversy is not new to Facebook. In 2018, it was revealed that Facebook had exposed the data of millions of users, without their consent, to the British firm Cambridge Analytica, which used it to aid right-wing campaigns in the United States and the United Kingdom in 2016. The scandal showed that Facebook’s users were themselves the product.
The company has since tightened data security, but it continues to collect, use and share people’s data with third parties for targeted advertising. Even though the scandal exposed a massive privacy breach and many users threatened to quit in response, the company walked away with only a fine and no drop in its monthly active users.
We grew up with the rise of social media, and it’s a staple in our lives, which makes it hard to take a step back. Social media is an essential way to connect and share information. Even on campus, many student organizations and university departments use Instagram to share events and information. Our relationship with social media became even clearer during the pandemic, when online platforms became the only way to connect with each other.
Facebook is a ubiquitous part of our lives despite its many wrongdoings. Its policies harm people and nations in tangible ways and shape discourse around the world.
Despite this, there has been little political will in the United States to act, owing to lobbying and a lack of consensus on a solution. Legislation worldwide has largely left platforms responsible for self-regulation, with the notable exception of the European Union’s General Data Protection Regulation, implemented in 2018 in response to the Cambridge Analytica scandal, which regulates the storage and use of the data of people in the region.
We need to start disengaging from Facebook’s platforms in a meaningful way if we are to see real and lasting change. Otherwise, this cycle of power and influence will continue, unchecked, for generations.
Social media is a useful tool, connecting people across the world, but it should not be used to the detriment of humanity.
Ruhika Chatterjee is a junior from Princeton, NJ, studying molecular and cellular biology.