As social media platforms increasingly become the global gatekeepers, deciding what we see and don’t see, and who has a voice and who is suppressed, the myriad decisions they make each day in deleting content and suspending accounts are facing growing scrutiny for the way those many small choices profoundly shape our shared global discourse and our understanding of the world around us. Three recent events put the impact of these choices in stark relief: the Rohingya crisis, corruption claims in China and assault allegations in the US.
Last month a wave of media reports claimed that Rohingya activists attempting to document and share what they said were the conditions and atrocities they faced were having their Facebook posts deleted and their accounts suspended, and that the company was not responding to their requests to have the content restored. Given that Facebook in particular is increasingly becoming the global news front page, with an outsized influence on what news we see and don’t see, when it begins systematically removing content, that content for all intents and purposes ceases to exist for much of the world.
As US Supreme Court Justice Anthony Kennedy put it earlier this year, social media sites “for many are the principal sources for knowing current events … speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge. These websites can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard. They allow a person with an Internet connection to ‘become a town crier with a voice that resonates farther than it could from any soapbox.’”
When asked about the Rohingya activist posts, a Facebook spokesperson responded by email that “We allow people to use Facebook to challenge ideas and raise awareness about important issues, but we will remove content that violates our Community Standards. … In response to the situation in Myanmar, we are only removing graphic content when it is shared to celebrate the violence, versus raising awareness and condemning the action. We are carefully reviewing content against our Community Standards and, when alerted to errors quickly resolving them and working to prevent them from happening again.”
The spokesperson further clarified that for all posts it reviews, the company has native-language speakers who understand the local context of each situation, to ensure its policies are correctly applied. However, given the relatively small size of its reviewer workforce, this language expertise and contextual knowledge likely varies dramatically by geography, language, culture and situation, making it likely that members of minority groups are far less represented on its reviewer teams.
When pressed on how Facebook determined that the deleted posts “celebrate[d] the violence” when media reports seemed to suggest that many of the posts being removed and accounts being suspended were of Rohingya activists documenting atrocities on the ground, a spokesperson would respond only that the company acknowledges making mistakes.
Yet such mistakes can have grave consequences. Given the media’s vanishingly short attention span, social media is one of the very few outlets oppressed groups have to document their daily lives and build awareness of their suffering, as well as to reach out to groups that might be able to help with both immediate and long-term needs.
Thus, removal of such documentation from a social media platform can have the same effect as airbrushing that history away, rendering it invisible to an easily distracted world and depriving those involved of a voice to tell their side of a conflict. While a social media platform removing a photo of a nude art sculpture might be unfortunate, the effective wholesale blocking of countless posts and activists documenting a humanitarian crisis has a very real and profound impact on society’s awareness of that crisis and, in turn, on the ability of affected groups to generate the kind of public outcry that could drive change.
In short, the growing influence of platforms like Facebook means the digital decisions they make can profoundly affect the real world, with real life-and-death human consequences when it comes to crises.
This imbalance of power between activists and the platforms they use to document and spread word of what they experience and uncover extends beyond humanitarian crises. At the end of last month a Chinese activist who has used Facebook to publish accusations of what he claims is corruption by Chinese government officials had his account suspended by the company on the grounds that he had “publish[ed] the personal information of others without their consent.”
While the company noted to the Times that the suspension was based on a complaint that had been lodged about the posts, it declined to say whether the Chinese government was behind the complaint. When asked specifically whether Facebook had conversations about the posts with representatives or affiliates of the Chinese government prior to suspending the user, a company spokesperson responded by email that the company was explicitly declining to comment on whether the Chinese government was behind the suspension. He clarified that all reports of violations of its community guidelines are treated confidentially, and thus even if a national government official formally requested that specific content be removed, the company would not disclose that fact.
The company further clarified that it applies a very different standard than traditional news reporting in how it handles the publication of personal information. While major news outlets like the Times may publish certain personal information about public officials when reporting on allegations of wrongdoing, Facebook emphasized that its community guidelines do not apply such a “news standard” to its platform, meaning that professional journalists, citizen journalists and activists are not treated any differently than ordinary users when writing about issues of public interest.
This is a critical distinction that portends a grim future for investigative journalism and public accountability. News outlets can adhere to standard journalistic practice and accepted norms when publishing stories on their own websites, but as Facebook becomes a gateway to the news and tries to become a native publishing platform rather than merely an external link-sharing site, journalism standards will be forced to give way to Facebook’s arbitrary and ever-changing rules. Instead of occupying a privileged role in the information ecosystem, journalists will be subject to the same restrictions as any ordinary citizen, in an environment where the journalistic firewall between advertisers and content may be far weaker, meaning that content guidelines could, over time, curtail reporting that advertisers view negatively.
Both of these examples reflect ongoing events. What happens when a public interest breaking news story bursts onto the scene, with large numbers of involved individuals coming forward to share what they claim are their experiences and knowledge of the event in question? How do social media companies handle their role as publishers of criminal allegations that the other party may vehemently deny, as well as the deluge of harassment and hate speech that often follows in the wake of such allegations? How does a company balance giving voice to formerly voiceless potential victims while preventing its platform from being used to launch false attacks or hate speech?
Earlier this week, Twitter suspended the account of a prominent actress speaking out against sexual assault who claimed she herself was the victim of assault. Only after a massive public backlash did the company backpedal and clarify that “her account was temporarily locked because one of her Tweets included a private phone number,” followed by the now-routine response “We will be clearer about these policies and decisions in the future.” The company did not respond to a request for comment, but the suspension follows what has become a disturbing trend among social media companies: suspend unpopular voices speaking, in Twitter’s own words, “truth to power,” only to reverse themselves and either blame technical or human error or state that the suspension was correct but that they will try to communicate their policies better in the future.
This raises the question of why social media companies don’t provide more detail when they suspend an account. In Rose McGowan’s case, as in most, the only detail provided by the company was that the actress could “Delete Tweets that violate our rules,” yet it did not provide a list of the offending tweets or why they were viewed as violations. In the case of the Rohingya activists, Facebook identified the posts in question, but provided no detail as to why they were viewed as being in violation and even in public statements to the media provided only vague remarks that the posts violated policy, but declined to state specifically which rules the posts were deemed to have violated.
After all, if the companies are serious about wanting users to adhere to their guidelines and are genuinely interested in allowing users to correct their errors and restore their accounts, they have to provide guidance as to what users are doing wrong and how to fix it. Doing so would also help the companies better educate their users and provide a better experience, one in which users have a path forward to correct legitimate errors.
Indeed, from personal experience overseeing several large human-reviewer initiatives, forcing reviewers to explicitly identify the specific written guidelines they rely on to make a given categorization decision is tremendously useful: it forces them to be explicit in their own minds about what they are relying on, and it creates an audit trail that management can use to refine and adjust problematic areas of policy. In the case of social media companies, requiring reviewers to explicitly flag each policy section violated by a given post would force them to be explicit in their reasoning, would allow Facebook and Twitter to track in real time which areas of their guidelines are under stress, and would allow users to better understand what they are doing wrong or to more effectively contest incorrect removals and suspensions.
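The workflow described above can be illustrated with a minimal sketch. The section codes, policy descriptions and field names below are hypothetical, not drawn from any platform’s actual systems; the point is only to show how requiring a cited policy section at decision time both blocks ungrounded removals and yields an audit trail that can be aggregated to see which rules are under stress.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from collections import Counter

# Hypothetical policy sections a reviewer may cite (illustrative only).
POLICY_SECTIONS = {
    "GV-1": "Graphic violence shared to celebrate the act",
    "PI-2": "Publishing private personal information",
    "HS-3": "Hate speech targeting a protected group",
}

@dataclass
class ModerationDecision:
    post_id: str
    action: str                 # e.g. "remove", "keep", "suspend"
    cited_sections: list        # reviewer must cite specific sections
    reviewer_id: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def __post_init__(self):
        # A removal with no cited section is rejected outright,
        # forcing the reviewer to ground the decision in written policy.
        if self.action != "keep" and not self.cited_sections:
            raise ValueError("removal decisions must cite policy sections")
        unknown = [s for s in self.cited_sections if s not in POLICY_SECTIONS]
        if unknown:
            raise ValueError(f"unknown policy sections: {unknown}")

def sections_under_stress(audit_log):
    """Count citations per policy section across the audit log, so
    management can see which rules are invoked most often and may
    need refinement."""
    return Counter(s for d in audit_log for s in d.cited_sections)
```

Because every removal record carries its cited sections, the same log that tells a user which rule they broke doubles as the management dashboard: a simple count over the log surfaces the guideline areas generating the most enforcement activity.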
Of course, withholding such information offers companies a convenient “out” when their actions generate public scrutiny or outcry. By not providing a list of specific posts or rules being violated, companies can blame simple error for a controversial removal or retroactively identify a less charged reason to have suspended a user.
This is especially important given that social media companies are, at the end of the day, commercial for-profit enterprises, rather than non-profit public good spaces. This means they are beholden to advertisers, who have increasingly pushed back against having their ads run alongside controversial or negative content. In response, the major social platforms have adopted community standards that in some ways mirror Russia’s old “50% positive news” rule.
Just this past week Twitter blocked an advertisement by Rep. Marsha Blackburn, reportedly calling it “an inflammatory statement that is likely to evoke a strong negative reaction,” only to backtrack in the face of criticism even from Facebook’s Sheryl Sandberg, who said “when you cut off speech for one person, you cut off speech for all people.” Twitter’s ultimate response was that “After further review, we have made the decision to allow the content in question from Rep. Blackburn’s campaign ad to be promoted on our ads platform … While we initially determined that a small portion of the video used potentially inflammatory language, after reconsidering the ad in the context of the entire message, we believe that there is room to refine our policies around these issues.” The company did not respond to a request for further clarification on why it changed its ruling in this case.
Whether it is “we will be clearer about these policies and decisions in the future,” “we believe that there is room to refine our policies around these issues” or “when alerted to errors quickly resolving them and working to prevent them from happening again,” social media companies tend to quickly dismiss the impact of deleting posts or suspending users, offering only that they will be clearer in the future or refine their policies. Yet given their resources and outsized influence on the public conversation, the companies have done surprisingly little to offer users more insight into why their posts or accounts are removed, or to make it easier to appeal wrong decisions. At the same time, the companies’ growing power in shaping the public information environment means that each wrong decision can have very real human impact in crisis situations.
As the Times recently put it, social media companies are each day “making decisions about who gets a digital megaphone and who should be unplugged from the web.” One of the great promises of social media was that it would give a voice to all, especially those who have never had one before. The reality is that instead, the same great powers who have always had a voice have had theirs amplified a millionfold, while the voiceless remain silent and the few who find their voice can just as quickly have it taken away in an instant through an opaque process with little recourse. Our vision of a grand utopia, a democratic public square where all may speak, has descended back into the reality from which it came: private walled spaces where the elites broadcast to the masses, who may speak only in turn.