It has been a sad couple of weeks when it comes to abuse on social media. Perhaps most infamously, last week Facebook’s live video capability was used to broadcast to the world the torture of a young man in Chicago. Last month a female senator in Mexico was beaten in the streets of Mexico City and, after posting images of her injuries online, became the subject of a vicious harassment campaign on Twitter under the hashtag “#GolpearMujeresEsFelicidad” (“Beating Women Is Happiness”). Just a month earlier, calls to assassinate President Trump propagated widely on Twitter. What can we learn from these three incidents about how social media platforms view online abuse in 2017?
In the Facebook case, the live-streamed attack was viewed by more than 16,000 people over the 30 minutes it was broadcast, with many asking why the platform did not intervene to stop it. The company did not respond to a request for comment, but the Guardian reports that when it reached out, Facebook refused to say whether anyone had reported the video as abusive while it was airing, even though the video drew many comments suggesting that users were horrified by it.
This suggests that, in addition to its manual abuse-flagging button, Facebook could also take user comments into account, automatically flagging live videos whose comments contain a high density of words expressing alarm or disgust. Such videos would then undergo manual review even if no one explicitly flags them as concerning, eliminating the possibility of a video spreading simply because viewers are too afraid to report it.
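As a rough illustration of how such a heuristic might work, the sketch below scores a window of recent comments against a small list of alarm words. The word list, threshold, and the idea that a hit routes the stream to a human-review queue are all assumptions for illustration, not anything Facebook has described publicly.

```python
# Minimal sketch of comment-based flagging for live videos.
# The word list and threshold are illustrative assumptions only.

ALARM_WORDS = {"stop", "help", "police", "horrible", "sick", "disgusting", "report"}

def should_escalate(comments, threshold=0.15):
    """Flag a live video for human review when a high fraction of
    recent comments contain words expressing alarm or disgust."""
    if not comments:
        return False
    alarmed = sum(
        1 for c in comments
        if any(word in c.lower().split() for word in ALARM_WORDS)
    )
    return alarmed / len(comments) >= threshold

# Example: the most recent comments on a live stream would be passed in
# by the platform's comment pipeline (a hypothetical integration point).
recent = ["this is horrible", "someone call the police", "lol", "stop this now"]
print(should_escalate(recent, threshold=0.5))  # True -> route to manual review
```

A production version would obviously need to handle punctuation, misspellings, and multiple languages, but even a crude signal like this could trigger review long before a video racks up thousands of views.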
Given the length of time the video was allowed to continue, many users were able to save, repost, and share it extensively across the platform. In a statement issued in the video’s aftermath, Facebook said: “We do not allow people to celebrate or glorify crimes on Facebook and have removed the original video for this reason. In many instances, though, when people share this type of content, they are doing so to condemn violence or raise awareness about it. In that case, the video would be allowed.”
This highlights the critical role “intent” plays in what Facebook decides to remove and what it allows to remain. At the same time, it misses the immense issue that even well-meaning reproduction of a video of abuse still revictimizes the victim. Having a video of graphic abuse republished and shared tens or even hundreds of thousands of times means it continues to spread and to be viewed by more and more people, depicting someone in a moment of ultimate vulnerability. Even if every one of those republications were intended to “condemn violence or raise awareness about it,” at the end of the day the person depicted in the video is still revictimized with each share. Facebook did not respond to a request for comment on its stance on revictimization or the sensitivities around posting abusive content even to condemn it.
If Facebook wanted to block most reposts and shares of such content, it would be relatively trivial to do so. Facebook, Google, Twitter, and Microsoft all pledged last month to take the same digital fingerprinting technology used to fight child pornography and apply it to banning reposts of extremist content the platforms have removed. Similar technology scans videos posted to platforms like YouTube for illegally uploaded copyrighted material and removes it. While there are still ways to subvert such fingerprinting, it would be technologically straightforward for Facebook to block the majority of reposts of content its reviewers have flagged as banned. This suggests that allowing republication of banned content represents a senior-level policy decision at Facebook, rather than a simple technical limitation.
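To make the mechanism concrete, the sketch below shows repost blocking by fingerprint matching. Production systems of the kind the companies have pledged to use rely on perceptual hashes that survive re-encoding and cropping; the plain SHA-256 stand-in here only catches byte-identical copies, so it understates what the real technology can do.

```python
# Simplified sketch of hash-based repost blocking. A real deployment would
# use a perceptual fingerprint; SHA-256 here is a stand-in for illustration.
import hashlib

banned_fingerprints = set()

def fingerprint(video_bytes: bytes) -> str:
    # Stand-in for a perceptual fingerprint of the video content.
    return hashlib.sha256(video_bytes).hexdigest()

def ban(video_bytes: bytes) -> None:
    """Record the fingerprint of content reviewers have removed."""
    banned_fingerprints.add(fingerprint(video_bytes))

def allow_upload(video_bytes: bytes) -> bool:
    """Reject any upload whose fingerprint matches previously banned content."""
    return fingerprint(video_bytes) not in banned_fingerprints

# Once reviewers remove the original, every identical repost is blocked.
original = b"...raw video bytes..."
ban(original)
print(allow_upload(original))        # False - repost of banned content blocked
print(allow_upload(b"other video"))  # True  - unrelated upload passes
```

The point is not the hashing details but the workflow: once a reviewer bans an item, the lookup at upload time is cheap, which is why the continued circulation of such videos looks more like policy than technical constraint.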
Muddying the waters, Facebook has been very quick to remove other content from its platform, such as an iconic Vietnam War photograph and an image of an Italian public square that featured a nude 16th-century statue. In both cases the platform apologized for removing the images, but the speed with which it removed them and the totality of their removal (deleting even a head of state’s repost of the Vietnam War image) suggests that content Facebook finds offensive can be readily removed.
When I posed similar questions to Twitter about the tweets described above, a company spokesperson responded by copy-pasting Twitter’s Abusive Behavior Policy into the reply and linking to it, specifically calling out this statement from the policy: “we do not tolerate behavior that crosses the line into abuse, including behavior that harasses, intimidates, or uses fear to silence another user’s voice.”
When I responded and asked whether this meant the tweets in question were therefore not considered abusive under Twitter’s guidelines, since the company had not immediately removed them despite widespread calls to do so, the spokesperson replied with “You have our statement.”
This raises the fascinating question of whether, through its statement, Twitter is asserting that the tweets in question had been reviewed and were simply not considered to violate its abuse policies at the time, or whether the tweets would have been considered in violation but the company simply lacked the internal resources to catch them quickly enough. It is noteworthy that instead of issuing a statement confirming that the tweets were indeed a violation of its policies, the company chose to point to its Abusive Behavior Policy, which states that anything Twitter considers a violation will be immediately removed.
What does this mean for the future of abuse on social media? At the most basic level, when you have a global platform that reaches across many very diverse countries, you are going to set very different cultures on a collision course with one another, meaning that one person’s artistic photograph of a 500-year-old statue in a public Italian city square might be deeply offensive “sexually explicit” pornography to another. On a second level, however, there are plenty of cases of what most would agree is clear-cut abuse: a live video of the actual torture of another person, calls for violence against women, or calls to assassinate the president (the latter of which is also a violation of American federal law). Both Twitter and Facebook could leverage technology to enforce blanket bans on reposts of content they flag as banned, and tools like deep learning could help autonomously identify such content far faster, for example by flagging live broadcasts as they air.
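As a very rough sketch of that last idea, a platform could sample frames from a live broadcast and score each with a trained classifier, escalating to human review after several high-scoring frames rather than waiting for user reports. The scoring function below is only a placeholder; the model, threshold, and escalation rule are assumptions, not a description of either company’s systems.

```python
# Sketch of screening a live broadcast as it airs by sampling frames.

def score_frame(frame) -> float:
    """Placeholder for a deep-learning model that returns the probability
    that a frame depicts graphic violence (hypothetical model)."""
    return 0.0

def monitor_stream(frames, threshold=0.9, hits_needed=3):
    """Escalate a live stream to human review once several sampled frames
    score above the threshold, instead of relying solely on user reports."""
    hits = 0
    for frame in frames:
        if score_frame(frame) >= threshold:
            hits += 1
            if hits >= hits_needed:
                return "escalate to human review"
    return "keep monitoring"

print(monitor_stream(range(100)))  # "keep monitoring" with the placeholder scorer
```

Even a classifier with modest accuracy would be useful here, since its job is only to prioritize streams for human reviewers, not to make removal decisions on its own.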
Each time the platforms miss something, the typical response from the companies tends to be along the lines of limited resources: the platforms process so much content that they simply lack the human review resources to go through all of it. Yet when it comes to other fields like food safety, we don’t argue that salmonella outbreaks are perfectly acceptable because it would cost too much for companies to invest in the equipment, training, and processes to avoid them. We understand that there is always a risk of an outbreak, but we expect food processing companies to pay the costs of preventing it to the best that technology and human capability permit today.
When it comes to social media, that raises the question of whether platforms like Twitter and Facebook should be legally required to hire sufficient human review personnel to review every flagged piece of content within a certain time window, such as 15 minutes or less. The monetary cost of hiring the number of reviewers needed to offer that level of response time would likely be substantial and would severely harm these companies’ bottom lines. That, in turn, would likely spur targeted and sustained investment by companies like Twitter and Facebook in better automated removal tools and better workflows that they simply lack the monetary incentive to develop today.
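A back-of-envelope calculation gives a sense of that cost. Every number below (flag volume, review time, shift length) is an assumed figure chosen purely for illustration, and the simple throughput math ignores the extra queueing headroom an actual 15-minute guarantee would require.

```python
# Back-of-envelope staffing estimate for round-the-clock flag review.
# All figures are illustrative assumptions, not real data from any platform.

flags_per_day = 1_000_000           # assumed volume of flagged items per day
minutes_per_review = 5              # assumed average time to review one item
reviewer_minutes_needed = flags_per_day * minutes_per_review

minutes_per_shift = 8 * 60          # one reviewer working an 8-hour shift
shifts_needed_per_day = reviewer_minutes_needed / minutes_per_shift

print(f"Reviewer shifts needed per day: {shifts_needed_per_day:,.0f}")
# ~10,417 shifts per day under these assumptions, before accounting for
# weekends, attrition, language coverage, or reviewer well-being support.
```

Whatever the true figures, the order of magnitude helps explain why the platforms treat fast human review as an expense to be minimized rather than a baseline obligation.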
On the other hand, this would also create the perverse incentive for social media platforms to adopt a “remove first” policy, in which they remove content at the first report and restore it only after review and confirmation, similar to what the DMCA’s safe harbor provisions require of them for copyrighted content. Extending the safe harbor approach to abuse would likely have an immediate and far-reaching impact on removing abuse from social media, but it would create a whole new category of abuse in which trolls flag every post by those they don’t like, causing the victim’s posts to be constantly removed. It would also place cultural differences at the forefront and intensify disagreements over what precisely constitutes “abuse” and what is a removable offense.
Putting this all together, today there simply is no incentive for social media platforms to fight online abuse. Abusive posts still bring in considerable ad revenue, and the more content that is posted, good or bad, the more ad money flows into their coffers. Advertisers could fight back by placing mandates on their ad buys that their ads not appear next to abusive posts, but it is unclear how this would be enforced and who would arbitrate what counts as “abusive.” In the end, until there are external pressures that force the platforms into combating abuse, it is unclear whether we will see any real change in 2017.