Most researchers and advocates define cyberbullying as something along the lines of "an intentional act of aggression, often (but not always) repeated, mediated through some form of electronic contact." But how do we translate an academic definition into actually understanding the kinds of messages people send and receive online? For example, a bully may make derogatory comments on several of someone’s Instagram photos, but other viewers may see only one of those comments and mistakenly read it as a one-time incident that will go away, rather than a repeated offense. Or, in a different case, two people might be sharing an inside joke that looks like bullying or harassment to outsiders who can see the post on social media. As these examples show, online situations are often ambiguous, making bullying harder to identify online than offline.
These kinds of dilemmas make cyberbullying difficult for online bystanders to recognize. Online bystanders are people who witness aggression online and could potentially intervene, but they often know neither the victim nor the perpetrator, are unsure about the relationship between them, and have little, if any, context for the exchange. Without context, online bystanders have to rely on message content alone, can feel uncertain about when and how to respond, and, consequently, are less likely to intervene and call out a bully.
Our recent study looked at how factors like repetition, the number of offenders, and the re-sharing of messages affect how people perceive bullying messages on sites like Twitter. We found that online bystanders are more likely to recognize bullying when multiple bullies are involved, each posting his or her own bullying tweet. When the situation is more ambiguous – either because there is only a single bully or because several bullies re-tweet the same content – bystanders are less certain whether they are witnessing cyberbullying and, consequently, less likely to take action, like flagging a cyberbullying post. This suggests that if we hope to motivate potential bystanders to become online upstanders, they need to see more of the context of a cyberbullying incident, e.g., the interaction history, the people involved, and the posts exchanged. For example, social media platforms could implement a feature that lets viewers see more context – "context on demand" – for incidents that look like potential cyberbullying, to help them decide whether to flag the post or intervene in some other way.
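To make the idea concrete, a "context on demand" feature might bundle the interaction history, the accounts involved, and related posts into a single view that a bystander can request before deciding whether to flag. The sketch below is purely illustrative; all names and fields are assumptions, not any platform's actual API.

```python
# Hypothetical sketch of a "context on demand" payload. Every field name
# here is an illustrative assumption, not a real platform's data model.
from dataclasses import dataclass, field


@dataclass
class IncidentContext:
    post_id: str
    accounts_involved: list[str]           # who is part of the exchange
    prior_exchanges: list[str] = field(default_factory=list)  # interaction history


def context_on_demand(post_id: str, accounts: list[str],
                      history: list[str]) -> IncidentContext:
    """Assemble the extra context a bystander would see on request."""
    return IncidentContext(post_id=post_id,
                           accounts_involved=accounts,
                           prior_exchanges=history)


ctx = context_on_demand("post123", ["@sender", "@recipient"],
                        ["earlier reply 1", "earlier reply 2"])
print(ctx.post_id, len(ctx.prior_exchanges))
```

The design choice here is that context is assembled only when a viewer asks for it, so the default feed stays uncluttered while bystanders who are weighing an intervention get the history they need.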
How can we get bystanders to help stop cyberbullying?
The problem of recognizing cyberbullying is only the first step toward mobilizing online bystanders into action. According to recent statistics, 70% of adult internet users report having witnessed some form of online harassment. That’s a lot of onlookers – and a lot of people who could potentially step in to stop harassment. However, getting people to actually take action is a complicated process. Even when bystanders recognize cyberbullying, there is no guarantee that they will actually respond. As with the bystander apathy documented in numerous offline studies, people are reluctant to intervene even when they recognize a situation as an emergency. Bystander research tells us that this is because we each assume that someone else will step in. Furthermore, because we don’t know who else is watching online, we also don’t know whether others have already responded.
One critical step in getting bystanders to help stop cyberbullying is getting them to accept responsibility for helping. There are many reasons why a bystander may not accept responsibility in a cyberbullying situation. One is that bystanders may not recognize the hurt that cyberbullying can cause. Educating individuals about the damaging effects of cyberbullying and the effectiveness of intervening, and increasing empathy for victims, can help generate feelings of personal responsibility. Another reason people may avoid responsibility is a lack of accountability for their actions. On social networking sites, bystanders can scroll past a cyberbullying incident without anyone ever knowing they saw it. Creating situations that make bystanders feel less anonymous online may encourage more accountability.
How can we use AI or automated systems to help stop bullying?
Many new cyberbullying prevention tools used by social media sites rely on artificial intelligence and machine learning techniques. These techniques train computer programs to detect certain words or behaviors, such as bullying language on social media. For example, YouTube recently reported that its automated flagging algorithms were responsible for flagging over 80% of the 8.2 million videos removed for violating the site’s community guidelines.
These automated tools are good at finding and removing certain kinds of negative content before it causes harm. However, like many of the humans responsible for moderating content, they aren’t great at evaluating context or navigating the subtleties of human interaction. An AI system is unlikely to tell the difference between a joke between friends and a genuine instance of cyberbullying.
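A minimal word-list filter makes this limitation concrete. The sketch below is a hypothetical keyword-based detector (the word list and messages are invented for illustration, not drawn from any real moderation system): it flags a friendly inside joke just as readily as a genuine insult, and misses a hurtful message that avoids the listed words.

```python
# Hypothetical keyword-based detector illustrating why context matters.
# The word list and example messages are invented for illustration only.
OFFENSIVE_TERMS = {"loser", "idiot", "ugly"}


def looks_like_bullying(message: str) -> bool:
    """Flag a message if it contains any term from the word list."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not words.isdisjoint(OFFENSIVE_TERMS)


# A genuine insult and an inside joke between friends are both flagged:
print(looks_like_bullying("You're such a loser, nobody likes you"))   # True
print(looks_like_bullying("haha you loser, see you at practice <3"))  # True

# While a hurtful message that avoids listed words slips through:
print(looks_like_bullying("Everyone at school is laughing at you"))   # False
```

The filter sees only the words in a single message, not the relationship between the people exchanging them, which is exactly the context problem described above.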
A promising future direction for AI-supported cyberbullying prevention is to understand how people and automated systems can work together. We’ll need to investigate questions surrounding how AI systems interpret human language, and how people interpret the inner workings of AI systems. We’ll also have to tackle the fact that AI systems are trained to do their jobs based on data gathered from the actions of humans, meaning that they’ll likely make the same kind of mistakes humans do.
Cyberbullying, and anti-social behavior in general, are tough problems that are exacerbated by the scale and spread of modern social media. The answers to these cyberbullying questions are multifaceted. Educational programs in schools, along with help from parents and teens, are needed to address cyberbullying. Social media companies should develop new designs and computer systems aimed at minimizing exposure to cyberbullying and supporting solutions to it. Finally, policy and legislation should work together with educational and design solutions to create a well-rounded approach to this complicated problem.