Social Media Content Reporting

Part 01 of three

Content Reporting UX 1

Instagram, Snapchat and Music.ly all allow users to report content they find inappropriate or offensive. Within the reporting flow, each platform lets users be extremely specific about why they are reporting the content.

Instagram, Snapchat and Music.ly are all extremely popular social media platforms that allow users to upload their own content. To maintain standards and keep their platforms as inclusive as possible, each app gives users the option to report content they find inappropriate or offensive.

What follows is a detailed, step-by-step description of how users can report inappropriate or offensive content on each of the three apps. However, after an extensive and exhaustive search that covered Snapchat's and Music.ly's apps and websites as well as multiple other reputable resources, no information was available to indicate what percentage of content is flagged as inappropriate or offensive on either of these platforms.

Instagram

Instagram has a well-established set of community guidelines which focus on making the app a safe place where users can post their own images while always following the law. Users are asked to respect everyone on the app, not to spam each other and not to post nudity. Inappropriate and offensive content is any content that does not follow these community guidelines.

To report a post they find offensive or inappropriate, users tap the three dots above the post and select "report". They are then asked whether the content is "spam" or "inappropriate". From there, the user can choose a reason for reporting, with options such as "hate speech or symbols", "nudity or pornography" or "I just don't like it".

If the user chooses an option that concerns a wider audience, such as "hate speech or symbols", Instagram shows a list of the kinds of images it removes, such as "posts with captions that encourage violence or attack anyone based on who they are". If Instagram's description matches the user's concern, they can tap the "report" button. If the user selects "I just don't like it", they are instead given the option to block the poster so they won't see their content anymore.

After clicking the "Report" button, users will be shown a message saying "Thanks for reporting this [post]".
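Taken together, the flow described above is essentially a small decision tree: a top-level category, an optional specific reason, and a closing confirmation. The short Python sketch below models that structure purely for illustration; the labels are the ones quoted in this report, while the data structure, placeholder descriptions and function names are our own and do not reflect Instagram's actual implementation.

# Hierarchical report flow as described above; structure and names are illustrative only.
REPORT_REASONS = {
    "spam": None,  # no follow-up step; the report is sent straight away
    "inappropriate": {
        "hate speech or symbols": "We remove: posts with captions that encourage violence or attack anyone based on who they are.",
        "nudity or pornography": "(placeholder for Instagram's description of removed content)",
        "I just don't like it": "(placeholder; the app offers to block the account instead)",
    },
}

def walk_report_flow(category, reason=None):
    """Return the text a reporting user would see for a given path through the flow."""
    follow_up = REPORT_REASONS[category]
    if follow_up is None:
        return "Thanks for reporting this [post]"
    return follow_up[reason]

print(walk_report_flow("inappropriate", "hate speech or symbols"))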

Snapchat

Snapchat provides an extremely user-friendly system for reporting offensive content. Unlike Instagram, Snapchat does not publish a specific set of guidelines to be followed; however, users can report any post they find offensive.

To report content as offensive, users press and hold on an image or video until a flag icon appears in the bottom-left corner. Tapping the flag takes the user to a page listing reasons for the report, such as "threatening, violent or concerning", "harassment or hate speech" or "they are pretending to be me". Once the user has chosen an option, they are shown a further set of options. For example, if "harassment or hate speech" is chosen, the user sees a list that includes "I am being bullied or harassed" and "It's hate speech targeting me or another group". After selecting an option, the user can provide more information in a text box. Once the report is submitted, a message pops up saying "Report Sent!" with a "Thank you" note from Snapchat and a further option to block the user who uploaded the offensive content.

Music.ly

Music.ly allows users to report offensive content by tapping the "share" button, which brings up a number of options including "report". Similar to Instagram, Music.ly then asks whether the content is being reported because "it's spam" or because "it's inappropriate". When the user chooses "it's inappropriate", they can select a specific reason, including "harassment or bullying", "self injury" or "violence and harm". The user is then taken to a page describing the kinds of posts Music.ly removes, such as "We remove: posts that contain credible threats". The user can then continue by tapping "report". Alongside this button there is an option to block the poster so the user no longer sees their content or receives messages from them.

In summary, Instagram, Snapchat and Music.ly all take reports of offensive and inappropriate content very seriously. All three platforms allow users to be very specific about their reasons for reporting, and in every case users are also given the option to block the user who uploaded the offensive content.

Part 02 of three

Content Reporting UX 2

INTRODUCTION
Facebook, Twitter and Flipagram have established community standards and policies on the types of content they consider unacceptable. Unacceptable content includes abusive or bullying posts, nudity and sexual activity, hate speech, violence and graphic content, "spammy" content, and unlawful content.

Although Facebook and Twitter may proactively remove such content, users can also report content themselves. In this review, we provide a step-by-step process for reporting such content on each of these social media platforms. Screenshots provided are taken directly from each website. We also briefly comment on what happens when a user submits such a report.

CONTENT REPORTING ON FACEBOOK
As a general rule, users can report inappropriate or offensive content on Facebook using the "Report" link. This link is available for reporting profiles, posts, photos and videos, messages, pages, groups, ads, events, fundraisers, questions and comments, and even for reporting something that users cannot see because they don't have a Facebook account.

The "Report" link can be accessed by clicking the "overflow" (…) icon or a "down-arrow" icon located on the right side of a profile or post. For example, the first screenshot on Facebook Report Steps shows how to find the "Report" link to report a post. Once a user clicks the "Report post" option, a new screen will pop up that asks why the user wants to report this post (second and third screenshots).

Facebook also gives users alternative ways to handle the post besides reporting it: blocking the page that created the post, hiding posts from that page, or sending a message to that page asking it to remove the post (last screenshot).

When users flag something, Facebook reviews the content and removes it if it is deemed to violate the Community Standards. Reports remain anonymous, so the people who are reported will not know who reported them. Users can also monitor the status of their reports, see when and what decisions have been made, and cancel their reports if they change their minds.

For users who want to file a report but don't have a Facebook account, Facebook provides a form that the users can fill out and send.

CONTENT REPORTING ON TWITTER
Twitter is a platform where users can express whatever is on their minds. Therefore, it doesn't screen or remove potentially offensive content, as long as the content adheres to the Twitter Rules, which are quite similar to Facebook's Community Standards. It also doesn't mediate content or intervene when there is a dispute between users.

Twitter encourages users to keep an open mind and to put things in context whenever they see content they may consider offensive. It suggests that users interact directly with the person who posted the content by replying to the tweet or sending a direct message. Users can also simply unfollow or block the person whose content they don't like.
If users feel that the content violates Twitter Rules or Twitter's Terms of Service, they can report it. The types of content that can be reported include a tweet, an individual direct message and a direct message conversation.

The method for reporting unacceptable content on Twitter is similar to that on Facebook. Users can find a "Report Tweet" link under the "down-arrow" at the top of a tweet, hover over an individual direct message until the "Report" link appears, or click the "more" icon on a direct message conversation.

As an example, we provide screenshots on reporting a tweet in Twitter Report Steps. Once users click the "down-arrow" at the top of a tweet (screenshot on page 1), they can see the "Report Tweet" option. After they click this option, a screen pops up asking for the reasons for reporting the tweet (screenshot on page 2). If users select "It's abusive or harmful," Twitter asks for additional information in a comment box (screenshot on page 3).

When users report a tweet or a direct message, the reported tweet or direct message disappears from their timeline or inbox. Twitter follows up by providing recommendations on how to improve their Twitter experience.

CONTENT REPORTING ON FLIPAGRAM
Flipagram doesn't provide as elaborate an explanation of content reporting as Twitter or Facebook do. The process also seems to be much simpler: users can report inappropriate content by tapping the "Report Inappropriate" option on either a comment or a Flipagram.

To report a comment, users need to swipe left on the comment to make the "Report Inappropriate" option appear. Once the option is selected, it seems that a report will be sent to the Flipagram administrator.

Users who want to report a Flipagram or a user need to click the "overflow" (…) option to make the "Report Inappropriate" option appear, then click it. Screenshots for these steps are available directly on Flipagram's website.

There is no detailed information on what happens after a user submits a report on Flipagram.

CONCLUSION
Facebook, Twitter and Flipagram provide similar methods to report inappropriate or offensive content, and users can report a wide variety of content on these platforms. Reports are anonymous on Facebook and Twitter, while little information is available for Flipagram.




Part 03 of three

Content Reporting Overview

The methods social media apps use to oversee content are currently all based on retroactive action: content managers decide to delete content after it has been flagged by users. Some apps, such as Music.ly and Snapchat, don't seem to have any specific system in place, which has led to a number of recent cases of abuse. The main trends in content moderation include governments worldwide putting more pressure on companies to remove content promptly or face fines, an increase in proactive content management, and the growing use of AI in social apps.

We compile an overview of content reporting for the following apps: Facebook, Twitter, Instagram, Music.ly, Flipagram, and Snapchat. The report covers the methods social media apps use to oversee content, the support systems these apps have in place for individuals whose likeness is being used without their permission, and trends in content moderation for social media apps. For some apps only limited information was available, as companies like to keep their inner workings under wraps. However, we provide an extensive overview of the information that was available, particularly on Facebook, Instagram and Snapchat.

Methods social media apps use to oversee content

Different apps have different systems in place to control the publishing of content.

Facebook currently employs 4,500 content moderators and plans to hire another 3,000. The moderators receive two weeks of training and are then equipped with a prescriptive manual for dealing with the different situations they might encounter. This group of employees works on identifying problems after they have occurred, while Facebook's automatic systems are designed to root out extreme content, particularly child sexual abuse and terrorism, before it is posted on the app. Moderators do not do any proactive work. Overall, in the large majority of cases Facebook does not scrutinize content before it is uploaded; instead, it relies on its reporting tools, which allow users to report content that the team of moderators then retroactively removes from the app.

Recently, Facebook, Google, Twitter and Microsoft all agreed to "take down extremist contents within two hours of them being uploaded". Facebook has also built AI software designed to detect the language and style of Al Qaeda and IS, and reports that the system has been effective in 99% of cases in detecting extremist posts.
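As a rough illustration only, the Python sketch below shows how such a pre-upload screen could work in principle: a classifier scores each post and the post is blocked, queued for human review or published depending on the score. The thresholds, function names and scoring logic are our own assumptions; Facebook has not published the internals of its system.

BLOCK_THRESHOLD = 0.9    # assumed confidence above which a post is held back automatically
REVIEW_THRESHOLD = 0.5   # assumed confidence at which a post is routed to human moderators

def extremism_score(text):
    """Placeholder for a trained classifier returning a score between 0.0 and 1.0."""
    flagged_terms = {"example-banned-term"}  # stand-in for a learned model, not a real list
    return 1.0 if flagged_terms & set(text.lower().split()) else 0.0

def screen_post(text):
    """Decide what happens to a post before it goes live, based on the classifier score."""
    score = extremism_score(text)
    if score >= BLOCK_THRESHOLD:
        return "blocked before publishing"
    if score >= REVIEW_THRESHOLD:
        return "queued for human review"
    return "published"

print(screen_post("an ordinary holiday photo caption"))  # -> "published"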

In May 2017, Facebook's internal guidebook to dealing with offensive content was leaked. One of the sources for The Guardian commented that "Facebook cannot keep control of its content. It has grown too big, too quickly."

In summer 2017, Instagram announced that it had started using AI to handle offensive comments. The app now uses machine learning to identify comments that can be flagged as offensive, and the AI also looks at a reply's context to identify further offensive comments. The system takes down a flagged comment for everyone except the person who wrote it, effectively making it invisible to everyone but its author. The underlying system, called DeepText, is able to sort through large amounts of text and derive classification rules. It will also be used to show users more similar content based on the posts they choose to engage with.
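The "visible to no one but its author" behaviour can be sketched as follows. The classifier here is a trivial placeholder standing in for DeepText, and the data model and function names are assumptions made purely for illustration.

from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    hidden: bool = False  # hidden for everyone except the author

def looks_offensive(text):
    """Trivial stand-in for the classifier; a real system would also weigh the thread's context."""
    placeholder_phrases = ("example insult",)  # illustrative vocabulary only
    return any(phrase in text.lower() for phrase in placeholder_phrases)

def moderate(comment):
    """Hide a comment if the classifier flags it as offensive."""
    if looks_offensive(comment.text):
        comment.hidden = True

def visible_comments(comments, viewer):
    """A hidden comment stays visible to its own author but to no one else."""
    return [c for c in comments if not c.hidden or c.author == viewer]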

Last year, Twitter announced changes to how it moderates content on its platform. It now uses a "three-pronged approach to combating abusive and sensitive content on the platform by introducing a safer search option, collapsing abusive and low-quality tweets and preventing repeat offenders from rejoining the platform". The update also enables Twitter to identify users who were previously blocked but are trying to open new accounts: Twitter can now use a blocked user's account history, login history and device history to permanently block such people.
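The sketch below illustrates, under our own assumptions, how such repeat-offender detection could work: a new signup is compared against signals retained from suspended accounts. The fingerprint fields, values and matching rule are illustrative only; Twitter has not documented its actual method.

# Signals retained from permanently suspended accounts (illustrative values only).
BANNED_FINGERPRINTS = [
    {"device_id": "device-abc123", "login_ip": "203.0.113.7", "email": "blocked@example.com"},
]

def is_ban_evasion(signup):
    """Flag a new signup if any retained signal matches a previously suspended account."""
    for banned in BANNED_FINGERPRINTS:
        if any(signup.get(key) == value for key, value in banned.items()):
            return True
    return False

# A new account reusing a banned device is flagged even though the IP and email are new.
print(is_ban_evasion({"device_id": "device-abc123", "login_ip": "198.51.100.4", "email": "new@example.com"}))  # -> True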

Music.ly doesn't share what kind of content management process it has in place, but the app has recently been under fire as it struggles to moderate and filter dangerous content. The app only recently banned self-harm and sexual tags, and teenagers are already finding ways around the ban by using creative hashtags.

Flipagram is owned by Toutiao, a Chinese news aggregator, and very little information about the app is available. We found no mention of how the app regulates content, but considering China's strict laws on media censorship, there almost certainly is a system in place.

Snapchat also doesn't share specifics about its content reporting and content management. The most recent news from Snapchat concerns the Discover feature, where publishers pay for space in order to showcase their content. The guidelines for Discover now "explicitly restrict publishers from posting questionable pictures on Discover that do not have news or editorial value. Snapchat also clarified guidelines that prevent publishers from including reports or links to outside websites that could be considered fake news, saying that all content must be fact-checked and accurate".

Support systems apps have in place for individuals whose likeness is being used without their permission

The main support system Facebook and Instagram have in place for cases of abuse such as revenge porn is the set of guidelines written by the company, which the moderators who handle users' reports then enact. However, these guidelines are extremely confusing and in some cases still allow a lot of content to stay up. Examples of content that is allowed to be published and is not considered to violate Facebook and Instagram's guidelines include:
1. Photos of "non-sexual physical abuse and bullying of children do not have to be deleted unless there is a sadistic or celebratory element".
2. Photos of animal abuse are allowed to be shared. Only extremely upsetting photos are marked as disturbing.
3. All handmade art "showing nudity and sexual activity is allowed but digitally made art showing sexual activity is not".
4. Videos of abortions are not violating any guidelines, as long as they don't show nudity.
5. Facebook allows people to "live stream attempts to self-harm because it doesn’t want to censor or punish people in distress".

Snapchat has recently been in the news over child pornography problems arising from the app. In November 2017, reports surfaced about pedophiles preying on children on Snapchat, introducing themselves as teenagers in order to get children to send them nude photos. There are currently six criminal cases against pedophiles who used Snapchat to find and exploit teens. Police officers note that when Snapchat is the medium of communication, it is very hard for them to find the perpetrators, because the app's disappearing content is difficult to trace.

Trends in content moderation for social media apps

The main trend in content moderation for social media apps is governments actively drafting laws and regulations to sanction social media companies that do not remove offensive content promptly. The leading example is Germany's justice ministry, which "proposed imposing financial penalties of up to €50m on social media companies that are slow to remove illegal content".

The second trend is growing pressure on social media apps to increase the monitoring of content posted on their platforms. Over the last year, reports of abuse and misuse of social media that resulted in criminal prosecution have pushed apps to play a more active role in content moderation. This will lead companies to take a more proactive, rather than reactive, approach.

The last trend, also mentioned in the paragraphs above, is the increasing use of AI instead of employee work hours to identify inappropriate and offensive content. AI can currently read both text and imagery, but accuracy varies and AI can be extremely biased: "One shortcoming of A.I., in general, when it comes to moderating anything from comments to user content, is that it's inherently opinionated by design". However, AI seems to be the only answer to the ever-increasing amount of content posted on social media networks.

CONCLUSION

This overview of content reporting covered a series of apps and included the different methods social media apps use to oversee content, the support systems these apps have in place for individuals whose likeness is being used without their permission, and trends in content moderation for social media apps.