Facebook clarifies how it polices social network for lurid content
Facebook will still rely on users to report offensive or inappropriate posts, but as of Sunday night, those users have more guidance on what the company considers a violation of its standards.
Keeping scandalous photos, harassing language and violent content off its network has been a priority for Facebook for years. But some of its 1.39 billion users have criticized the haphazard application of its content guidelines.
In a blog post Sunday, Facebook officials responded with an updated guidebook that adds clarity and examples to terms such as “hate speech” without changing the standards themselves.
Any flagged content is reviewed by a moderator regardless of whether one person reports it or 100. The content gets removed if it’s in violation of the standards or a law.
“In order to treat people fairly and respond to reports quickly, it is essential that we have policies in place that our global teams can apply uniformly and easily when reviewing content,” Facebook writes. “As a result, our policies can sometimes be more blunt than we would like and restrict content shared for legitimate purposes. We are always working to get better at evaluating this content and enforcing our standards.”
In the case of nudity, displaying genitals or “focusing in” on a fully exposed butt isn’t allowed. But breastfeeding images and post-surgery breast photos are, as is nudity in art. Sexually explicit written passages may also be deleted, Facebook warns.
Attacking people based on gender, race and other traits is unacceptable. But hate speech can be linked to or re-posted for the purpose of raising awareness, though Facebook expects users to make that rationale clear.
Facebook also wants users to sign up with their “authentic identity.”
“When people stand behind their opinions and actions with their authentic name and reputation, our community is more accountable,” according to the guidelines.
Chat with me on Twitter @peard33