





Social media. Millennials use it. Tech-savvy business people use it. Quirky grandparents use it. Generation Z grew up with it.
Most internet users in 2018 use social media for work or leisure, and every day influencers expose them to new and diverse content. Sounds good, right?
Well, only in theory. According to a study by Amnesty International, Twitter is failing female users with inconsistent, borderline nonexistent protections against online abuse.
“For far too long Twitter has been a space where women can too easily be confronted with death or rape threats, and where their genders, ethnicities and sexual orientations are under attack,” said Kate Allen, director of Amnesty International UK.
Sites like Twitter, YouTube, Reddit, Instagram, and Facebook allow people to post updates and feel included in what’s happening out in the world. Users can be as active and creative as they please. Per Pew Research Center’s 2018 survey, 68 percent of US adults use Facebook, and 94 percent of those between the ages of 18 and 24 use YouTube.
On one hand, these platforms can benefit businesses, brands, and entrepreneurs trying to garner a larger following, because they offer a direct way to receive feedback and interact with consumers.
But there’s a dark side to the Internet. Spam, trolling, and outright toxic content flow between users daily. It’s easy: all anyone needs in order to post is an account. And plenty of abusive commenters get away with violating site policies because enforcement is inconsistent or the policies themselves are vague enough to leave loopholes.
Amnesty International’s study revealed that women get the worst of online harassment: death threats, rape threats, stalking, and racist or transphobic remarks. According to the US Department of Justice, 70 percent of cyberstalking victims are women.
The Royal Society for Public Health conducted a similar study in 2017, asking 1,500 young adults how they felt while using Instagram and Snapchat, two visually intensive platforms. The majority reported feelings of inadequacy and self-loathing. These findings illustrate the social pressure that social media puts on people. It can work directly, through negative comments and bullying between users, but it can also work indirectly, as people are constantly exposed to unrealistic expectations of what they should own, wear, eat, and look like.
It’s impossible to demand that social media sites fix the indirect issues. For the most part, general negative comments are taken with a grain of salt, and a lot of prominent social media figures make light of them.
But for trolls who intentionally post content putting others down, or content that is blatantly and highly offensive, platforms need better methods for ensuring the safety of their users.
Reddit regulates through downvoting: if enough people downvote a post, it is hidden from the thread by default. On Tumblr, YouTube, and Instagram, users can enable a keyword blacklist, specifying words they don’t want to see so that posts containing those words are hidden.
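Under the hood, a keyword blacklist boils down to string matching. The sketch below shows roughly how one could work; the function names and blocked words are invented for illustration and don’t reflect any platform’s actual code.

```python
import re

def build_blacklist_pattern(blacklist):
    """Compile a user's blocked words into one case-insensitive pattern."""
    escaped = (re.escape(word) for word in blacklist)
    return re.compile(r"\b(" + "|".join(escaped) + r")\b", re.IGNORECASE)

def is_hidden(post_text, pattern):
    """Hide the post if any blacklisted word appears as a whole word."""
    return pattern.search(post_text) is not None

# Hypothetical words a user chose to block.
pattern = build_blacklist_pattern(["spoiler", "giveaway"])

print(is_hidden("Huge GIVEAWAY this weekend!", pattern))  # True: whole word matched
print(is_hidden("free g1veaway inside", pattern))         # False: a trivial misspelling slips through
```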
But here’s the catch: no filter system is foolproof because, obviously, not every harmful word and variant spelling can be listed in advance. Negative comments can and do slip through the cracks.
Something has to improve, whether that’s smarter filtering code or manual review of posts that accumulate a significant number of downvotes or dislikes. To sit idly by and let online trolls continue will only create more toxicity. With little to no regulation of comments, people and bots get away with posting offensive, hurtful, false, and even dangerous content. Internet users should feel safe online, no matter what.
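As one rough sketch of what that manual review could look like: rather than auto-removing unpopular posts, a platform might queue anything with a low enough approval ratio for a human moderator. The thresholds and data model below are assumptions, not any site’s real policy.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    upvotes: int = 0
    downvotes: int = 0

def needs_review(post, min_votes=20, max_approval=0.3):
    """Flag a post for human review once enough people have voted
    and fewer than 30 percent of them approved of it."""
    total = post.upvotes + post.downvotes
    if total < min_votes:
        return False  # too few votes to judge either way
    return post.upvotes / total < max_approval

posts = [
    Post("Great thread, thanks for sharing!", upvotes=40, downvotes=2),
    Post("You're all pathetic. Delete your accounts.", upvotes=3, downvotes=35),
]

review_queue = [p for p in posts if needs_review(p)]
print([p.text for p in review_queue])  # only the heavily downvoted post is flagged
```

Routing flagged posts to a human queue instead of deleting them automatically also guards against coordinated mass downvoting silencing legitimate speech.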
Featured Image by Jason Howie on Flickr
Attribution 2.0 Generic (CC BY 2.0)