Content Trends that Demand Moderation
Having an online presence is no longer a luxury reserved for tech-savvy companies and global brands. It’s 2021, and by now most organizations realize that a strong online presence is imperative. From local e-commerce marketplaces to worldwide online communities, most brands have a blog or social media channel at the very least.
At the same time, content creators are churning out photos, videos, comments, and more at an unprecedented rate. To say that anyone may publish anything at any time is not an exaggeration. Unfortunately, this includes cyberbullies, online trolls, phishing scammers, and spammers.
And as platforms and chat features become more immediate and accessible than ever, new content trends are emerging that come with their own set of risks. Here are the top five content trends of 2021 so far, and what moderating each one means for the safety of your brand and online community:
1. Memes
The mashup of images with some form of text is nothing new. Memes are as popular as ever as content, and increasingly problematic. While memes can be fun and amusing, not all memes are suitable for all audiences. And although many a meme relies on sarcasm and satire to make a joke, there is a rise in memes that use offensive language or carry underlying tones of discrimination or racism.
A meme used for this purpose will likely be obvious to an experienced human moderator, who is trained to consider the context of an image that incorporates text. It’s this subtlety, however, that makes memes difficult for AI to moderate. AI systems typically break a meme down into its picture and text elements and then analyze each component separately, which can miss a harmful meaning that only emerges when the two are combined.
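To make that concrete, here is a minimal sketch of the decompose-and-analyze approach. The OCR step uses the real pytesseract library (a wrapper around the Tesseract engine, which must be installed separately); `classify_image` and `classify_text` are hypothetical stand-ins for a platform’s actual moderation models, not real APIs.

```python
# Minimal sketch of decompose-and-analyze meme moderation.
from PIL import Image
import pytesseract  # OCR wrapper; requires the Tesseract binary installed


def classify_image(image: Image.Image) -> str:
    """Placeholder for a trained visual-content model."""
    return "unknown"


def classify_text(text: str) -> str:
    """Placeholder for a trained text-moderation model."""
    return "unknown"


def moderate_meme(path: str) -> str:
    image = Image.open(path)

    # Step 1: split the meme into its visual and text components.
    overlay_text = pytesseract.image_to_string(image)

    # Step 2: analyze each component separately.
    image_verdict = classify_image(image)
    text_verdict = classify_text(overlay_text)

    # Step 3: combine the verdicts. This is where the approach struggles:
    # an innocuous picture plus innocuous text can still be offensive in
    # combination, and neither model sees the other's half.
    if "harmful" in (image_verdict, text_verdict):
        return "remove"
    return "approve"
```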
2. Live and In-App Chats
From the customer service rep who is berated by a customer in live chat to the gamer who takes it a little too far and verbally harasses others during in-game chats, in-app chats can do more harm than good if left unmoderated. But simply blocking words that are “bad” isn’t enough anymore. Clever users can convey ill will using seemingly appropriate words and phrases.
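To see why, consider a minimal blocklist filter, sketched below with purely illustrative list contents:

```python
# A naive blocklist filter of the kind the paragraph above argues is no
# longer sufficient. The blocklist contents are purely illustrative.
BLOCKLIST = {"idiot", "loser", "trash"}


def naive_filter(message: str) -> bool:
    """Return True if the message should be blocked."""
    words = (word.strip(".,!?") for word in message.lower().split())
    return any(word in BLOCKLIST for word in words)


print(naive_filter("You absolute idiot"))  # True: caught by the blocklist
print(naive_filter("People like you shouldn't be allowed online"))
# False: plainly hostile, but no individual word is on the list
```

The second message sails through because no single word is “bad.” Catching it requires moderation that evaluates intent and context, not just vocabulary.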
Live unboxing videos, customer service in-app chats, and live-streamed jam sessions are all excellent ways to engage any audience. It’s up to you, however, to go the extra mile to keep the live stream on brand, while also protecting viewers from potentially offensive content posted by other users.
3. Nuance in Speech or Images
If we’ve learned anything over the past year, it’s that unforeseen cultural, political, or religious issues require sensitivity when they emerge. This means that content moderation policies may have to change quickly to address gray areas.
What may have been harmless content six months ago might be offensive in light of current events. In these cases, AI can begin the work of identifying content that is blatantly harmful, while content that falls into gray areas should be escalated to human moderators who can distinguish nuance in speech or images. Even then, it can be difficult for human moderators to decide how to handle content that could easily be read as either supportive (positive) or sarcastic (negative). In these ambiguous cases, clear communication and detailed guidelines become essential.
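One common way to implement this division of labor is a confidence-threshold pipeline: the model’s own score determines what it acts on automatically and what goes to a person. A minimal sketch, with thresholds that are illustrative assumptions:

```python
# Route content by model confidence: automate the obvious cases,
# escalate the gray area. Threshold values are illustrative.
AUTO_REMOVE_THRESHOLD = 0.95   # blatantly harmful: model is very sure
AUTO_APPROVE_THRESHOLD = 0.05  # clearly benign: model is very sure


def route_content(harm_score: float) -> str:
    """Map a model's harm probability to a moderation action."""
    if harm_score >= AUTO_REMOVE_THRESHOLD:
        return "remove"             # AI handles the blatant cases
    if harm_score <= AUTO_APPROVE_THRESHOLD:
        return "approve"
    return "escalate_to_human"      # gray area: nuance needs a person


print(route_content(0.99))  # remove
print(route_content(0.50))  # escalate_to_human
```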
4. Spam and Machine-Generated Content
Insensitive or, worse, ill-intentioned human creators aren’t the only content challenge in 2021. Machine-generated content has become more sophisticated, presenting additional problems for existing platforms.
Malicious organizations are getting better at bypassing a platform’s account verification mechanisms, enabling them to upload content that compromises your audience’s experience. Whether the source is an AI, a spam bot, or a real user, platforms need to filter all content to combat these growing threats and remove damaging content.
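In practice, “filter all content” means the moderation check runs on every upload, no matter how much the platform trusts the account. A minimal sketch, with hypothetical function and field names:

```python
# Moderation runs on every upload, verified account or not.
from dataclasses import dataclass


@dataclass
class Upload:
    account_verified: bool
    content: str


def looks_like_spam(content: str) -> bool:
    """Placeholder for a real spam classifier."""
    return "click here to claim your prize" in content.lower()


def handle_upload(upload: Upload) -> str:
    # Deliberately NOT short-circuiting on account_verified: verified
    # accounts can be compromised or machine-operated, so trust in the
    # account is no substitute for checking the content itself.
    if looks_like_spam(upload.content):
        return "reject"
    return "publish"


print(handle_upload(Upload(account_verified=True,
                           content="Click here to claim your prize!!!")))
# -> reject, even though the account is verified
```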
5. Content in Other Languages
There is a growing need for multilingual content moderation and for AI that recognizes an array of languages, as well as the social contexts of the cultures associated with those languages.
AI, however, can struggle to keep up with the proliferation of foreign-language content. Facebook’s AI moderation, for example, reportedly cannot interpret many of the languages used on the site. What’s more, Facebook’s human moderators don’t speak the languages of some of the markets the company has moved into. The result is that users in some countries are more vulnerable to harmful content.
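A first step toward closing that gap is routing content by language before moderating it, so nothing silently defaults to an English-only model. Here is a minimal sketch using the langdetect package; the model and queue names are illustrative assumptions:

```python
# Detect the language first, then route to a language-specific model
# or, when none exists, to human review rather than an English default.
from langdetect import detect  # pip install langdetect

SUPPORTED = {"en": "english_model", "es": "spanish_model"}


def route_by_language(text: str) -> str:
    """Send content to a language-specific queue, or to human review."""
    lang = detect(text)  # e.g. "en", "es"; can vary on very short strings
    if lang in SUPPORTED:
        return SUPPORTED[lang]
    # No model for this language: the safe fallback is a human queue,
    # ideally staffed by native speakers for that market.
    return "human_review_queue"


print(route_by_language("This looks perfectly fine"))  # english_model
print(route_by_language("Esto parece estar bien"))     # spanish_model
```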