PowerReviews Stands Behind 100% Human Moderation

It’s no surprise that, with the advancements in modern technology, there’s a constant push to have technology replace what once required human interaction. While these advancements are a great win in some areas of software services, here at PowerReviews we stand behind our commitment to have our human moderation team review 100% of the user-generated content (UGC) collected by our clients: reviews, questions, answers, images, and videos. Though we use anti-fraud technology and profanity filters, those technologies don’t replace human moderation; the two complement each other.

In the business of supporting UGC for a brand or retailer, moderation is key not only to maintaining the authenticity of that content, but also to its relevance and appropriateness. This is what builds consumer trust, and ultimately turns browsers into buyers.

Machine Learning Isn’t Enough
Machine-learning moderation is becoming a trend in the UGC collection space. In practice, it means replacing a properly trained human moderator with a model that has been run through trial-and-error training to “learn” how to identify irrelevant or inappropriate content. While this may be acceptable to some clients, vendors currently using machine learning in place of human moderation may not be disclosing that fact to their clients. Every piece of machine-moderated content carries the risk of a significant moderation slip-up reaching the consumer, and it takes just one bad shopping experience on your site to lose that consumer to a competitor.

Context is Key
When we emphasize the importance of human moderators, training them to understand context is our main goal. Of course, we use sophisticated profanity filters that catch everyday obscenities and common inappropriate slang. But after the filter, the human moderator plays a significant role in judging the actual context of content that might be inappropriate in one setting, but completely relevant and appropriate in another.

Take, for example, the word “screw.” Given the many home improvement clients PowerReviews supports, this word commonly appears in customer reviews. While a keyword filter might mistakenly reject all content containing “screw” as profane, our human moderators can understand how the word is being used and determine whether it appears in a product-relevant way.
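To make the false-positive problem concrete, here is a minimal sketch of a naive keyword-based filter. The blocklist and sample reviews are hypothetical illustrations, not PowerReviews’ actual filter or data:

```python
# Minimal sketch of a naive keyword-based profanity filter.
# The blocklist and the sample reviews below are hypothetical,
# not PowerReviews' actual filter or data.
import re

BLOCKLIST = {"screw", "crap"}  # hypothetical blocked terms

def naive_filter(review: str) -> bool:
    """Return True if the review would be auto-rejected."""
    words = re.findall(r"[a-z']+", review.lower())
    return any(word in BLOCKLIST for word in words)

reviews = [
    "The mounting screw stripped after one use.",  # product-relevant usage
    "Screw this product, it broke on day one.",    # dismissive usage
]

for review in reviews:
    print(naive_filter(review), "-", review)

# Both reviews are flagged identically (True), even though the first
# uses "screw" in a perfectly product-relevant way. Telling the two
# apart requires context, which is where the human moderator comes in.
```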

Uncovering Actionable Insights
Not only does 100% human moderation make it far more likely that the true context of a reviewer’s comments is weighed against our moderation standards, it also gives the moderator the opportunity to tag deeper insights. Along with our standard moderation process of approving or rejecting a review, we have trained our team to dig deeper into reviewers’ comments and identify common themes. These can include recurring product flaws, customers’ experiences with a product, ideas for improving products, and positive commentary that could be repurposed in a brand’s marketing material or storefront displays.
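As a rough sketch of what recording such insight tags could look like in data terms (the field names and tag vocabulary here are illustrative assumptions, not PowerReviews’ actual schema):

```python
# Illustrative sketch of recording a moderation decision alongside
# insight tags. Field names and tag values are assumptions made for
# illustration, not PowerReviews' actual schema.
from dataclasses import dataclass, field
from enum import Enum

class Decision(Enum):
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class ModerationResult:
    review_id: str
    decision: Decision
    # Free-form themes spotted by the human moderator.
    insight_tags: list[str] = field(default_factory=list)

result = ModerationResult(
    review_id="r-1024",  # hypothetical review ID
    decision=Decision.APPROVED,
    insight_tags=["recurring_flaw:hinge", "marketing_quote_candidate"],
)
print(result)
```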

PowerReviews is certainly not against exploring more efficient ways of moderating content for our wide-ranging client base. That said, it’s important to know there is a difference between being more efficient and putting your site at risk. So ask yourself: is that risk worth it to you?

Kristal Akhavan

Kristal has over six years of experience managing user-generated content, both from an operational and a client-facing perspective. She currently manages onboarding and the overall performance of the PowerReviews moderation team, as well as the fraud management system and product development on behalf of the Content Operations team.