Remotehey

Work anywhere, Live anywhere

Odixcity Consulting

Content Moderator (Trust & Safety Specialist)

Spain / Posted

Job Title: Content Moderator (Trust & Safety Specialist)

Location: Remote (Worldwide)

Job Summary: The Content Moderator is responsible for reviewing and monitoring user-generated content to ensure compliance with platform policies, legal requirements, and community standards. This role plays a vital part in maintaining a safe, respectful, and engaging online environment by identifying and removing content that violates guidelines, including abuse, harassment, hate speech, misinformation, graphic material, and other harmful content.

Responsibilities

  • Review and act on reported content, including text, images, and videos, ensuring it meets platform guidelines. Focus will be on high-priority queues and edge cases that require human judgment.
  • Monitor daily queues to identify new patterns of abuse (e.g., new spam techniques, coordinated hate campaigns) and escalate them to the Policy team immediately.
  • Provide feedback on moderation tool efficiency. Suggest changes to workflows that can increase review speed without sacrificing accuracy.
  • Maintain a high accuracy rate (95%+) on all moderation decisions. Participate in calibration sessions with the team to ensure consistency in applying policies.
  • Provide constructive feedback to Policy teams when guidelines are unclear or conflict with real-world context, helping to refine the rulebook for thousands of moderators.
  • Investigate cases where content was removed or accounts were suspended, making final determinations on reinstatement requests with a focus on fairness and due process.
  • Serve as a designated responder during “red alert” situations, such as graphic live-streamed events or coordinated harassment campaigns.

Requirements

  • Minimum of 2 years of experience in Content Moderation, Trust & Safety Operations, or Community Management for a major tech/social media platform.
  • Ability to spot subtle violations that automated systems miss (e.g., hate symbols hidden in images).
  • Comfortable using moderation tools (e.g., Hive, Besedo, Salesforce) and Google Workspace.
  • Experience handling spikes in content volume during global events or viral challenges.
  • High level of emotional fortitude. Must be comfortable reviewing disturbing content (violence, hate speech, adult content) and have proven strategies for digital wellness.
  • Deep understanding of regional nuances, cultural sensitivities, and historical contexts. Ability to distinguish between hate speech and protected political speech, or between violent extremism and documentary/news content.
  • Proven track record of maintaining quality metrics while processing a high volume of content (e.g., 80–100+ pieces per hour). Ability to stay focused during repetitive tasks without losing attention to detail.