Trust & Safety Content Annotation Analyst
at Discord
San Francisco Bay Area
Discord is used by over 200 million people every month for many different reasons, but there’s one thing that nearly everyone does on our platform: play video games. Over 90% of our users play games, spending a combined 1.5 billion hours playing thousands of unique titles on Discord each month. Discord plays a uniquely important role in the future of gaming. We are focused on making it easier and more fun for people to talk and hang out before, during, and after playing games.
Discord seeks a Trust & Safety Content Annotation Analyst to strengthen our content moderation systems so they enforce our Community Guidelines more accurately and effectively at scale. Reporting to our Core Initiatives team, you'll annotate user-reported content to train machine learning models that detect harmful behavior. This role requires flexibility across diverse policy domains, so the ideal candidate balances analytical precision with good judgment and a calm demeanor, even when handling sensitive or potentially distressing content. You'll apply operational rigor to audit ML outputs, investigate the root causes of policy interpretation errors, and communicate your findings to stakeholders across Trust & Safety. Through this work, you'll expand your knowledge of policy areas and safety processes while supporting responsible, large-scale user protection.
This position is temporary.
What you'll be doing
- Annotate user-reported content with precision and consistency to build high-quality training datasets that capture the nuance ML models need to make accurate decisions.
- Quickly absorb internal policy interpretations and apply them consistently across multiple harm categories, including social engineering, threats, graphic content, teen safety, and harassment.
- Work with policy teams to navigate edge cases and ambiguous content, ensuring your annotations reflect current Community Guidelines and platform context.
- Audit ML model decisions to spot misclassification patterns, then investigate why automated and human judgments diverge.
- Partner with ML engineers and policy stakeholders to share insights on model performance, propose new annotation frameworks, and flag content categories requiring stronger detection.
What you should have
- 2+ years of experience in trust & safety or policy work at an online social platform.
- A track record of strong judgment and adaptability across the broad range of trust & safety harm types covered by our Community Guidelines.
- The ability to work within structured policy taxonomies and annotation frameworks while maintaining consistency: you'll review a high volume of content, so sustained focus and the ability to move through a backlog quickly are crucial.
- A calm, resilient demeanor when handling sensitive or potentially distressing content.
- A strong curiosity about online culture and communication, and the nuances of online speech.
- Strong communication skills to document annotation rationales and convey findings and recommendations to a wide range of stakeholders, including policy, quality assurance, and machine learning engineering teams.
- The ability to move between specific annotation decisions and broad operational trends, evidenced by previous exposure to quality assurance, root cause analysis, process improvement, or operational excellence initiatives.
Bonus points
- Hands-on experience with data annotation or dataset creation for machine learning applications.
- Familiarity with prompt engineering and iterating on large language models (LLMs).
- Familiarity with Discord or similar community-based platforms as a user or moderator.
- Experience working on globally distributed, hybrid work teams.
Candidates must reside in or be willing to relocate to the San Francisco Bay Area (Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Solano, and Sonoma counties).
The US hourly rate range for this full-time position is $51.92 to $58.40 + benefits. Our pay ranges are determined by role and level. Within the range, individual pay is determined by additional factors, including job-related skills, experience, and relevant education or training. Please note that the compensation details listed in US role postings reflect base pay only, and do not include benefits.
Why Discord?
We're a multiplatform, multigenerational, and multiplayer platform that helps people deepen their friendships around games and shared interests. We believe games give us a way to have fun with our favorite people, whether listening to music together or grinding in competitive matches for diamond rank. Join us in our mission! Your future is just a click away!
Discord is committed to inclusion and providing reasonable accommodations during the interview process. We want you to feel set up for success, so if you are in need of reasonable accommodations, please let your recruiter know.
Please see our Applicant and Candidate Privacy Policy for details regarding Discord's collection and usage of personal information relating to the application and recruitment process.
