AI Safety & Responsibility Policy Manager

🔒 Confidential Employer
Posted 3 May 2026
LOCATION
Remote
TYPE
Full-time
LEVEL
Mid-Senior level
SALARY
£200,000 / year
CATEGORY
Technology
This employer holds a UK Home Office sponsor licence; sponsorship for this specific role is at the employer's discretion

SKILLS

Content Policy · Trust & Safety Policy · Content Moderation · Machine Learning · Written Communication · Data Analysis · Policy Drafting · LLM Prompt Engineering

FULL DESCRIPTION

AI Safety & Responsibility Policy Manager

[Employer hidden — view at passion-project.co.uk] - Remote - Full-time

Compensation: $150K – $200K

About the role

As [Employer hidden]’s products evolve and become increasingly capable, and as our user base expands, the need for clear and accurate content policies has never been greater. We're looking for a Policy Manager to own and evolve the policies that govern what our AI systems can and cannot do—across our consumer products, enterprise offerings, and third-party model integrations.

This role will own the creation, iteration, maintenance, and implementation of [Employer hidden]’s content policies, working to capture genuine harms without unnecessarily blocking legitimate use cases. As we invest more heavily in LLM-based moderation, the quality of our policy will increasingly determine the quality of our moderation, and this role ensures that policy evolves along with the product and surrounding environment.

What you’ll do

  • Own and maintain [Employer hidden]'s content policies, balancing user safety, creative expression, and operational feasibility
  • Translate policies into LLM prompts and continuously iterate to drive accuracy improvements
  • Track shifts in cultural and market norms and new use cases, and continuously evaluate and update policies accordingly
  • Build frameworks for ongoing policy review, ensuring policy remains nimble, accurate, and appropriate for [Employer hidden]’s user base
  • Serve as the policy subject matter expert for key internal partners, particularly our enterprise enablement team
  • Translate [Employer hidden]’s policies into clear documentation for internal teams, enterprise and API partners, and end users
  • As part of a small, collaborative AI Safety and Responsibility team, contribute to work across the broader team as priorities and needs evolve

What you’ll need

  • 5+ years of experience in content policy, trust & safety policy, or a closely related field at a technology company
  • Strong understanding of content moderation systems
  • Hands-on experience using machine learning systems for content moderation, policy enforcement, or risk assessment
  • Excellent written communication skills, including the ability to translate nuanced policy positions into clear, functional documentation
  • Strong ability to use data/statistics to inform policy decisions
  • Comfort with ambiguity and a track record of making principled decisions in fast-moving environments
  • Ability to act as a self-starter, taking initiative to identify opportunities to improve or build on processes and work products
  • Collaborative working style with the ability to influence cross-functional teams without direct authority

Nice to Have

  • Familiarity with generative AI products, including video, image, or avatar-based applications
  • Experience using LLMs for policy drafting and/or enforcement
  • Experience in a high-growth startup environment where T&S/policy infrastructure was being built from scratch

Working at [Employer hidden]

Great things come from great teams. We’d love to hear from you.

We’re committed to creating a space where our employees can bring their full selves to work and have equal opportunity to succeed. So regardless of race, gender identity or expression, sexual orientation, religion, origin, ability, age, or veteran status, if joining this mission speaks to you, we encourage you to apply.
