Research Assistant - Human Influence

🔒 Confidential Employer
Posted 7 May 2026
LOCATION
London
TYPE
Contract
LEVEL
Entry-level
SALARY
£75,000 / year
CATEGORY
Science & Research
This employer holds a UK Home Office sponsor licence; sponsorship for this specific role is at the employer's discretion

SKILLS

Python R Statistical Modeling Experimental Design Machine Learning Natural Language Processing Communication AI Safety Awareness

FULL DESCRIPTION

Research Assistant - Human Influence

London, UK - [Employer hidden — sign up to reveal] - 6-month fixed-term contract - Hybrid - £65,000–£75,000 per annum

About [Employer hidden — sign up to reveal]

[Employer hidden — sign up to reveal] is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government with direct lines to No. 10, and we work with frontier developers and governments globally. We're here because governments are critical for advanced AI going well, and UK [Employer hidden — sign up to reveal] is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

Deadline: Monday 1st June 2026, end of day, anywhere on Earth.

Team Description

The Human Influence team studies when, why, and how frontier AI systems influence human attitudes and behaviour. The team's mandate is to build a rigorous, world-class evidence base for the safe and responsible development of frontier AI. We measure the impacts of frontier AI systems on human users to identify risks to human agency and wellbeing, and we develop mitigation strategies. This includes research on persuasion, manipulation, deception, advice-giving, theory of mind, anthropomorphism, sycophancy, and socioaffective human–AI relationships.

Our team includes top technical talent from academia and frontier AI companies. Our projects combine methods from computational social science, AI safety and security, cognitive science, behavioural science, computer science, machine learning, and data science. Many of our projects involve conducting careful and rigorous human–AI interaction experiments and randomised controlled trials (RCTs).

The role offers:

  • Ability to run large-scale RCTs
  • Access to API credits for autograders/agentic workflows and large compute budgets
  • Opportunity to work with world-class talent
  • Opportunity to lead-author or co-author publications

As an example of our work, we recently completed the largest-ever study on the persuasive capabilities of conversational AI (Science publication), a large-scale study on how people use and follow personal advice from AI chatbots (arXiv), and a longitudinal study on how anthropomorphic AI facilitates human–AI relationship building (arXiv).

Role Description

Successful candidates will work with our Research Scientists to design and run studies that answer these important questions. The role is particularly suitable for candidates with an interest in pursuing a research career (e.g. recently graduated MSc students or early-stage PhD students). We encourage applications from candidates who are excited about this opportunity, but who may not meet all the stated criteria.

We are especially excited about candidates with experience in one or more of these areas: computer science, machine learning, AI; computational social science; data science (especially natural language processing); human–computer interaction; psychology; cognitive science.

This is a full- or part-time, fixed-term contract (6 months) based in London.

Required Skills and Experience

  • Completed bachelor's degree in a relevant field
  • Knowledge about frontier models and how they are trained and evaluated
  • Experience planning and conducting human experiments (ideally longitudinal and/or large-scale online studies)
  • Strong coding skills (in Python and/or R)
  • Strong knowledge of advanced statistical modelling methods (e.g. multilevel/hierarchical/mixed regression models)
  • Strong verbal and written communication, experience working on a collaborative research team, and interpersonal skills
  • Demonstrable interest in the societal impacts of AI

Desired Skills and Experience

  • Published work related to societal impacts of AI, evaluation of AI systems, or relevant work in a related field
  • Experience working on model or system evaluations and other AI safety projects
  • Experience evaluating or training multimodal AI models
  • Experience fine-tuning language models
  • Experience with reinforcement learning (especially RLHF or reward modelling)
  • Enrolled in an MSc/PhD (or equivalent years of industry experience) in AI safety, computer science, data science, social or political science, economics, cognitive science, criminology, security studies, or another relevant field
  • Front-end software engineering skills to build UI for studies with human participants

Salary and Benefits

Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000, comprising a base salary plus a technical allowance (take-home salary = base + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary. This role sits outside the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.

The full range of salaries: Level 3: £65,000–£75,000; Level 4: £85,000–£95,000; Level 5: £105,000–£115,000; Level 6: £125,000–£135,000; Level 7: £145,000.

Benefits include: impact you couldn't have anywhere else, resources & access (pre-release access to frontier models, compute), growth & autonomy (5 days off, learning stipends, conference funding), life & family (modern central London office, hybrid working, 25 days annual leave, 8 public holidays, generous parental leave, 28.97% pension contribution, cycling to work discounts, etc.).

Nationality and Eligibility

We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.

Security Clearance

Successful candidates must undergo a criminal record check and obtain Baseline Personnel Security Standard (BPSS) clearance. Additionally, there is a strong preference for candidates eligible for Counter-Terrorist Check (CTC) clearance. Some roles may require higher levels of clearance.

Application Process

Apply using the Greenhouse application form on this page. Upload your resume/CV and answer the required questions. Please note our policy on the use of AI in applications: all examples must be truthful and drawn from your own experience.
