
This position has been filled and is no longer accepting applications.


Staff AI Research Scientist - Data Quality

Handshake
🇺🇸 San Francisco, California · Hybrid · $350K–$420K/yr · Posted 5 months ago

Summary

Lead research on data quality frameworks for LLM alignment and post-training techniques, shaping the future of AI through innovative data systems and evaluation methods at Handshake AI.

Key Responsibilities: Design and implement systems for identifying noisy data in human feedback datasets, lead cross-functional collaboration to translate research into production-ready RLHF and DPO pipelines, and mentor junior researchers on data-centric evaluation and reward modeling.
Skills & Tools: Expertise in machine learning, NLP, RLHF, Python, data quality infrastructure, and strong engineering proficiency; excellent communication and leadership skills are essential.
Qualifications: PhD or equivalent in relevant field with 5+ years of experience including leadership in LLM post-training or data quality research; familiarity with data valuation and AI safety is a plus.
Location: Hybrid in San Francisco, California, United States
Compensation: $350,000 – $420,000/year

Job Description





About Handshake AI

Handshake is building the career network for the AI economy. Our three-sided marketplace connects 18 million students and alumni, 1,500+ academic institutions across the U.S. and Europe, and 1 million employers to power how the next generation explores careers, builds skills, and gets hired.

Handshake AI is a human data labeling business that leverages the scale of the largest early career network. We work directly with the world’s leading AI research labs to build a new generation of human data products. From PhDs in physics to undergrads fluent in LLMs, Handshake AI is the trusted partner for domain-specific data and evaluation at scale.

This is a unique opportunity to join a fast-growing team shaping the future of AI through better data, better tools, and better systems—for experts, by experts.

Now’s a great time to join Handshake. Here’s why:

  • Leading the AI Career Revolution: Be part of the team redefining work in the AI economy for millions worldwide.
  • Proven Market Demand: Deep employer partnerships across Fortune 500s and the world’s leading AI research labs.
  • World-Class Team: Leadership from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir, just to name a few.
  • Capitalized & Scaling: $3.5B valuation from top investors including Kleiner Perkins, True Ventures, Notable Capital, and more.

About the Role

As a Staff Research Scientist, you will play a pivotal role in shaping the future of large language model (LLM) alignment by leading research and development at the intersection of data quality and post-training techniques such as RLHF, preference optimization, and reward modeling.

You will operate at the forefront of model alignment, with a focus on ensuring the integrity, reliability, and strategic use of supervision data that drives post-training performance. You’ll set research direction, influence cross-functional data standards, and lead the development of scalable systems that diagnose and improve the data foundations of frontier AI.

You will:

  • Lead high-impact research on data quality frameworks for post-training LLMs — including techniques for preference consistency, label reliability, annotator calibration, and dataset auditing.
  • Design and implement systems for identifying noisy, low-value, or adversarial data points in human feedback and synthetic comparison datasets.
  • Drive strategy for aligning data collection, curation, and filtering with post-training objectives such as helpfulness, harmlessness, and faithfulness.
  • Collaborate cross-functionally with engineers, alignment researchers, and product leaders to translate research into production-ready pipelines for RLHF and DPO.
  • Mentor and influence junior researchers and engineers working on data-centric evaluation, reward modeling, and benchmark creation.
  • Author foundational tools and metrics that connect supervision data characteristics to downstream LLM behavior and evaluation performance.
  • Publish and present research that advances the field of data quality in LLM post-training, contributing to academic and industry best practices.

Desired Capabilities

  • PhD or equivalent experience in machine learning, NLP, or data-centric AI, with a track record of leadership in LLM post-training or data quality research.
  • 5+ years of academic or industry experience following the PhD.
  • Deep expertise in RLHF, preference data pipelines, reward modeling, or evaluation systems.
  • Demonstrated experience designing and scaling data quality infrastructure — from labeling frameworks and validation metrics to automated filtering and dataset optimization.
  • Strong engineering proficiency in Python, PyTorch, and ecosystem tools for large-scale training and evaluation.
  • A proven ability to define, lead, and execute complex research initiatives with clear business and technical impact.
  • Strong communication and collaboration skills, with experience driving strategy across research, engineering, and product teams.

Extra Credit

  • Experience with data valuation (e.g. influence functions, Shapley values), active learning, or human-in-the-loop systems.
  • Contributions to open-source tools for dataset analysis, benchmarking, or reward model training.
  • Familiarity with evaluation challenges such as annotation disagreement, subjective labeling, or multilingual feedback alignment.
  • Interest in the long-term implications of data quality for AI safety, governance, and deployment ethics.

Perks

Handshake delivers benefits that help you feel supported—and thrive at work and in life.

The benefits below apply to full-time US employees.

🎯 Ownership: Equity in a fast-growing company

💰 Financial Wellness: 401(k) match, competitive compensation, financial coaching

🍼 Family Support: Paid parental leave, fertility benefits, parental coaching

💝 Wellbeing: Medical, dental, and vision, mental health support, $500 wellness stipend

📚 Growth: $2,000 learning stipend, ongoing development

💻 Remote & Office: Stipends for home office setup, internet, commuting, and free lunch/gym in our SF office

🏝 Time Off: Flexible PTO, 15 holidays + 2 flex days, winter #ShakeBreak where our whole office closes for a week!

🤝 Connection: Team outings & referral bonuses

Explore our mission, values, and comprehensive US benefits at joinhandshake.com/careers.

Other Open Roles at Handshake

Cartographer/Digital Cartographer - AI Trainer · Handshake · 🇺🇸 United States · $125/hr · Posted 2 weeks ago