
Research Scientist, Human AI Interaction and Evaluations

Handshake
In-Person – San Francisco, CA · $180K–$280K/yr

Summary

Lead research on human-AI interaction and task-level benchmarking at Handshake, defining how AI systems support real professional work through jobs-to-be-done frameworks and empirical evaluation methods. Shape AI evaluation standards at scale across Fortune 500 companies, educational institutions, and AI labs.

Key Responsibilities: Design and conduct empirical studies measuring human activity in AI-mediated workflows, develop benchmarks for AI-as-collaborator systems, and establish methods for assessing task performance, quality, and economic impact. Lead strategy for professional-domain AI benchmarks, publish research papers, and create open-source evaluation tools and datasets.
Skills & Tools: Expertise in human-computer interaction (HCI), large language models (LLMs), experimental design, and task-level AI evaluation with strong ability to translate qualitative work understanding into measurable benchmarks. Strong publication record, Python/data analysis proficiency, and understanding of labor economics and productivity measurement.
Qualifications: PhD in HCI, computer science, psychology, or related field with 3+ years of research experience in AI evaluation, human-centered design, or similar areas. Published research in top-tier venues and demonstrated experience designing and conducting empirical studies preferred.
Location: San Francisco, California
Compensation: $180K–$280K/yr (estimated)

Job Description

About Handshake

Handshake is the career network for the AI economy. 20 million knowledge workers, 1,600 educational institutions, 1 million employers (including 100% of the Fortune 50), and every foundational AI lab trust Handshake to power career discovery, hiring, and upskilling, from freelance AI training gigs to first internships to full-time careers and beyond. This unique value is leading to unparalleled growth; in 2025, we tripled our ARR at scale.

Why join Handshake now:

  • Shape how every career evolves in the AI economy, at global scale, with impact your friends, family and peers can see and feel

  • Work hand-in-hand with world-class AI labs, Fortune 500 partners and the world’s top educational institutions

  • Join a team with leadership from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir, among others

  • Build a massive, fast-growing business with billions in revenue

About the Role

As a Research Scientist, Human–AI Interaction, you will play a pivotal role in defining how AI systems support real human work by leading research at the intersection of Human–Computer Interaction (HCI), Large Language Models (LLMs), and task-level benchmarking.

You will operate at the frontier of human-centered AI evaluation, with a focus on understanding what people actually do to accomplish meaningful work, and how AI systems change, accelerate, or reshape that activity. Your research will define jobs-to-be-done benchmarks, comparative evaluation frameworks, and empirical methods for measuring human effort, time, quality, and outcomes when working with AI copilots. The Handshake AI platform is also an interface used by thousands of the world's top subject matter experts to evaluate AI systems, and it raises numerous interesting HCI and human-in-the-loop AI research questions that will drive large business impact.

You’ll set research direction, establish standards for measuring human activity in AI-mediated workflows, publish papers and open-source code, and lead the development of rigorous, scalable benchmarks that connect human work, AI assistance, and real economic value.

You will:

  • Lead high-impact research on jobs-to-be-done benchmarks for AI systems, including:

    • Defining task taxonomies grounded in real professional and economic activities

    • Identifying what constitutes meaningful task completion, quality, and success

    • Translating qualitative work understanding into measurable, repeatable benchmarks

  • Develop methods to measure human activity in AI-mediated workflows

  • Design benchmarks that assess AI as a collaborator or copilot, rather than as an autonomous agent or basic Q&A system

  • Design and run empirical studies of how people use AI to solve tasks, including:

    • Controlled experiments and field studies measuring task performance

    • Instrumentation for capturing fine-grained interaction traces and outcomes

  • Drive strategy for professional-domain AI benchmarks, focusing on:

    • Understanding domain-specific workflows (e.g., analysis, writing, planning, coordination)

    • Grounding benchmark design in how work is actually performed, not idealized tasks

  • Build and prototype AI systems and evaluation infrastructure to support research and data production, including:

    • LLM-powered copilots and experimental tools used for task-level measurement

    • Benchmark harnesses that evaluate both model behavior and human outcomes

    • Data pipelines for analyzing human–AI interaction at scale

    • The human-in-the-loop experience that enables Handshake fellows to produce effective evaluations and training data for frontier models through structured UI/UX interactions with those models

Desired Capabilities

  • PhD or equivalent experience in Human–Computer Interaction, Computer Science, Cognitive Science, or a related field, with a strong emphasis on empirical evaluation of interactive AI/LLM systems.

  • 3+ years of academic or industry research experience post-PhD, including leading complex research initiatives and analyzing data from a real AI product.

  • Strong publication record, with demonstrated impact in top-tier AI (NeurIPS, ICML, ICLR, ACL) and HCI (CHI) venues.

  • Deep expertise in experimental design and measurement, particularly for:

    • Task performance and human activity

    • Comparative evaluation frameworks

    • Mixed-methods research grounded in real-world behavior

  • Strong technical and coding skills, including:

    • Python and data analysis / ML tooling

    • Experience building experimental systems and benchmark infrastructure

    • Familiarity working with LLM APIs, agent frameworks, or AI-assisted tooling

  • Proven ability to define and lead research agendas that connect human work, AI capability, and business or economic impact.

  • Strong collaboration skills, especially working across research, engineering, product, and UXR teams.

Extra Credit

  • Experience developing benchmarks or evaluation frameworks for human–AI systems or AI-assisted productivity tools.

  • Prior work on copilot-style systems, agentic workflows, or automation of professional tasks.

  • Familiarity with workplace studies, CSCW, or socio-technical systems research.

  • Contributions to open-source tools, datasets, or benchmarks related to task-level evaluation.

  • Interest in how AI reshapes labor, productivity, and the future of work.

Perks

Handshake delivers benefits that help you feel supported and thrive at work and in life.

The benefits below are for full-time US employees.

🎯 Ownership: Equity in a fast-growing company

💰 Financial Wellness: 401(k) match, competitive compensation, financial coaching

🍼 Family Support: Paid parental leave, fertility benefits, parental coaching

💝 Wellbeing: Medical, dental, and vision, mental health support, $500 wellness stipend

📚 Growth: $2,000 learning stipend, ongoing development

💻 Remote & Office: Internet, commuting, and free lunch/gym in our SF office

🏝 Time Off: Flexible PTO, 15 holidays + 2 flex days, winter #ShakeBreak where our whole office closes for a week!

🤝 Connection: Team outings & referral bonuses

Explore our mission, values, and comprehensive US benefits at joinhandshake.com/careers.
