This position has been filled and is no longer accepting applications.

Red Teaming Domain Expert - AI Training

Handshake
🇺🇸 United States · Remote · $40–$60/hr · 4mo ago

Summary

The Red Teaming Domain Expert stress-tests AI models to identify vulnerabilities and ensure safety, crafting adversarial prompts that expose weaknesses in guardrails and model behavior.

Key Responsibilities: Craft creative adversarial prompts to challenge AI safety filters, discover evasion techniques, document experiments, and collaborate with engineers to strengthen model defenses while exploring edge cases that provoke harmful or incorrect outputs.
Skills & Tools: Strong hands-on experience with multiple LLMs, creative adversarial problem-solving, clear written communication, familiarity with jailbreak or evasion techniques, and ability to work with disturbing or graphic content.
Qualifications: No technical background required; preferred qualifications include prior red teaming experience, background in creative fields like writing or gaming, and familiarity with digital security concepts.
Location: Fully remote, United States
Compensation: $40–$60/hour

Job Description

Fast Facts

Join Handshake AI as a Red Teaming Domain Expert on a contract basis, where you'll creatively stress-test AI models to ensure their robustness and safety by identifying vulnerabilities through adversarial prompts.

Responsibilities: Crafting creative prompts to challenge AI guardrails, documenting experiments, collaborating with engineers and researchers, and exploring edge cases that provoke harmful or incorrect outputs.

Skills: Required skills include strong experience with multiple LLMs, creative problem-solving, clear written communication, ability to handle disturbing content, and a deep curiosity about AI.

Qualifications: Preferred qualifications include prior red teaming experience, a background in creative fields like writing or gaming, and familiarity with digital security concepts.

Location: Fully remote, based in the United States, with variable time commitments of 10–20 hours per week.

Compensation: $40–$60/hour




About Handshake AI

Handshake is building the career network for the AI economy. Our three-sided marketplace connects 18 million students and alumni, 1,500+ academic institutions across the U.S. and Europe, and 1 million employers to power how the next generation explores careers, builds skills, and gets hired.

Handshake AI is a human data labeling business that leverages the scale of the largest early career network. We work directly with the world’s leading AI research labs to build a new generation of human data products. From PhDs in physics to undergrads fluent in LLMs, Handshake AI is the trusted partner for domain-specific data and evaluation at scale.

This is a unique opportunity to join a fast-growing team shaping the future of AI through better data, better tools, and better systems—for experts, by experts.

Now’s a great time to join Handshake. Here’s why:

  • Leading the AI Career Revolution: Be part of the team redefining work in the AI economy for millions worldwide.
  • Proven Market Demand: Deep employer partnerships across Fortune 500s and the world’s leading AI research labs.
  • World-Class Team: Leadership from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir, just to name a few.
  • Capitalized & Scaling: $3.5B valuation from top investors including Kleiner Perkins, True Ventures, Notable Capital, and more.

About the Role

As a Red Teamer, you will stress-test AI models by intentionally trying to break them. Instead of checking whether an answer is correct, you’ll design creative, adversarial prompts that expose vulnerabilities—unsafe content, bias, broken guardrails, or unexpected behaviors. Your work directly supports AI safety and model robustness for leading research labs.

This role requires creativity, curiosity, and an ability to think like an adversary while operating with strong ethical judgment. No technical background is required. What matters most is how you think, how you write, and how you problem-solve.

This is a remote contract position with variable time commitments, typically 10–20 hours per week.


Day-to-day responsibilities include

  • Crafting creative prompts and scenarios to intentionally stress-test AI guardrails
  • Discovering ways around safety filters, restrictions, and defenses
  • Exploring edge cases to provoke disallowed, harmful, or incorrect outputs
  • Documenting experiments clearly, including what you tried and why
  • Reviewing and refining adversarial prompts generated by Fellows
  • Collaborating with engineers, tutors, and researchers to share findings and strengthen defenses
  • Working with potentially disturbing content, including violence, explicit topics, and hate speech
  • Staying current on jailbreaks, attack methods, and evolving model behaviors

Desired Capabilities

  • Strong hands-on experience using multiple LLMs
  • Intuition for crafting prompts; familiarity with jailbreak or evasion techniques is a plus
  • Creative, adversarial problem-solving skills
  • Clear and thoughtful written communication
  • Ability to tolerate emotionally heavy or graphic content
  • Curiosity, persistence, and comfort with frequent failure in experimentation
  • Strong ethical judgment and ability to separate adversarial thinking from personal values
  • Self-directed, collaborative, and comfortable in feedback-heavy environments
  • You go deep into unusual interests (fandoms, niche internet cultures, gaming exploits, Wikipedia rabbit holes, etc.)
  • You come from a creative background (writing, visual art, etc.)
  • You are obsessed with AI and can’t stop talking about it

Extra Credit

  • Prior red teaming, moderation, or adversarial testing experience
  • Background in writing, gaming, improv, or niche internet subcultures
  • Experience documenting complex processes or research
  • Familiarity with safety, trust & safety, or digital security concepts

Additional Information

  • Engagement: Contract, remote, variable time commitment
  • Schedule: Flexibility required, with some evening or weekend availability
  • Location: Fully remote (no visa sponsorship available)
  • Technical Requirements: Personal device running Windows 10 or macOS Big Sur 11.0+ and reliable smartphone access
