Position: AI Red-Teamer — Adversarial AI Testing; English
Type: Hourly contract
Compensation: $50–$111 per hour
Location: Remote
Commitment: Full-time or part-time, flexible project-based engagement
Role Responsibilities
- Conduct adversarial testing of AI models, including jailbreaks, prompt injections, misuse cases, and exploit discovery
- Generate high-quality human evaluation data by annotating failures, classifying vulnerabilities, and flagging systemic risks
- Apply structured red-teaming frameworks, taxonomies, benchmarks, and playbooks to ensure consistent testing
- Produce clear, reproducible documentation such as reports, datasets, and adversarial test cases
- Support multiple customer projects, ranging from LLM safety testing to socio-technical abuse and misuse analysis
- Communicate identified risks and vulnerabilities clearly to technical and non-technical stakeholders
Requirements
- Strong experience in AI red-teaming, adversarial testing, cybersecurity, or socio-technical risk analysis
- Strong ability to probe systems creatively and systematically to uncover failure modes
- Strong experience using structured evaluation frameworks rather than ad-hoc testing
- Strong written communication skills with the ability to clearly document risks and findings
- Comfort working across sensitive content areas such as bias, misinformation, or harmful behaviors
- Ability to work independently in a remote, fast-moving project environment
Application Process (Takes 20 Mins)
- Upload your resume.
- Complete an interview.
- Submit a short form.