Research · Internship · Remote (Global)

AI Security Research Intern

Spend 12 weeks attacking, defending, and instrumenting agent systems alongside our research team, culminating in a publishable threat-model contribution.

What you’ll do

  • Replicate published agent-attack techniques and extend them against our reference stack.
  • Build evaluation harnesses for agent guardrails and policy engines.
  • Co-author a public write-up of your findings.

What we’re looking for

  • Pursuing a degree in CS, security, or a related field.
  • Prior project work in security CTFs, red-teaming, or applied ML safety.
  • Strong writing skills: you'll publish your findings.

Apply for this role

Applications are reviewed weekly. We aim to respond within 7 business days.

We use your application only to evaluate you for this role.