Mindgard is a London-based startup specializing in AI security. 

We’ve spun out from a leading UK university, and our mission is to secure the future of AI against cyber attacks targeting Deep Learning, GenAI, and LLMs. This is an unsolved challenge globally, and we are among the world’s first to offer a solution to this rapidly growing problem.

We’ve raised $4M from an excellent group of investors, released our first product, Mindgard AI Security Labs, and continue to build a team of engineers to join us on our journey.

We’re seeking a Research Scientist who is passionate about working on practical security problems in AI/ML to join our R&D team.

What you will be doing:

  • Design, evaluate, and implement adversarial ML attacks and detection techniques.
  • Collaborate with the R&D and engineering teams to translate adversarial ML techniques into production software for AI red teaming.
  • Uncover ML security threats, analyse data, and identify common features across threats.
  • Engage in research collaboration, publications, and conference attendance.
  • Keep the company updated on state-of-the-art research in adversarial ML.

Who you are:

  • PhD in Computer Science, with a specialism in AI/ML security and/or adversarial ML.
  • An excellent track record of high-quality publications in top AI/ML or cyber security venues. We care about quality over quantity, and about research with industry application.
  • Good programming skills in languages such as Python.
  • Ability to supervise PhD students. We have a cohort of PhD students that you would have the opportunity to work and publish with.
  • Optimism, kindness, and excellent communication skills.
  • Ability to lead and contribute to research projects.

Benefits: 

  • Competitive salary 
  • 33 days vacation 
  • Flexible working options 
  • Learning & development budget 
  • Company equity 