Cyber Security Researcher
About the Team
As AI systems become more advanced, the potential for misuse of their cyber capabilities may pose a threat to the security of organisations and individuals. Cyber capabilities are also a common bottleneck in scenarios across other AI risk areas, such as harmful outcomes from biological and chemical capabilities and from autonomous systems. One approach to better understanding these risks is to conduct robust empirical tests of AI systems, so we can better understand how capable they currently are at performing cyber security tasks.
The AI Security Institute’s Cyber Evaluations Team is developing first-of-its-kind government-run infrastructure to benchmark the progress of advanced AI capabilities in the domain of cyber security. Our goal is to carry out and publish scientific research supporting a global effort to understand the risks and improve the safety of advanced AI systems. Our current focus is on doing this by building difficult cyber tasks that we can measure the performance of AI agents against.
We are building a cross-functional team of cyber security researchers, machine learning researchers, research engineers and infrastructure engineers to help us create new kinds of capability and safety evaluations. To scale up, we require all candidates to be able to evaluate frontier AI systems as they are released.
Job Summary
As a Cyber Security Researcher at AISI your role will range from helping design our overall research strategy and threat model, to working with research and infrastructure engineers to build environments and challenges against which to benchmark the capabilities of AI systems. You may also be involved in coordinating teams of internal and external cyber security experts for open-ended probing exercises to explore the capabilities of AI systems, or with exploring the interactions between narrow cyber automation tools and general purpose AI systems.
Your day-to-day responsibilities could include:
- Designing CTF-style challenges and other methods for automatically grading the performance of AI systems on cyber security tasks.
- Advising ML research scientists on how to analyse and interpret results of cyber capability evaluations.
- Writing reports, research papers and blog posts to share our research with stakeholders.
- Helping to evaluate the performance of general-purpose models when they are augmented with narrow red-teaming automation tools such as Wireshark, Metasploit, and Ghidra.
- Keeping up-to-date with related research taking place in other organisations.
Person Specification
You will need experience in at least one of the following areas:
- Proven experience related to cyber security red-teaming, such as:
- Penetration testing
- Cyber range design
- Competing in or designing CTFs
- Developing automated security testing tools
- Bug bounties, vulnerability research, or exploit discovery and patching
- Communicating the outcomes of cyber security research to a range of technical and non-technical audiences.
This role might be a great fit if:
- You have a strong interest in helping improve the safety of AI systems.
- You are active in the cyber security community and enjoy keeping up to date with new research in this field.
- You have previous experience building or measuring the impact of new automation tools on cyber red-teaming workflows.
Core Requirements
- You should be able to spend at least 4 days per week working with us.
- You should be able to join us for at least 24 months.
- You should be able to work from our office in London (Whitehall) for parts of the week, but we provide flexibility for remote work.
Salary & Benefits
We are hiring individuals at all levels of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full salary ranges are listed below; each comprises a base salary and a technical allowance, plus additional benefits as detailed on this page.
- Level 3 - Total Package £65,000 - £75,000 inclusive of a base salary £35,720 plus additional technical talent allowance of between £29,280 - £39,280.
- Level 4 - Total Package £85,000 - £95,000 inclusive of a base salary £42,495 plus additional technical talent allowance of between £42,505 - £52,505.
- Level 5 - Total Package £105,000 - £115,000 inclusive of a base salary £55,805 plus additional technical talent allowance of between £49,195 - £59,195.
- Level 6 - Total Package £125,000 - £135,000 inclusive of a base salary £68,770 plus additional technical talent allowance of between £56,230 - £66,230.
- Level 7 - Total Package £145,000 inclusive of a base salary £68,770 plus additional technical talent allowance of £76,230.
This role sits outside of the DDaT pay framework because it requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.
There are a range of pension options available which can be found through the Civil Service website.