Quick facts

    • London

Apply by: 2024-08-23

Research Scientist / Research Engineer - Autonomous Systems (AI Safety Institute)

Published 2024-06-24

Job summary

About the AI Safety Institute

The AI Safety Institute is the first state-backed organisation focused on advancing AI safety for the public interest. We launched at the Bletchley Park AI Safety Summit in 2023 because we believe taking responsible action on this extraordinary technology requires a capable and empowered group of technical experts within government.

We have ambitious goals and need to move fast.

  • Develop and conduct evaluations on advanced AI systems. We will characterise safety-relevant capabilities, understand the safety and security of systems, and assess their societal impacts.
  • Develop novel tools for AI governance. We will create practical frameworks and novel methods to evaluate the safety and societal impacts of advanced AI systems, and anticipate how future technical safety research will feed into AI governance.
  • Facilitate information exchange. We will establish clear information-sharing channels between the Institute and other national and international actors. These include stakeholders such as policymakers and international partners.

    Our staff includes senior alumni from OpenAI, Google DeepMind, start-ups and the UK government, and ML professors from leading universities. We are now calling on the world's top technical talent to join us. This is a truly unique opportunity to help shape AI safety at an international level.

    As more powerful models are expected to hit the market over the course of 2024, AISI's mission to push for safe and responsible development and deployment of AI is more important than ever.

    What we value:

  • Diverse Perspectives: We believe that a range of experiences and backgrounds is essential to our success. We welcome individuals from underrepresented groups to join us in this crucial mission.
  • Collaborative Spirit: We thrive on teamwork and open collaboration, valuing every contribution, big or small.
  • Innovation and Impact: We are dedicated to making a real-world difference in the field of frontier AI safety and capability, and we encourage innovative thinking and bold ideas.
  • Our Inclusive Environment: We are building an inclusive culture to make the Department a brilliant place to work where our people feel valued, have a voice and can be their authentic selves. We value difference and diversity, not only because we believe it is the right thing to do, but because it will help us be more innovative and make better decisions.
    Job description

    ABOUT THE TEAM

    The mission of the Autonomous Systems team is to prevent catastrophic risks from autonomous AI.

    The way we do this is by studying the space of potential risks from autonomous systems. We then build tools that measure and forecast this risk by interacting with frontier models. For example, we could be investigating the various ways an autonomous AI could prevent shutdown by exfiltrating its own weights and replicating itself on other hardware. We can then build tools to measure this risk as frontier models keep improving and conduct research into when exactly we believe the risk will present a material danger. Finally, we interact with other teams within the Institute to make sure our research has real-world impacts on AI safety, through interaction with the key labs and policy recommendations.

    The Autonomous Systems Team is looking for exceptionally motivated and talented Research Scientists (RS) and Research Engineers (RE) to help scale up our team focussed on catastrophic risks from autonomous AI. Senior RS and RE positions are available for candidates with the required seniority and experience.

    You will work in one of the following research sub-teams:

  • Agents team. By giving frontier models the power to do things like chain-of-thought reasoning, running Python code, or browsing the internet, models can already accomplish a surprisingly large variety of tasks. To help our other research teams investigate risks from agentic systems, it is therefore vital that we have in-house agentic systems that exceed the state of the art from both academia and open-source frameworks. This is where the agents team steps in: researching and engineering agent systems that outperform publicly available state-of-the-art systems.
  • Self-Improvement. Within the self-improvement team, we study and evaluate risks from uncontrolled self-improvement: the risk that models will become increasingly able to improve their own capabilities continuously and rapidly.
  • Auto-replication. The autonomous replication and adaptation team researches loss-of-control-style risks from autonomous AI replicating itself on other hardware or devices. Within this team you'll study this threat model, collaboratively design appropriate evaluations to measure it, and implement them.
  • Manipulation & Deception. Can we effectively detect when autonomous systems are deceiving or manipulating their human overseers? Within the Manipulation & Deception team you'll drive forward state-of-the-art research on these threat models.

    As a Research Scientist/Engineer, you will work in a small team within one of the above fields. Your team is given a huge amount of autonomy to chase research directions and build evaluations that relate to your team's over-arching threat model. This includes coming up with ways of breaking down the space of risks, as well as designing and building ways to evaluate them. All of this is done within an extremely collaborative environment, where everyone does a bit of everything.
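    To make the agent-scaffolding idea above concrete, here is a minimal, self-contained sketch of the kind of tool-use loop an agents framework is built around: a model proposes an action, the scaffold executes the tool, and the result is fed back as the next observation. Everything here is illustrative and invented for this example (the scripted stand-in model, the `run_python` tool, the loop structure), not AISI's actual framework.

    ```python
    # Hypothetical sketch of an agent scaffold: model -> tool -> observation loop.
    # A scripted function stands in for a frontier model so the example runs offline.
    from dataclasses import dataclass, field

    @dataclass
    class Transcript:
        steps: list = field(default_factory=list)  # record of every model decision

    def run_python(code: str) -> str:
        """Tool: evaluate a Python expression in a restricted namespace."""
        try:
            return repr(eval(code, {"__builtins__": {}}, {}))
        except Exception as exc:
            return f"error: {exc}"

    def scripted_model(observation: str) -> dict:
        """Stand-in for a frontier model: 'reasons', then picks a tool call."""
        if "result" not in observation:
            return {"thought": "I should compute 6 * 7.",
                    "action": "run_python", "input": "6 * 7"}
        return {"thought": "Done.", "action": "finish", "input": observation}

    def agent_loop(model, tools, max_steps=5):
        observation = "task: compute 6 * 7"
        transcript = Transcript()
        for _ in range(max_steps):
            decision = model(observation)
            transcript.steps.append(decision)
            if decision["action"] == "finish":
                return decision["input"], transcript
            result = tools[decision["action"]](decision["input"])
            observation = f"result: {result}"
        return observation, transcript

    answer, transcript = agent_loop(scripted_model, {"run_python": run_python})
    print(answer)  # prints "result: 42"
    ```

    A real scaffold would replace the scripted function with a call to a frontier model, sandbox tool execution properly, and log the transcript for evaluation; the loop shape, though, is the common core.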

    Within your team you will contribute to steering the team�s research direction and finding solutions to complex technical problems. Research Scientists will be expected to help improve the scientific rigour and quality of our research, so that it can be confidently used in influencing the actions of labs and our international partners. Research Engineers will spend most of their time doing collaborative research and writing high-quality research code.

    You'll receive mentorship and coaching from your manager and will regularly interact with world-famous researchers and other incredible staff (including alumni from DeepMind, OpenAI and ML professors from Oxford and Cambridge).

    Person specification

    We are looking for some of the following skills, experience and attitudes; some lean more towards a Research Scientist profile, others towards a Research Engineer profile.

    For more engineer-leaning candidates:

  • Writing production quality code (at least 4 years' experience for a truly exceptional candidate, typically at least 10).
  • Strong track record of designing, shipping, and maintaining complex tech products and/or scientific and academic excellence (e.g. papers at top-tier conferences).
  • Evidence of an exceptional ability to drive progress and build and maintain momentum
  • Improving standards across a team, through mentoring and feedback
  • Working across multiple research or engineering teams and helping to improve technical excellence and engineering culture and/or experience working within a research team that has delivered multiple exceptional scientific breakthroughs, in deep-learning or a related field.
  • Strong written and verbal communication skills
    We are hiring individuals at a range of seniority and experience within this team, including Senior Research Engineer / Research Scientist positions. Calibration on final title, seniority and pay will take place as part of the recruitment process. We encourage all candidates who would be interested in joining to apply.

    The following are also nice-to-have:

  • Extensive Python experience, including understanding the intricacies of the language, the good vs. bad Pythonic ways of doing things and much of the wider ecosystem/tooling.
  • Direct research experience (e.g. a PhD in a technical field and/or spotlight papers at NeurIPS/ICML/ICLR).
  • Experience working with world-class multi-disciplinary teams, including both scientists and engineers (e.g. in a top-3 lab).
  • Acting as a bar raiser for interviews
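    As a tiny illustration of the "Pythonic vs. non-Pythonic" distinction the nice-to-have list alludes to (the function names and data are invented for this example): both functions below are correct, but the second leans on the language's idioms rather than index arithmetic.

    ```python
    # Illustrative only: two correct implementations of the same pairing task.
    def pair_up_unidiomatic(names, scores):
        # Index-based loop: works, but fights the language.
        result = []
        for i in range(len(names)):
            result.append((names[i], scores[i]))
        return result

    def pair_up_pythonic(names, scores):
        # Idiomatic: zip expresses "walk both sequences in lockstep" directly.
        return list(zip(names, scores))

    names = ["ada", "grace"]
    scores = [90, 95]
    assert pair_up_unidiomatic(names, scores) == pair_up_pythonic(names, scores)
    ```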
    Benefits

    The Department for Science, Innovation and Technology offers a competitive mix of benefits including:

  • A culture of flexible working, such as job sharing, homeworking and compressed hours.
  • Automatic enrolment into the Civil Service Pension Scheme, with an average employer contribution of 27%.
  • A minimum of 25 days of paid annual leave, increasing by 1 day per year up to a maximum of 30.
  • An extensive range of learning & professional development opportunities, which all staff are actively encouraged to pursue.
  • Access to a range of retail, travel and lifestyle employee discounts.
  • The Department operates a discretionary hybrid working policy, which provides for a combination of working hours from your place of work and from your home in the UK. The current expectation for staff is to attend the office or non-home based location for 40-60% of the time over the accounting period.