Research Scientist / Research Engineer (AI Safety Institute)

Quick facts

    • Location: London
    • Published: 2024-06-24
    • Apply by: 2024-08-23

Job summary

About the AI Safety Institute

The AI Safety Institute is the first state-backed organisation focused on advancing AI safety for the public interest. We launched at the Bletchley Park AI Safety Summit in 2023 because we believe taking responsible action on this extraordinary technology requires a capable and empowered group of technical experts within government.

We have ambitious goals and need to move fast.

  • Develop and conduct evaluations on advanced AI systems. We will characterise safety-relevant capabilities, understand the safety and security of systems, and assess their societal impacts.
  • Develop novel tools for AI governance. We will create practical frameworks and novel methods to evaluate the safety and societal impacts of advanced AI systems, and anticipate how future technical safety research will feed into AI governance.
  • Facilitate information exchange. We will establish clear information-sharing channels between the Institute and other national and international actors. These include stakeholders such as policymakers and international partners.

    Our staff includes senior alumni from OpenAI, Google DeepMind, start-ups and the UK government, and ML professors from leading universities. We are now calling on the world's top technical talent to join us. This is a truly unique opportunity to help shape AI safety at an international level.

    As more powerful models are expected to hit the market over the course of 2024, AISI's mission to push for safe and responsible development and deployment of AI is more important than ever.

    What we value:

  • Diverse Perspectives: We believe that a range of experiences and backgrounds is essential to our success. We welcome individuals from underrepresented groups to join us in this crucial mission.
  • Collaborative Spirit: We thrive on teamwork and open collaboration, valuing every contribution, big or small.
  • Innovation and Impact: We are dedicated to making a real-world difference in the field of frontier AI safety and capability, and we encourage innovative thinking and bold ideas.
  • Our Inclusive Environment: We are building an inclusive culture to make the Department a brilliant place to work where our people feel valued, have a voice and can be their authentic selves. We value difference and diversity, not only because we believe it is the right thing to do, but because it will help us be more innovative and make better decisions.

    Job description

    As a Research Scientist or Research Engineer at AISI, you will help to set our direction for AI system evaluations, including large language models (LLMs), and build and maintain robust evaluation frameworks for these AI systems. You will lead and contribute to projects designed to be integrated into our evaluation suite, evaluating the capabilities and safeguards of cutting-edge models, as well as more speculative research work aimed at mitigations and system understanding.

    We draw on a wide range of disciplines and value a diversity of research expertise across our five workstreams. You will be primarily associated with one workstream, though your work may at times intersect several. Our workstreams include:

  • Chem/bio: studying how LLMs and more specialised AI systems are advancing biological and chemical capabilities relating to harmful outcomes. This includes potential uplift to novice actors and future scenarios such as the design of biological agents.
  • Cyber misuse: studying how LLMs and more specialised AI systems may aid in cyber-criminality, and the adequacy of cybersecurity measures against AI systems.
  • Safeguards: evaluating the strength and efficacy of safety and security components of advanced AI systems against diverse threats which could circumvent safeguards.
  • Societal impacts: evaluating a range of impacts of advanced models that could have widespread implications for our societal fabric (e.g. undermining trust in information, psychological wellbeing, cognitive wellbeing, unequal outcomes).
  • Autonomous systems: testing for precursors to loss of control by measuring relevant capabilities in long-horizon computer-based tasks. Examples are sub-tasks of autonomous replication, AI development and self-improvement, as well as adaptation to human attempts to intervene and the ability to profitably interact with and manipulate humans. This includes trajectories that start from a misuse event as well as cases of misalignment.
  • Platform: the Platform team supports AISI with cross-cutting infrastructure and tooling. Its work ranges from interfacing with corporate services to ensure that the Research Unit can work on appropriate physical devices, to managing cloud infrastructure, to writing open-source tooling like Inspect, which is used across all workstreams. We also bear much of the technical responsibility for AISI's security posture, and work closely with internal and external stakeholders to ensure that the Research Unit can work both effectively and securely.
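
    For a concrete sense of what this tooling looks like in practice, here is a minimal sketch of an evaluation task written against Inspect. It is illustrative only: the task, dataset and model name are invented, and the exact API surface may differ between Inspect releases.

```python
# Minimal sketch of an Inspect evaluation task (illustrative; the exact
# API may differ between Inspect versions).
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate

@task
def arithmetic_capability():
    # A toy capability probe: can the model do basic arithmetic?
    dataset = [
        Sample(input="What is 17 + 25? Reply with the number only.", target="42"),
        Sample(input="What is 9 * 8? Reply with the number only.", target="72"),
    ]
    return Task(
        dataset=dataset,
        solver=generate(),   # single-turn generation, no agent scaffolding
        scorer=includes(),   # score 1 if the target string appears in the output
    )

# Model string is illustrative; Inspect resolves "provider/model" names at runtime.
# eval(arithmetic_capability(), model="openai/gpt-4o-mini")
```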
    You will work closely with your Workstream Lead and other Research Engineers and Research Scientists, and will benefit from the support of our cross-functional Platform Team. You will also collaborate with external topic-level experts, contractors, partner organisations and policy makers to coordinate and build on external research.

    There will be significant scope to contribute to the strategy of your workstream team and to design experiments with set-ups of increasing complexity.

    This advert is for individuals with strong Research Scientist or Research Engineer backgrounds who do not yet have a preference for a particular workstream. If you are interested in a specific workstream, we encourage you to check whether that team is advertising directly. Please do not apply for multiple RS/RE vacancies simultaneously.

    Person specification

    We look for some of the following skills, experience and attitudes in a Research Scientist or Research Engineer.

  • Relevant experience in industry, open-source collectives, or academia in a field related to machine learning, AI, AI security, or computer security.
  • Experience in building software systems to meet research requirements, having led or been a significant contributor to relevant software projects and demonstrating cross-functional collaboration skills.
  • Knowledge of training, fine-tuning, scaffolding, prompting, deploying, and/or evaluating current cutting-edge machine learning systems such as large language models.
  • Knowledge of statistics, for example for quantifying the uncertainty in evaluation results (see the sketch after this list).
  • A strong curiosity about how AI systems work and the security implications of this technology.
  • Motivation to conduct research that is not only curiosity-driven but also solves concrete open questions in governance and policy making.
  • Ability to work autonomously and in a self-directed way with high agency, thriving in a constantly changing environment and a steadily growing team, while figuring out the best and most efficient ways to solve a particular problem.
  • Bringing your own voice and experience, together with an eagerness to support your colleagues, a willingness to do whatever is necessary for the team's success, and finding new ways of getting things done within government.
  • A sense of mission, urgency, and responsibility for success, demonstrating problem-solving abilities and preparedness to acquire any missing knowledge necessary to get the job done.
  • Comprehensive understanding of large language models (e.g. GPT-4), including both a broad understanding of the literature and hands-on experience with things like pre-training or fine-tuning LLMs.
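
    On the statistics point: evaluation results are noisy, and a capability score reported without uncertainty can mislead. Below is a minimal sketch of one common approach, a percentile bootstrap over per-sample pass/fail outcomes; the data is synthetic and the numbers are invented for illustration.

```python
# Sketch: bootstrap confidence interval for a model's pass rate on an
# evaluation (synthetic data; purely illustrative).
import numpy as np

rng = np.random.default_rng(0)
results = rng.binomial(1, 0.62, size=200)  # pretend per-sample pass/fail outcomes

def bootstrap_ci(samples, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap CI for the mean of binary outcomes."""
    means = np.array([
        rng.choice(samples, size=len(samples), replace=True).mean()
        for _ in range(n_resamples)
    ])
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

low, high = bootstrap_ci(results)
print(f"pass rate {results.mean():.3f}, 95% CI [{low:.3f}, {high:.3f}]")
```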
    The following are also nice-to-have:

  • Extensive Python experience, including understanding the intricacies of the language, the good vs. bad Pythonic ways of doing things, and much of the wider ecosystem/tooling.
  • Direct research experience (e.g. a PhD in a technical field and/or spotlight papers at NeurIPS/ICML/ICLR).
  • Experience working with world-class multi-disciplinary teams, including both scientists and engineers (e.g. in a top-3 lab).
  • Experience acting as a bar raiser for interviews.
    We are interested in hiring individuals at a range of seniority and experience within this team, including for Senior Research Engineer / Research Scientist positions. Calibration on final title, seniority and pay will take place as part of the recruitment process. We encourage all candidates who would be interested in joining to apply.

    Core requirements

  • You should be able to spend at least 4 days per week working with us.
  • You should be able to join us for at least 12 months.
  • You should be able to work from our office in London (Whitehall) for parts of the week, but we provide flexibility for remote work.

    Benefits

    The Department for Science, Innovation and Technology offers a competitive mix of benefits including:

  • A culture of flexible working, such as job sharing, homeworking and compressed hours.
  • Automatic enrolment into the Civil Service Pension Scheme, with an average employer contribution of 27%.
  • A minimum of 25 days of paid annual leave, increasing by 1 day per year up to a maximum of 30.
  • An extensive range of learning & professional development opportunities, which all staff are actively encouraged to pursue.
  • Access to a range of retail, travel and lifestyle employee discounts.
  • The Department operates a discretionary hybrid working policy, which provides for a combination of working hours from your place of work and from your home in the UK. The current expectation for staff is to attend the office or non-home based location for 40-60% of the time over the accounting period.