
Job Description from Civil Service Jobs
About the Role
Research Engineers build and maintain scientific software to enable high-quality research. They are uniquely placed to bridge the worlds of software engineering and research, and at the AI Safety Institute will be involved in challenging and diverse projects at the cutting edge of advanced AI development.
As a Research Engineer you will either be embedded within one or more of our research teams, or you will sit in a cross-cutting group of Research Engineers within the Platform Team.
In either case you will collaborate with research scientists and people running evaluations and user studies on the one hand, and with our Platform Team on the other. You might also onboard, run and improve existing evaluations from the wider research community, as well as scale up new evaluation methods developed in-house.
We draw on a wide range of disciplines and value a diversity of research expertise across our five workstreams. You will be primarily associated with one of our workstreams (please specify in your application which you are most interested in); however, your work will sometimes intersect multiple workstreams.
Chem/bio: studying how LLMs and more specialised AI systems are advancing biological and chemical capabilities relating to harmful outcomes. This includes potential uplift to novice actors and future scenarios such as the design of biological agents.
Cyber misuse: studying how LLMs and more specialised AI systems may aid cyber-criminality, and the adequacy of cybersecurity measures against AI systems.
Safeguards: evaluating the strength and efficacy of safety and security components of advanced AI systems against diverse threats which could circumvent safeguards.
Societal impacts: evaluating a range of impacts of advanced models that could have widespread implications for our societal fabric (e.g. undermining trust in information, psychological wellbeing, cognitive wellbeing, unequal outcomes).
Autonomous systems: testing for precursors to loss of control by measuring relevant capabilities in long-horizon computer-based tasks. Examples are sub-tasks of autonomous replication, AI development and self-improvement, as well as adaptation to human attempts to intervene and the ability to profitably interact with and manipulate humans. This includes trajectories that start from a misuse event as well as cases of misalignment.
The Platform Team will be providing the foundational infrastructure for our research projects. You will build on top of our platform to create bespoke, load-bearing infrastructure and tools for individual research projects. You will be able to independently run and analyse your own experiments to diagnose problems and understand our research work and tech stack in detail.
You will spend your time working not just on infrastructure code but also on the planning and execution of research projects, such as a wide range of evaluations of cutting-edge AI systems. This includes analysing and visualising the outcomes of complex evaluation or fine-tuning procedures and managing large datasets.
As a Research Engineer, it is your responsibility to make the hard trade-offs between when code needs to be load-bearing enough to support multiple experiments and when it is better to write “good enough” code to quickly prove or disprove a hypothesis.
In this you will work very closely with our Research Scientists who will often be the main users for the tools you build.
Existing Civil Servants and applicants from accredited NDPBs are eligible to apply, but will only be considered on a loan basis (Civil Servants) or secondment (accredited NDPBs). Prior agreement to be released on a loan basis must be obtained before commencing the application process. In the case of Civil Servants, the terms of the loan will be agreed between the home and host department and the Civil Servant. This includes grade on return.