Staff Threat Detection & Response Engineer

London, UK

About the AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government with direct lines to No. 10, and we work with frontier developers and governments globally. We're here because governments are critical for advanced AI going well, and AISI is uniquely positioned to mobilise them. With our resources and the UK government's unique agility and international influence, this is the best place to shape both AI development and government action.

About the Team

Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, and we treat security as a measurable, researcher-centric product: secure-by-design platforms, automated governance, and intelligence-led detection that protects our people, partners, models, and data. We work shoulder to shoulder with research units and core technology teams, and we optimise for enablement over gatekeeping, proportionate controls, low ego, and high ownership.

What you might work on

- Help design and ship paved roads and secure defaults across our platform so researchers can build quickly and safely
- Build provenance and integrity into the software supply chain (signing, attestation, artefact verification, reproducibility)
- Support strengthened identity, segmentation, secrets, and key management to create a defensible foundation for evaluations at scale
- Develop automated, evidence-driven assurance mapped to relevant standards, reducing audit toil and improving signal
- Create detections and response playbooks tailored to model evaluations and research workflows, and run exercises to validate them
- Threat model new evaluation pipelines with research and core technology teams, fixing classes of issues at the platform layer
- Assess third-party services and hardware/software supply chains; introduce lightweight controls that raise the bar
- Contribute to open standards and open source, and share lessons with the broader community where appropriate

If you want to build security that accelerates frontier-scale AI safety research, and see your work land in production quickly, this is a good place to do it.
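To give a flavour of the artefact-verification work above, here is a minimal integrity-check sketch. It is illustrative only: a real supply-chain pipeline would verify a signed attestation (e.g. via Sigstore) rather than a bare digest, and the function name is hypothetical.

```python
import hashlib

# Hypothetical sketch: verify an artefact's integrity by recomputing its
# SHA-256 digest and comparing it against the digest recorded at build time.
# Real pipelines verify signed attestations, not bare digests; this shows
# only the integrity-check step.
def verify_artefact(data: bytes, expected_sha256: str) -> bool:
    """Return True if the artefact bytes match the recorded digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

The same check slots in anywhere an artefact crosses a trust boundary, such as pulling a container image or loading model weights from a registry.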
Role Summary

Build and maintain a modern, mission-aware detection engineering practice. You'll own AISI's threat model, define detections that reflect AISI-specific risks, and collaborate with DSIT's SOC to extend coverage and context. You'll focus on signal quality, not alert volume. You will extend coverage to AI/ML surfaces, instrumenting the model lifecycle and AI platforms so threats to model weights, data pipelines, GPU estates, and inference endpoints are visible, correlated, and actionable.

Responsibilities
- Define and evolve AISI's threat model, working with platform, research, and policy teams
- Write detection rules, correlation logic, and hunt queries tailored to AISI's risk surface
- Ensure relevant signals are logged, routed, and contextualised appropriately
- Maintain detection playbooks, triage documentation, and escalation workflows
- Act as a liaison between AISI engineering and DSIT's central SOC
- Evaluate detection gaps and propose new signal sources or telemetry improvements
- Extend the threat model to AI/ML: data/feature pipelines, training/fine-tuning, evaluations/release gates, registries, GPUs, and inference services
- Develop detections for AI-specific risks: model weight custody/exfiltration (e.g., anomalous KMS decrypts, S3 access), registry tampering, dataset poisoning, training pipeline/image compromise, GPU abuse/cryptomining, and inference abuse (prompt injection/data exfiltration patterns, anomalous RAG connector access)
- Define hunts and correlations that tie AI safety/evaluation signals (red-team hits, eval regressions, release gate overrides) to security events and insider/outsider activity
- Author and rehearse AI-focused incident playbooks (weights leak, compromised model artefacts, inference abuse campaigns) with DSIT's SOC
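As a flavour of the detection-as-code work described above, here is a minimal, hypothetical hunt over CloudTrail events that flags principals with anomalous KMS Decrypt volume. Field names follow CloudTrail's JSON schema; the threshold and function name are purely illustrative, not AISI's actual rules.

```python
from collections import Counter

# Illustrative threshold: calls per time window. A real detection would be
# tuned against per-principal baselines, not a fixed constant.
DECRYPT_THRESHOLD = 100

def flag_anomalous_decrypts(events, threshold=DECRYPT_THRESHOLD):
    """Return principals whose KMS Decrypt call count exceeds the threshold.

    `events` is an iterable of parsed CloudTrail records; only
    kms.amazonaws.com Decrypt events are counted, keyed by caller ARN.
    """
    counts = Counter(
        e["userIdentity"]["arn"]
        for e in events
        if e.get("eventSource") == "kms.amazonaws.com"
        and e.get("eventName") == "Decrypt"
    )
    return {arn: n for arn, n in counts.items() if n > threshold}
```

In practice a rule like this would run in a detection pipeline over a rolling window and feed a correlation layer, so that a decrypt spike can be tied to, say, unusual S3 access against a weights bucket.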
Profile requirements

- Strong understanding of detection-as-code, MITRE ATT&CK, log pipelines, and cloud signal sources
- Able to navigate outsourced SOC relationships while owning internal threat understanding
- Familiarity with AWS CloudTrail, GuardDuty, KMS, S3 access logs, EKS/ECS audit, and custom log ingestion; exposure to SageMaker/Bedrock or equivalent a plus
- Curious, methodical, and proactive mindset
- Practical grasp of AI/ML attack surfaces and telemetry needs (model registries, weights custody, GPU/accelerator fleets, inference gateways, vector stores)
- Familiarity with AI threat frameworks (e.g., MITRE ATLAS, OWASP Top 10 for LLMs) desirable
- Detection engineering mindset focused on signal quality and measurable coverage
- Understanding of cloud-native telemetry and logging gaps
- Experience instrumenting and detecting threats across AI/ML workloads (weights, datasets, training/inference) and correlating safety and security signals

Salary and Benefits

We are hiring individuals at all ranges of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is available below; salaries comprise a base salary, a technical allowance, and additional benefits as detailed on this page.

- Level 3 – Total Package £65,000–£75,000, inclusive of a base salary of £35,720 plus a technical talent allowance of £29,280–£39,280
- Level 4 – Total Package £85,000–£95,000, inclusive of a base salary of £42,495 plus a technical talent allowance of £42,505–£52,505
- Level 5 – Total Package £105,000–£115,000, inclusive of a base salary of £55,805 plus a technical talent allowance of £49,195–£59,195
- Level 6 – Total Package £125,000–£135,000, inclusive of a base salary of £68,770 plus a technical talent allowance of £56,230–£66,230
- Level 7 – Total Package £145,000, inclusive of a base salary of £68,770 plus a technical talent allowance of £76,230

This role sits outside of the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures. A range of pension options are available, details of which can be found on the Civil Service website.