Machine Learning Evaluation Engineer

Skills: AI Evaluation, Research Methods, Python, LLM Observability

Salary range: £60,000-£80,000 p.a. + equity, depending on experience (up to £100,000 for candidates with exceptional relevant experience)

Apply: Email us at work@writewithmarker.com and tell us a little bit about yourself and your interest in the future of writing, along with your CV or a link to your CV site.

What is Marker?
Marker is an AI-native word processor: a reimagining of Google Docs and Microsoft Word. Join us in building the next generation of agentic AI assistants supporting serious writers in their work. We are a small, ambitious company using cutting-edge technology to give everybody writing superpowers.

What you'll do at Marker
We are looking for someone with a couple of years' experience in academia or industry who can help us bring rigour and insight to our AI systems through evaluation, research, and observability. You'll work directly with Ryan Bowman (CPO) to help us understand and improve how our AI assists writers. Here are some examples of areas you will be working in:

- Design and implement evaluation frameworks for complex, subjective AI outputs (like writing feedback that's meant to inspire rather than just correct)
- Build flexible evaluation pipelines that can assess quality across multiple dimensions, from human preference to actual writing improvement
- Research and prototype new evaluation methodologies for creative and subjective AI tasks
- Collaborate with our engineering team to integrate evaluation insights into our development process
- Help define what "quality" means for different AI outputs and create metrics that actually matter for our users
- Work on challenging problems like: "How do we automatically evaluate whether an AI comment successfully encourages thoughtful revision?"

What we can offer
- A calm, human-friendly work environment among kind and experienced professionals
- Fun, creative, novel, and interesting technical work at the intersection of AI research and product development
- An opportunity to work with and learn about the latest advancements in AI evaluation and language models
- Direct collaboration with leadership to shape how we understand and improve our AI systems
- As much responsibility and as many growth opportunities as you want to take on

Are you a good fit for this role?
To be successful in this role, you will recognise yourself in the following:

- You have experience with AI/ML evaluation methodologies and can speak the language of AI research
- You've worked hands-on with language models and understand the challenges of evaluating subjective, creative outputs
- You are a self-starter willing to work independently and at speed; we imagine a two-week experiment cadence at most
- You are familiar with and have worked on related technical systems (evaluation pipelines, data collection tools) but don't need to be a full-stack engineer. You won't be expected to build these alone!
- You think critically about which metrics actually matter and aren't satisfied with vanity metrics
- You're comfortable working with ambiguous problems where the "right answer" isn't obvious
- You have some programming experience (Python preferred) and can work independently on technical projects
- You're interested in the intersection of AI capabilities and human creativity

An exceptional candidate for this role would be able to demonstrate some of the following:

- Experience building evaluation systems for generative AI in production environments
- Knowledge of TypeScript and the ability to integrate with our existing systems
- Background in human-computer interaction, computational creativity, or writing research
- Experience with A/B testing, statistical analysis, and experimental design
- Familiarity with modern AI observability and monitoring tools
- Published research or a deep interest in AI evaluation methodologies
- Interest in writing (fiction, non-fiction, essays)

However, you are NOT expected to:

- Be a senior software engineer; we're looking for someone who can build evaluation systems, not architect our entire backend
- Have solved every evaluation problem before; this is cutting-edge work and we're figuring it out together
- Be experienced with every library in our stack from day one; you'll work closely with Ryan and our engineering team
- Have a specific degree; we value practical experience and research ability over credentials

Our stack
You'll be working with the following technologies:

- Our AI engine uses a range of models, including self-hosted and fine-tuned open-source models, as well as the latest reasoning models from Anthropic and OpenAI
- Evaluation and research tools built primarily in Python, with integration into our TypeScript infrastructure
- Our agentic AI execution platform is written in TypeScript, hosted on Cloudflare Workers
- Standard ML tooling: various evaluation frameworks, data analysis tools, and monitoring systems
- Our text editor frontend is a web application built with React, TypeScript and ProseMirror

Apply now!
Interested? Email us at work@writewithmarker.com with your CV (or a link to your CV site). Tell us a little bit about yourself and why you'd like to work at Marker! Please note that this role is currently only available based in our London hub, and at this time we are not able to sponsor work visas in the UK.