โ† Back to jobs

ML Platform / MLOps Engineer

Profluent
Full-time · Remote · US
Posted: 2026-05-11 · Until: 2026-06-10
Job Description
Profluent is an AI-first protein design company. Founded in 2022, we develop deep generative models to design and validate novel, functional proteins to revolutionize biomedicine. Based in Emeryville, CA, we are backed by leading investors including Altimeter Capital, Bezos Expeditions, Spark Capital, Insight Partners, Air Street Capital, AIX Ventures, and Convergent Ventures, and have raised over $150M to date.

As we continue to push the boundaries of what is possible, we're seeking an ML Platform / MLOps Engineer on the machine learning team to build and operate the infrastructure that powers our machine learning systems. You will work closely with machine learning scientists, protein design scientists, and engineers to build reliable, scalable platforms for training, evaluating, and deploying large-scale generative biology models. As an early member of the company, you'll have significant ownership over the systems and tools that enable our research team to move quickly from experiments to production models.

What You'll Work On
- Infrastructure supporting large-scale generative models for proteins
- Systems that process massive biological datasets
- Experimentation platforms that enable rapid iteration by ML researchers
- Production services powered by machine learning models

Responsibilities
- Develop infrastructure that enables researchers to run large-scale ML training and inference workloads reliably and efficiently on GPU clusters
- Implement and maintain security best practices across our ML infrastructure, including access control, secrets management, and least-privilege policies
- Monitor and optimize infrastructure performance, reliability, and cost
- Evaluate different open source infrastructure solutions and cloud providers
- Build and maintain machine learning pipelines to support model inference