
Senior Machine Learning Infrastructure Engineer

PhysicsX · London, United Kingdom

Onsite · Full-time · Senior level

About this role

About us

PhysicsX is a deep-tech company with roots in numerical physics and Formula One, dedicated to accelerating hardware innovation at the speed of software.
We are building an AI-driven simulation software stack for engineering and manufacturing across advanced industries. By enabling high-fidelity, multi-physics simulation through AI inference across the entire engineering lifecycle, PhysicsX unlocks new levels of optimization and automation in design, manufacturing, and operations — empowering engineers to push the boundaries of possibility. Our customers include leading innovators in Aerospace & Defense, Materials, Energy, Semiconductors, and Automotive.

Note: We are currently recruiting for multiple positions; please apply only for the role that best aligns with your skill set and career goals.

The Role

The Senior ML Infrastructure Engineer will extend and operate the infrastructure that powers our research model training, fine-tuning, and serving pipelines. You will be embedded within our Research function, partnering directly with ML engineers and research scientists to ensure they can train Large Physics Models efficiently and reliably at scale.

Team Context

In this role, you will be vertically embedded in Research, working daily with:

  • Research Scientists who determine the model architectures and methods
  • ML Engineers who implement and develop the models
  • Simulation Data Engineers who are accountable for upstream data pipelines

You will have end-to-end responsibilities over the research infrastructure, with the autonomy to make architectural decisions and the responsibility to keep data flowing reliably.

Horizontally, you will be part of an infrastructure engineering group responsible for infrastructure across the company.

What you will do

Training Infrastructure

  • Design and operate distributed training infrastructure for neural operator architectures (Transolver, Point Cloud Transformer, etc.) on our large NVIDIA DGX B200 platform.
  • Optimize training pipelines for throughput, fault tolerance, and cost efficiency, including checkpointing strategies, gradient accumulation, and multi-node synchronization.
  • Build and maintain experiment tracking and observability systems that give researchers clear visibility into training runs, hyperparameter sweeps, and model performance.

Data I/O and Performance

  • Solve data loading bottlenecks for large-scale mesh datasets.
  • Optimize data pipelines for efficient I/O from cloud storage, including prefetching, caching, and format optimization.
  • Work with heterogeneous data sources of varying formats and resolutions.

Model Serving and Deployment

  • Build serving infrastructure for pre-trained LPMs, supporting both zero-shot inference and uncertainty quantification (Monte Carlo Dropout).
  • Design and implement model packaging pipelines for customer deployment; models must run reliably in customer environments and support fine-tuning.
  • Ensure reproducibility: any model checkpoint should be deployable with consistent behaviour.

Platform and Tooling

  • Improve developer experience for the Research team: fast iteration cycles, reliable CI/CD, and clear debugging tools.
  • Collaborate with the broader Infrastructure team on shared patterns and standards.

What you bring to the table

  • Ability to scope and effectively deliver projects, prioritising activity as needed.
  • Problem-solving skills and the ability to analyse issues, identify causes, and recommend solutions quickly.
  • Excellent collaboration and communication skills, especially in a research setting. You can translate "the model isn't converging" into infrastructure hypotheses and solutions, and can bridge technical abstractions with implementations.
  • 5+ years of experience building and operating ML infrastructure at scale:
    • Deep expertise in distributed training: you've debugged NCCL hangs, optimized collective communication, and know when to use FSDP vs. DDP vs. pipeline parallelism
    • Strong systems fundamentals: Linux, networking (including GPU interconnects such as NVLink and InfiniBand), storage I/O, profiling, and performance optimization
    • Production experience with Kubernetes and SLURM for job orchestration on GPU clusters
    • Proficiency in Python and ML frameworks (PyTorch strongly preferred)
    • Experience with cloud GPU infrastructure; ideally CoreWeave or similar GPU/HPC-focused clouds

Ideally

  • Experience with geometric deep learning or neural operators: architectures that operate on meshes, point clouds, or graphs
  • Background in HPC for simulation engineering, familiarity with how CFD/FEA workflows generate and consume data
  • Experience building model serving infrastructure with latency and throughput requirements
  • Familiarity with experiment tracking tools (Weights & Biases, MLflow) and observability stacks (Prometheus, Grafana)
  • Experience packaging models for deployment into customer environments (containers, model registries, versioning)

 

What we offer

  • Equity options – share in our success and growth.
  • 10% employer pension contribution – invest in your future.
  • Free office lunches – great food to fuel your workdays.
  • Flexible working – balance your work and life in a way that works for you.
  • Hybrid setup – enjoy our new Shoreditch office while keeping remote flexibility.
  • Enhanced parental leave – support for life’s biggest milestones.
  • Private healthcare – comprehensive coverage.
  • Personal development – access learning and training to help you grow.
  • Work from anywhere – extend your remote setup to enjoy the sun or reconnect with loved ones.
 
We value diversity and are committed to equal employment opportunity regardless of sex, race, religion, ethnicity, nationality, disability, age, sexual orientation or gender identity. We strongly encourage individuals from groups traditionally underrepresented in tech to apply. To help make a change, we sponsor bright women from disadvantaged backgrounds through their university degrees in science and mathematics. 
 
We collect diversity and inclusion data solely for the purpose of monitoring the effectiveness of our equal opportunities policies and ensuring compliance with UK employment and equality legislation. This information is confidential, used only in aggregate form, and will not influence the outcome of your application. 
 

About PhysicsX

PhysicsX is a physical AI company on a mission to accelerate innovation and overhaul what engineering and manufacturing look like today. We are building a new software stack to deliver deep AI enablement across the entire engineering lifecycle. We partner with leading organizations in aerospace & defense, automotive, semiconductors, materials, and energy & renewables, supporting them on some of their most critical and complex challenges. PhysicsX is headquartered in the United Kingdom, with offices in London and New York. Last year we raised $155 million in our Series B financing. The round was led by Atomico, with participation from Temasek, Nvidia, Siemens, Applied Materials, and July Fund, as well as continued support from existing investors including General Catalyst, NGP, Radius Capital, Standard Investments, and Allen & Co. Learn more about how we work: https://www.physicsx.ai/careers .
