ML Infra Engineer - Supercomputing
Physical Intelligence
Location: San Francisco
Employment Type: Full time
Location Type: On-site
Department: Machine Learning
Physical Intelligence builds general-purpose AI for the physical world. Training our models requires orchestrating thousands of accelerators across a heterogeneous fleet of GPU and TPU clusters — spanning different hardware generations, cloud providers, and cluster topologies.
Today, researchers often need to know which cluster to target, what resources are available, and how to configure their jobs accordingly. That doesn't scale. We need a scheduling and compute layer that makes the right placement decision automatically — routing jobs to the best cluster based on availability, hardware fit, cost, and priority — so researchers can focus entirely on the science.
This role owns that problem end-to-end: the scheduling systems, the placement logic, the cluster management layer, and the operational tooling that keeps it all running.
This is not cloud DevOps. It's not about standing up clusters and walking away. It's a systems role for people who care about intelligent resource allocation, utilization, fault tolerance, and making large-scale distributed training seamless.
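As a rough illustration of the kind of placement decision this layer makes, here is a minimal sketch in Python. Every name, field, and weight below is a hypothetical assumption for illustration, not a description of our actual system: the idea is that placement reduces to scoring each cluster in the fleet against a job's requirements and picking the best feasible one.

```python
# Hypothetical sketch of cluster selection for a training job.
# Every field, name, and weight here is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    accelerator: str            # e.g. "H100" or "TPUv5p"
    free_chips: int             # currently unallocated accelerators
    hourly_cost_per_chip: float

@dataclass
class Job:
    accelerator: str            # required hardware type
    chips: int                  # accelerators requested

def score(cluster: Cluster, job: Job) -> float:
    """Higher is better; infeasible placements score -inf."""
    if cluster.accelerator != job.accelerator or cluster.free_chips < job.chips:
        return float("-inf")
    headroom = cluster.free_chips - job.chips        # prefer tight packing
    cost = cluster.hourly_cost_per_chip * job.chips  # prefer cheap capacity
    return -0.5 * headroom - 1.0 * cost              # illustrative weights

def place(job: Job, fleet: list[Cluster]) -> Cluster | None:
    best = max(fleet, key=lambda c: score(c, job), default=None)
    if best is None or score(best, job) == float("-inf"):
        return None
    return best
```

A real system would also fold in topology, quota, queue depth, and priority; the point is only that, once requirements are declared, placement becomes a scoring problem over the fleet rather than a decision each researcher makes by hand.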
The Team
The ML Infrastructure team supports and accelerates PI’s core modeling efforts by building the systems that make large-scale training reliable, reproducible, and fast. You will work closely with the rest of ML Infra (training systems), the data platform team, and research teams to ensure compute scheduling is never the bottleneck.
In This Role You Will
- Own Intelligent Job Scheduling and Placement: Design and build multi-tenant scheduling systems that automatically place training jobs on the best available cluster based on hardware requirements, topology, availability, cost, and priority. Support fair resource sharing across teams and projects with quota management, priority tiers, and preemption policies. Abstract away cluster differences so researchers submit jobs without needing to know where they will land.
- Scale Multi-cluster Orchestration: Build the control plane that manages the job lifecycle across diverse clusters (mixed GPU/TPU, multi-generation hardware, on-prem/cloud) and enables seamless job migration, failover, and re-scheduling.
- Optimize Accelerator Utilization and Efficiency: Monitor and optimize GPU/TPU utilization across the entire fleet. Implement priority, preemption, queueing, and fairness policies that balance research velocity with cost efficiency (a minimal sketch of one such preemption policy follows this list).
- Ensure Scaling and Stability: Implement fault detection, automatic recovery, and resilience for long-running multi-node training jobs. Manage health checking, node management, and scaling to thousands of accelerators.
- Support Inference and Robot Deployment: Extend scheduling and orchestration to inference workloads, including deploying models to edge devices on physical robots.
- Enhance Observability and Developer Experience: Build the dashboards, alerting, SLOs, and debugging tools necessary for researchers to understand job status and for the team to ensure high scheduling quality and cluster reliability.
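To make the priority and preemption bullets above concrete, here is a minimal sketch of one common policy: evict the lowest-priority running jobs, and only jobs the arrival strictly outranks, until it fits. The data model is a hypothetical illustration, not our scheduler.

```python
# Hypothetical sketch of priority-based preemption on one cluster.
# Evict lowest-priority jobs first, and only jobs the arrival strictly
# outranks; all names and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RunningJob:
    job_id: str
    chips: int
    priority: int               # higher = more important

def plan_preemptions(needed_chips: int, arrival_priority: int,
                     free_chips: int,
                     running: list[RunningJob]) -> list[RunningJob] | None:
    """Return the jobs to evict so the arrival fits, or None if impossible."""
    if free_chips >= needed_chips:
        return []                               # fits with no preemption
    victims: list[RunningJob] = []
    reclaimed = free_chips
    for job in sorted(running, key=lambda j: j.priority):
        if job.priority >= arrival_priority:
            break                               # never evict equal or higher priority
        victims.append(job)
        reclaimed += job.chips
        if reclaimed >= needed_chips:
            return victims
    return None                                 # cannot make room; queue instead
```

A real policy would also weigh checkpoint freshness and restart cost before evicting a long multi-node run, since a poorly timed preemption can throw away hours of training.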
What We Hope You’ll Bring
We’re intentionally flexible on exact background, but strong candidates usually have:
- Strong software engineering fundamentals
- Experience building or operating job scheduling / resource management systems at scale
- Experience with large-scale compute clusters (GPU and/or TPU)
- Familiarity with schedulers and orchestration systems (Slurm, Kubernetes, GKE, K3s, or internal equivalents)
- Comfort reasoning about resource allocation, bin-packing, priority scheduling, and multi-tenancy (see the sketch after this list)
- Understanding of how ML training workloads behave — long-running, multi-node, sensitive to stragglers, topology-dependent
- A bias toward owning systems end-to-end, from design to operation
- An appetite for working closely with researchers and unblocking fast-moving projects
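To illustrate the kind of resource-allocation reasoning mentioned above, here is a minimal first-fit-decreasing bin-packing sketch; the names are hypothetical and the example is illustrative only.

```python
# Hypothetical first-fit-decreasing sketch: pack jobs (chip counts) onto
# nodes of fixed capacity. Illustrative of the reasoning, not our code.
def first_fit_decreasing(job_sizes: list[int], node_capacity: int) -> list[list[int]]:
    nodes: list[list[int]] = []                    # each node is a list of job sizes
    for size in sorted(job_sizes, reverse=True):   # largest jobs first
        for node in nodes:
            if sum(node) + size <= node_capacity:
                node.append(size)                  # fits on an existing node
                break
        else:
            nodes.append([size])                   # open a new node
    return nodes

# e.g. first_fit_decreasing([4, 8, 2, 2, 8], 8) -> [[8], [8], [4, 2, 2]]
```

A production allocator layers topology constraints, fragmentation costs, and preemption on top, but the underlying combinatorics are the same.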
Bonus Points If You Have
- Experience building multi-cluster or federated scheduling systems
- Experience with TPU infrastructure (GCP TPU slices, Multislice, GKE)
- Background in cluster resource managers (Borg, YARN, Mesos, or custom schedulers)
- Experience with Linux systems engineering, networking, and infrastructure-as-code
- Familiarity with NCCL/collective communication and topology-aware placement
- Experience with capacity planning and cloud cost optimization at scale
- Familiarity with JAX, PyTorch, or similar ML frameworks at the runtime/systems level
Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.