# How to achieve truly serverless GPUs

We are in the age of inference. Billion- to trillion-parameter neural networks are run on specialized accelerators at quadrillions of operations per second to generate media, author software, and fold proteins at massive scale.

Inference workloads are more variable and less predictable than the training workloads that previously dominated. That makes them a natural fit for serverless computing, where applications are defined at a level above the (virtual) machine so that they can be more readily scaled up and down to handle variable load. But serverless computing only works if new replicas can be spun up quickly: as fast as demand changes, which can be at the scale of seconds. Naïvely spinning up a new instance of, say, SGLang serving a billion-parameter LLM on a B200 can take tens of minutes, or stall for hours on GPU availability.

At Modal, we’ve done deep engineering work over the last five years to solve this problem. In this blog post, we walk through what we did. There are four key ingredients:

- **Cloud buffers**: maintain a small buffer of healthy, idle GPUs to take on new load
- **Custom filesystem**: serve container images lazily out of a content-addressed, multi-tier cloud-native cache
- **Checkpoint/restore**: fast-forward through CPU-side initialization by directly restoring processes into memory
- **CUDA checkpoint/restore**: fast-forward through GPU-side initialization by directly restoring CUDA contexts into memory

Together, they take AI inference server replica scaling from multiple kiloseconds down to just tens of seconds.

We’ve shared bits and pieces of this work along the way, because we believe that secrecy is a bad moat. And if more people learn how to use GPUs efficiently, there will be more available in the market for us! But this blog post is the first time we’ve put the entire story together in one place. We hope it convinces you that our system is worth buying into, or worth joining us to build.

## Why care about serverless GPUs?

To maximize GPU Allocation Utilization for inference workloads.

First, let’s frame the problem clearly. GPUs are expensive and scarce, so we want to maximize their utilization, where “utilization” is the following unitless quantity:

Utilization := Output achieved ÷ Capacity paid for

There are many ways to measure utilization, that is, to define output and capacity. The most sophisticated and most stringent is probably “Model FLOP/s Utilization” (MFU), which divides raw algorithmic operation requirements by aggregate arithmetic bandwidth. This is catnip for engineers. It’s also especially critical for “hero run” large-scale training, so it draws a lot of investment and attention, most recently when everyone dunked on xAI’s ~10% MFU.

But at the other end of the stack, there’s a more basic form of utilization that wrecks the relationship between achieved output and allocated capacity for inference workloads: GPU Allocation Utilization.

GPU Allocation Utilization := GPU-seconds running application code ÷ GPU-seconds paid for

**Aside on “GPU Utilization” terminology.** The “GPU Utilization” metric reported by nvidia-smi and similar tools is in between these two extremes. It reports the fraction of…
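To make GPU Allocation Utilization concrete, here is a back-of-the-envelope sketch in Python. The replica timings are hypothetical, illustrative numbers (not measurements from any particular system), chosen only to show how cold-start time dominates the metric:

```python
def gpu_allocation_utilization(app_seconds: float, paid_seconds: float) -> float:
    """GPU Allocation Utilization := GPU-seconds running application code ÷ GPU-seconds paid for."""
    return app_seconds / paid_seconds

# Hypothetical replica lifecycle, in seconds (illustrative numbers only).
serving = 10 * 60           # 10 minutes actually handling inference requests

naive_cold_start = 20 * 60  # tens of minutes to pull the image and initialize the model naively
print(f"naive:      {gpu_allocation_utilization(serving, naive_cold_start + serving):.0%}")  # ~33%

fast_cold_start = 30        # tens of seconds once cold starts are engineered away
print(f"serverless: {gpu_allocation_utilization(serving, fast_cold_start + serving):.0%}")   # ~95%
```

Under these assumed numbers, cutting cold start from tens of minutes to tens of seconds takes allocation utilization from roughly a third to nearly all of the GPU-seconds paid for.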

