PipelineRL uses vLLM as the inference engine for rollout generation. The inference engine samples tokens and returns token logprobs; the trainer uses those logprobs to compute policy ratios, KL, clip rate, entropy, and reward. Any discrepancy in how those logprobs are computed can change the training dynamics. This is the train-inference mismatch we needed to eliminate during the vLLM V0 to V1 migration.

TL;DR: vLLM V1 matched our vLLM V0 reference after we fixed four things: processed rollout logprobs, V1-specific runtime defaults, the inflight weight-update path, and the fp32 `lm_head` used for the final projection. We fixed the backend behavior before changing the RL objective. The reference run used vLLM 0.8.5; the V1 runs used vLLM 0.18.1.

Figure 1 shows the final result. The red run is the initial V1 attempt, and the green run is the final V1 run after the fixes described below.

vLLM V1 is a substantial rewrite of the V0 engine. Our migration target was therefore deliberately narrow: reproduce the V0 reference run's training metrics, not improve on them. Those metrics came from a GSPO training run, the objective used for this experiment. The same class of mismatch can surface in PPO, GRPO, or any online RL system that treats rollout-side logprobs as part of the optimization target.

The initial V1 run showed the problem clearly. The trainer-side logprobs and reward moved away from the V0 reference early in training. The same pattern appears across the trainer metrics; clip rate is the easiest signal to read in the initial comparison.

We separated the possible causes into three layers:

- logprob semantics: what the engine reports as a token logprob;
- runtime behavior: engine defaults, caching, scheduling, and weight updates;
- the RL objective itself.

We initially suspected the third layer too early. The useful diagnosis came from treating the first two as backend-behavior problems and ruling them out first.

The first issue was semantic. vLLM V1 returns logprobs from the raw model outputs by default, before logits post-processing such as temperature scaling, penalties, and top-k/top-p filtering. PipelineRL expected logprobs from the processed distribution actually used by the sampler. Switching V1 to report processed logprobs removed the obvious mean offset in rollout logprobs. The training curves still showed a gap relative to the known-good reference, so the next issue had to be in the inference path.

The policy-ratio plot shows this directly. Once `processed_logprobs` is on for V1, the mean policy ratio stays centered very close to 1.0 across all three runs. That establishes the mean-bias fix. The remaining mismatch shows up in clip rate, KL, entropy, and downstream training behavior.

The early V1 run mixed the engine-version change with V1's new runtime defaults. For the parity run, we made those choices explicit instead of inheriting them; a configuration sketch appears below.

Prefix caching deserves a separate note. It is normally a correctness-preserving inference optimization for a fixed model state. In this online RL setup, it was a V1-only difference in cache lifetime and reuse relative to the V0 reference path. The actor was also handling repeated prefixes, concurrent requests, async scheduling, and inflight weight updates. A prefix-cache hit can reuse state computed before a weight update when the cache policy ignores the weight-update boundary. Disabling prefix caching removed one V1-only degree of freedom from the parity comparison.

Weight synchronization also had to match the online-RL update model. One option was to make V1 stricter than V0 by draining requests and clearing caches at every update, but that would answer a separate question; we first needed to verify that V1 could match the existing V0 behavior.

Lag was a useful runtime diagnostic: the initial V1 path carries more persistent lag later in training than the corrected V1 run.
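To make the parity configuration concrete, here is a minimal sketch of the rollout-engine setup, assuming a recent vLLM release where `logprobs_mode` and `enable_prefix_caching` are engine arguments and `reset_prefix_cache()` is available. The model name and the `load_new_weights` hook are hypothetical; this illustrates the settings discussed above, not PipelineRL's actual actor code.

```python
from vllm import LLM, SamplingParams

# Parity-run engine configuration (sketch). In recent vLLM releases,
# logprobs_mode="processed_logprobs" makes the engine report logprobs from
# the post-processed distribution the sampler actually draws from, instead
# of the raw pre-processing model outputs (the V1 default).
llm = LLM(
    model="my-org/my-policy-model",       # hypothetical model name
    logprobs_mode="processed_logprobs",   # match the trainer's expectation
    enable_prefix_caching=False,          # remove V1-only cache reuse
)

params = SamplingParams(
    temperature=1.0,
    max_tokens=512,
    logprobs=0,  # return the logprob of each sampled token
)

def update_weights_and_generate(prompts, load_new_weights):
    # Inflight weight updates: with prefix caching enabled, cached KV state
    # computed under the old weights could be reused after the update.
    # Resetting the cache at the update boundary avoids that; with prefix
    # caching disabled it is a no-op, kept here to mark the boundary.
    load_new_weights(llm)       # hypothetical weight-update hook
    llm.reset_prefix_cache()    # clear cached state at the update boundary
    return llm.generate(prompts, params)
```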
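And to make precise how rollout logprobs enter the trainer metrics used throughout this comparison, here is a generic PPO-style diagnostic computation in PyTorch. It is a sketch of the standard quantities, not PipelineRL's trainer code; `clip_eps` and the k3 KL estimator are common choices assumed here. (GSPO aggregates the ratio at the sequence level, but the same per-token logprobs feed it.)

```python
import torch

def ratio_diagnostics(trainer_logprobs: torch.Tensor,
                      rollout_logprobs: torch.Tensor,
                      clip_eps: float = 0.2):
    """PPO-style per-token diagnostics from two sets of logprobs.

    trainer_logprobs: logprobs recomputed by the trainer for sampled tokens
    rollout_logprobs: logprobs reported by the inference engine at sampling

    Any systematic bias between the two (raw vs. processed logprobs, or a
    precision mismatch in the LM head) shifts the ratio away from 1.0 and
    shows up directly in clip rate and the KL estimate.
    """
    log_ratio = trainer_logprobs - rollout_logprobs
    ratio = log_ratio.exp()

    # Fraction of tokens whose ratio leaves the clipping trust region.
    clipped = (ratio < 1.0 - clip_eps) | (ratio > 1.0 + clip_eps)
    clip_rate = clipped.float().mean()

    # k3 estimator of KL(rollout || trainer): unbiased, low variance.
    approx_kl = (ratio - 1.0 - log_ratio).mean()

    return ratio.mean(), clip_rate, approx_kl
```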
The V1 backend fixes above removed the obvious migration issues, but final parity still required matching the numerical path used to compute logits. The trainer used an fp32 `lm_head` for the final projection, so the rollout backend had to match that behavior; a minimal sketch of the fp32 head path appears at the end of this section.

A closely related issue appears in the MiniMax-M1 technical report: their RL run showed a training/inference token-probability mismatch that they traced to the LM output head and fixed by computing the head in fp32.

This matters because the RL update consumes token logprobs directly. Small changes in logits become visible in policy ratios, KL, and clipping, so the precision of the final projection is part of the correctness surface for online RL. The ScaleRL paper later includes fp32 logits/head computation as part of its RL recipe and ablates it as a useful design choice for large-scale RL.

With the fp32 `lm_head` path included, reward gives a compact view of the final parity result. In Figure 6, the final V1 run tracks the V0 reference; the initial V1 attempt produces a clearly different reward curve.

The negative results are important because they rule out common explanations. Objective-side corrections such as truncated importance sampling, importance-ratio reweighting, and related methods are useful tools. If rollouts are intentionally stale, generated asynchronously, or produced by a backend where equivalence to the trainer-side policy is unavailable, then some form of correction is often the right thing to add.

The first problem here was inference correctness. After moving to V1, the rollout backend returned logprobs and runtime behavior that broke the trainer's assumptions. Adding an objective-side correction at that point would have mixed two questions: whether the inference backend faithfully reports the distribution it samples from, and whether the objective needs a correction for genuine off-policy drift. Those questions need to be separated; otherwise an objective-side correction can compensate for broken inference-backend behavior, which makes the training curve harder to interpret.

The current objective can still improve. After inference parity is restored, the next improvement is the usual async/off-policy cleanup: correcting for intentionally stale or asynchronous rollouts with the importance-sampling tools above. The main lesson from this migration is narrower: fix backend correctness first, then add corrections for the mismatch that remains.
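As a concrete footnote to the fp32 `lm_head` discussion above, here is a minimal sketch of the technique, assuming a standard output projection: upcast the hidden states and the head weights to fp32 before the matmul and the log-softmax, so the trainer and the rollout backend share the same numerical path for the final projection. It is an illustration, not PipelineRL's or vLLM's actual implementation.

```python
import torch
import torch.nn.functional as F

def fp32_head_logprobs(hidden: torch.Tensor,
                       lm_head_weight: torch.Tensor) -> torch.Tensor:
    """Compute token logprobs with an fp32 final projection.

    hidden:         [batch, seq, d_model], typically bf16/fp16
    lm_head_weight: [vocab, d_model], the model's output projection

    Upcasting before the matmul and the log-softmax keeps the entire
    final projection in fp32, which is the parity-relevant part of the
    computation on both the trainer and the rollout side.
    """
    logits = F.linear(hidden.float(), lm_head_weight.float())  # fp32 matmul
    return F.log_softmax(logits, dim=-1)                       # fp32 softmax
```

The cost is an fp32 logits tensor over the full vocabulary, which is why recipes that adopt this (e.g., ScaleRL, cited above) upcast only the head rather than running the whole model in fp32.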