KEP 5836: Add KEP for Scheduler Preemption for In-Place Pod Resize (alpha)#5932

Open
natasha41575 wants to merge 1 commit into kubernetes:master from natasha41575:scheduler-preemption

Conversation

@natasha41575
Contributor

KEP 5836: Scheduler Preemption for In-Place Pod Resize (alpha)

I have a PoC for the implementation here: kubernetes/kubernetes#137206

Issue link: #5836

Targeting 1.37 for alpha

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Feb 23, 2026
@k8s-ci-robot k8s-ci-robot added kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. labels Feb 23, 2026
@github-project-automation github-project-automation bot moved this to Needs Triage in SIG Scheduling Feb 23, 2026
@k8s-ci-robot k8s-ci-robot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Feb 23, 2026
@natasha41575 natasha41575 changed the title KEP 5836: Scheduler Preemption for In-Place Pod Resize (alpha) KEP 5836: Add KEP for Scheduler Preemption for In-Place Pod Resize (alpha) Feb 23, 2026
@natasha41575
Contributor Author

/sig scheduling
/cc @tallclair @dom4ha @sanposhiho @macsko

@natasha41575 force-pushed the scheduler-preemption branch 2 times, most recently from 292e58e to 13f327f on February 23, 2026 at 22:58
4. **Snapshot Adjustment**: Temporarily remove the `Deferred` pod from the node snapshot to calculate required space
accurately.
5. **Calculate Victims**: Identify suitable preemption victims and then restore the pod to the snapshot.
6. **Update Status**: Report the success or failure of the preemption attempt in the pod status. If preemption is insufficient
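For illustration, the excerpted steps can be sketched against a toy node snapshot. All types, names, and numbers below are hypothetical, not actual scheduler internals: the resizing pod is removed from a copy of the snapshot, lower-priority victims are collected until the new request fits, the original snapshot is left untouched (playing the role of "restoring" the pod), and the outcome is reported.

```go
package main

import (
	"fmt"
	"sort"
)

// Hypothetical, simplified stand-ins for scheduler snapshot types.
type pod struct {
	name     string
	request  int64 // CPU request in millicores
	priority int32
}

type snapshot struct {
	allocatable int64
	pods        []pod
}

func used(pods []pod) int64 {
	var sum int64
	for _, p := range pods {
		sum += p.request
	}
	return sum
}

// selectVictims sketches steps 4-6: temporarily drop the Deferred pod,
// evict lower-priority pods (lowest priority first) until newRequest fits,
// and report whether preemption would be sufficient. The input snapshot is
// never mutated, so the pod is implicitly "restored" afterwards.
func selectVictims(s snapshot, resizeName string, newRequest int64) ([]string, bool) {
	var rest []pod
	var resizing pod
	for _, p := range s.pods {
		if p.name == resizeName {
			resizing = p
			continue
		}
		rest = append(rest, p)
	}
	sort.Slice(rest, func(i, j int) bool { return rest[i].priority < rest[j].priority })

	free := s.allocatable - used(rest)
	var victims []string
	for _, v := range rest {
		if free >= newRequest {
			break
		}
		if v.priority >= resizing.priority {
			break // only lower-priority pods may be preempted
		}
		victims = append(victims, v.name)
		free += v.request
	}
	return victims, free >= newRequest
}

func main() {
	s := snapshot{
		allocatable: 4000, // 4-CPU node, in millicores
		pods: []pod{
			{name: "resizer", request: 1000, priority: 100}, // wants to grow to 2500m
			{name: "low", request: 1500, priority: 10},
			{name: "mid", request: 1500, priority: 50},
		},
	}
	victims, ok := selectVictims(s, "resizer", 2500)
	fmt.Println(victims, ok) // [low] true: evicting "low" frees enough room
}
```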
Member

Reporting success is probably not the end of the story, as we should also describe how the resource accounting is done until the pod actually gets resized by the kubelet. The scheduler needs to somehow assume (see also the scheduler's assume process in binding) that the kubelet will accept the resize, and keep the newly requested resources blocked in memory.

Contributor Author

The scheduler will automatically be blocked from using the newly requested resources, because the scheduler uses max(spec, allocated, actual) when determining fit. Since a resize request is made by adjusting the spec resources, these resources are already considered "reserved" from the scheduler's perspective. I have a note about this under the Kubelet behavior section below. I can add one here too.
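For illustration, the max(spec, allocated, actual) rule described here can be sketched as follows. The types and field names are hypothetical, not the real k8s.io/component-helpers code:

```go
package main

import "fmt"

// Illustrative stand-in for a container's three request views. Because the
// fit calculation takes the maximum, a pending (Deferred) upsize in spec
// is already counted as reserved on the node.
type containerRequests struct {
	Spec      int64 // desired CPU (millicores), i.e. the resize target
	Allocated int64 // what the kubelet has admitted
	Actual    int64 // what is currently running
}

// effectiveRequest mirrors the max(spec, allocated, actual) rule.
func effectiveRequest(r containerRequests) int64 {
	m := r.Spec
	if r.Allocated > m {
		m = r.Allocated
	}
	if r.Actual > m {
		m = r.Actual
	}
	return m
}

func main() {
	// A Deferred upsize: spec raised to 1500m while the kubelet still allocates 1000m.
	d := containerRequests{Spec: 1500, Allocated: 1000, Actual: 1000}
	fmt.Println(effectiveRequest(d)) // prints 1500: the upsize is already "reserved"
}
```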

Member

We probably can take the max, but only after the scheduler accepts the resize (in the assume process mentioned above). But at the time the scheduler notices the Deferred state, it will create some pod-alloc-to-schedule, yet it can't reserve resources until it initiates preemption (sets the nomination) and later accepts it (assumes the resources). The assumption can be dropped once it receives a notification about the actual resize.

Another complication is that, unlike initial pod scheduling, I suspect the requested resources may change during this process. It's not obvious how the scheduler should handle such a situation. Let's consider that there are effectively two resize requests, A and B. The scheduler could initiate preemption for A and set the nomination (blocking resources until preemption finishes) or even assume (waiting for the resize on the kubelet side). When the subsequent update B comes, we probably can't just update the cache, but will need to repeat the scheduling process without dropping the reservation hold for A.

Member

@dom4ha - are you sure this is really true and this is how scheduler works?

Looking at the code, it seems that for "deferred" resize requests, the resources that scheduler assumes are indeed max(spec, allocated, actual) as Natasha wrote above:
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/component-helpers/resource/helpers.go#L294-L299

So if we have deferred upsize, scheduler is actually already using those resources as requested (the pod is obviously already assigned to node).

That BTW means that in such a case the total requested resources in NodeInfo may actually exceed the allocatable resources:
https://github.com/kubernetes/kubernetes/blob/master/pkg/scheduler/framework/types.go#L427

For me it's debatable whether it really does what it is supposed to do. But if that works as intended, then Natasha seems to be right that we actually don't need to do anything...

@macsko - FYI and for your thoughts too

Contributor Author
@natasha41575 Feb 27, 2026

the total requested resources in NodeInfo may actually exceed the allocatable resources

Yes, this is intended. Deferred resizes are supposed to (and do) block the capacity, which may make the Node look "overcommitted" if you just look at the NodeInfo.

We actually discussed it in this issue, which we closed as WAI: kubernetes/kubernetes#135107 (comment). I think it is necessary that we keep doing this to prevent additional race conditions between kubelet/scheduler (and other components).

This means that the behavior today is that resizes are prioritized over scheduling new pods, so I don't think we need to change anything to make the scheduler reserve the resources.
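A toy illustration of this behavior, with all numbers hypothetical: counting a Deferred upsize at its spec value can push a node's tracked requests past allocatable, so a new pod is rejected even though the kubelet has not actuated the resize yet.

```go
package main

import "fmt"

// fits is a deliberately minimal stand-in for the scheduler's resource
// fit check: a new pod fits only if tracked requests plus its own request
// stay within allocatable.
func fits(allocatable, requested, newPod int64) bool {
	return requested+newPod <= allocatable
}

func main() {
	allocatable := int64(4000)   // 4-CPU node, in millicores
	allocated := int64(3000)     // requests the kubelet has actually admitted
	deferredDelta := int64(1500) // a pod's spec grew by 1500m; resize still Deferred

	// Scheduler view: the deferred delta already counts toward the total.
	requested := allocated + deferredDelta
	fmt.Println(requested > allocatable)           // true: the node looks "overcommitted"
	fmt.Println(fits(allocatable, requested, 500)) // false: the resize wins over the new pod
}
```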

Member

I see. I thought that we could change the behavior to not block scheduling of other things, precisely to avoid kubernetes/kubernetes#135107 (comment). So assuming that blocking resources (in the "deferred" state) is desired, then indeed there is nothing else that we'd need to do here.

There is still the question of how the scheduler could protect workloads for which it found a suitable placement (as part of the Workload Aware Scheduling process). Since the kubelet is the SoT, the scheduler needs to proactively attempt to reserve the necessary resources on the kubelet side, not the other way around. This is exactly how we attempted to address this problem in [1], so we seem to be aligned with WAS as well.

[1] https://docs.google.com/document/d/1VdE-yCre69q1hEFt-yxL4PBKt9qOjVtasOmN-XK12XU/edit?resourcekey=0-KJc-YvU5zheMz92uUOWm4w&tab=t.0#heading=h.clxvs733rwyx

Contributor Author

Thanks for the confirmation. I updated the wording under Kubelet Interaction and Resource Reservation to state explicitly that the scheduler does not need to do anything to reserve the resources; that they will already be reserved due to the nature of how the scheduler determines fit.


### ResizeUnschedulable Pod Condition

The Scheduler will own a new `ResizeUnschedulable` condition type in the pod status. This condition will be present only after
Member

IIUC the scheduler should keep trying to preempt after a preemption attempt failure, similarly to how it keeps trying to schedule a pod which does not fit?

What should happen after preemption is triggered? Shall the pod wait in the unschedulable queue until the preemption is finished? Note that in the case of real pod scheduling, the pod is still considered unschedulable (here unresizable) until all victim pods have been removed from the node. It has nominatedNodeName set to indicate that it's intended to use the resources the victims are about to free up. Once the resources are freed up and were not taken by any higher priority pod in the meantime, the nomination turns into an assignment and the unschedulable (here unresizable) condition is cleared.

I suspect we want the resize to behave exactly like pod scheduling, which means, for instance, setting nominatedNodeName to indicate that this pod is going to use the newly requested resources once the victims disappear. Without nomination, it would be impossible to distinguish whether the pod is unresizable because there are no viable victims or because it is waiting for preemption to finish.

Note that it's important to keep the pod-under-resize in the queue until the preemption is finished, because it may always happen that a place reserved for the resize (via nomination) gets taken in the meantime by a higher priority pod, so the scheduler may need to identify other victims or clear the nomination to indicate that the resize is indeed not feasible.

Contributor Author
@natasha41575 Feb 27, 2026

IIUC the scheduler should keep trying to preempt after a preemption attempt failure, similarly to how it keeps trying to schedule a pod which does not fit?

Yes, I think so. But I think it should try to be smart enough to only reattempt if something on the node changes that could cause preemption to succeed. I'll think about this and try to enumerate such conditions in the KEP.

For the rest of your comment, it seems related to the discussion above (#5932 (comment)).

Is the only reason to keep the pod in the scheduling queue until the preemption is finished to reserve the resources for the resize? If so, let's continue the discussion above, because I still think we don't need to do anything more (unless I am missing something else). The resources are already reserved, because (1) the pod is already bound to the node and (2) the scheduler already today assumes resources as max(spec, allocated, actual):

https://github.com/kubernetes/kubernetes/blob/3f2ebc50eecfaeda23df4435dc82422fa65425ed/staging/src/k8s.io/component-helpers/resource/helpers.go#L287-L291

where spec in this case is the deferred upscaled resources.

Member

Yes, I think so. But I think it should try to be smart enough to only reattempt if something on the node changes that could cause preemption to succeed. I'll think about this and try to enumerate such conditions in the KEP.

This is exactly how initial scheduling works as well when pods are unschedulable, so there should be no need for any logic dedicated to resizing.

I think there will be time to discuss details, but just to highlight: we should also take into consideration cases like when a higher priority pod needs to use resources reserved by the pod-during-resize. We probably don't want to kill the original pod if it's not necessary. The scheduler also needs to keep track of which pod is preempting which, so there are a few details we will have to look at more closely.

Contributor Author
@natasha41575 Mar 24, 2026

This is exactly how initial scheduling works as well when pods are unschedulable, so there should be no need for any logic dedicated to resizing.

Thank you! I took a quick look at the existing scheduling logic, and IIUC by having deferred resizes be treated as a FitError, this should allow us to reuse the same active queue / backoff queue / unschedulable pods management logic that is already implemented by the scheduler today, is that correct? I've added a section to the KEP to clarify this explicitly.

I think there will be time to discuss details, but just to highlight: we should also take into consideration cases like when a higher priority pod needs to use resources reserved by the pod-during-resize. We probably don't want to kill the original pod if it's not necessary.

I think I see the concern -- essentially the race between a Deferred resize and a new, higher-priority pod arriving. Is my understanding correct?

From the scheduler's view, once the spec is updated, the resources are already reserved as we discussed in #5932 (comment); it doesn't matter from the scheduler's perspective whether the resize has been actuated by the kubelet. I think that means that if a higher-priority pod comes in and the only way to fit it is by taking the space the Deferred pod is trying to grow into, the standard preemption logic applies. This might mean the resizing pod itself gets evicted if it’s the best victim candidate. I agree that we would rather not kill pods unnecessarily, but for the specific case that you are describing, this behavior seems consistent with the rest of the scheduler's logic. Perhaps I can add this to the known risks.

As a side note, this is why I have started conversations about introducing integration with node upsizing; it can prevent unnecessary disruptions such as this.

The scheduler also needs to keep track of which pod is preempting which, so there are a few details we will have to look at more closely.

How is this handled with scheduler preemption today? I saw the nominatedPods map, so I wonder if that is the right way to track resizes as well, or do you think a separate mechanism is needed? I can see that conflating nominated pods with resizes might be a bit confusing.

@wojtek-t wojtek-t self-assigned this Feb 27, 2026
@natasha41575 force-pushed the scheduler-preemption branch from 13f327f to 378d27f on March 24, 2026 at 18:46
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: natasha41575
Once this PR has been reviewed and has the lgtm label, please ask for approval from wojtek-t and additionally assign dom4ha for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@natasha41575
Contributor Author

/assign @dom4ha
