KEP 5836: Add KEP for Scheduler Preemption for In-Place Pod Resize (alpha) #5932
natasha41575 wants to merge 1 commit into kubernetes:master
Conversation
/sig scheduling
> 4. **Snapshot Adjustment**: Temporarily remove the `Deferred` pod from the node snapshot to calculate required space accurately.
> 5. **Calculate Victims**: Identify suitable preemption victims and then restore the pod to the snapshot.
> 6. **Update Status**: Report the success or failure of the preemption attempt in the pod status. If preemption is insufficient
Reporting success is probably not the end of the story; we should also describe how resource accounting is done until the pod is actually resized by the kubelet. The scheduler needs to assume somehow (see also the scheduler's assume process in binding) that the kubelet will accept the resize, and keep blocking the newly requested resources in memory.
The scheduler will automatically be blocked from using the newly requested resources, because the scheduler uses max(spec, allocated, actual) when determining fit. Since a resize request is made by adjusting the spec resources, these resources are already considered "reserved" from the scheduler's perspective. I have a note about this under the Kubelet behavior section below. I can add one here too.
We probably can take the max, but only after the scheduler accepts the resize (in the mentioned assume process). At the time the scheduler notices the Deferred state, it will create some pod-alloc-to-schedule, but it can't reserve resources until it initiates preemption (sets the nomination) and later accepts it (assumes the resources). The assumption can be dropped once it receives a notification about the actual resize.
Another complication is that, unlike initial pod scheduling, I suspect the requested resources may change during this process. It's not obvious how the scheduler should handle such a situation. Consider two effective resize requests, A and B. The scheduler could initiate preemption for A and set the nomination (blocking resources until preemption finishes) or even assume it (waiting for the resize on the kubelet side). When the subsequent update B arrives, we probably can't just update the cache; we will need to repeat the scheduling process, but without dropping the reservation held for A.
@dom4ha - are you sure this is really true and this is how scheduler works?
Looking at the code, it seems that for "deferred" resize requests, the resources that scheduler assumes are indeed max(spec, allocated, actual) as Natasha wrote above:
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/component-helpers/resource/helpers.go#L294-L299
So if we have deferred upsize, scheduler is actually already using those resources as requested (the pod is obviously already assigned to node).
That BTW means that in such a case the total requested resources in NodeInfo may actually exceed the allocatable resources:
https://github.com/kubernetes/kubernetes/blob/master/pkg/scheduler/framework/types.go#L427
For me it's debatable whether it really does what it is supposed to do. But if that works as intended, then Natasha seems to be right that we actually don't need to do anything...
@macsko - FYI and for your thoughts too
the total request resources in NodeInfo may actually exceed the allocatable resources
Yes, this is intended. Deferred resizes are supposed to (and do) block the capacity, which may look like the Node is "overcommitted" if you just look at the NodeInfo.
We actually discussed it in this issue, which we closed as WAI: kubernetes/kubernetes#135107 (comment). I think it is necessary that we keep doing this to prevent additional race conditions between kubelet/scheduler (and other components).
This means that the behavior today is that resizes are prioritized over scheduling new pods, so I don't think we need to change anything to make the scheduler reserve the resources.
I see. I thought that we could change the behavior to not block scheduling of other things, precisely to avoid kubernetes/kubernetes#135107 (comment). So assuming that blocking resources (in the "deferred" state) is desired, then indeed there is nothing else that we'd need to do here.
There is still a question of how the scheduler could protect workloads for which it found a suitable placement (as part of the Workload Aware Scheduling process). Since the kubelet is the SoT, the scheduler needs to proactively attempt to reserve the necessary resources on the kubelet side, not the other way around. This is exactly how we attempted to address this problem in [1], so we seem to be aligned with WAS as well.
Thanks for the confirmation. I updated the wording under Kubelet Interaction and Resource Reservation to state explicitly that the scheduler does not need to do anything to reserve the resources; that they will already be reserved due to the nature of how the scheduler determines fit.
> ### ResizeUnschedulable Pod Condition
>
> The Scheduler will own a new `ResizeUnschedulable` condition type in the pod status. This condition will be present only after
IIUC scheduler should keep trying to preempt after a preemption attempt failure similarly to how it keeps trying to schedule a pod which does not fit?
What should happen after preemption is triggered? Should the pod wait in the unschedulable queue until the preemption finishes? Note that in the case of real pod scheduling, the pod is still considered unschedulable (here unresizable) until all victim pods have been removed from the node. It has nominatedNodeName set to indicate that it's intended to use the resources the victims are about to free up. Once the resources are freed up and have not been taken by any higher priority pod in the meantime, the nomination turns into an assignment and the unschedulable (here unresizable) condition is cleared.
I suspect we want the resize to behave exactly like pod scheduling, which means, for instance, setting nominatedNodeName to indicate that this pod is going to use the newly requested resources once the victims disappear. Without nomination, it would be impossible to distinguish whether the pod is unresizable because there are no viable victims or because it is waiting for preemption to finish.
Note that it's important to keep tracking the pod-under-resize until the preemption is finished, because it may always happen that a place reserved for the resize (using nomination) is taken in the meantime by a higher priority pod, so the scheduler may need to identify other victims, or clear the nomination to indicate that the resize is indeed not feasible.
IIUC scheduler should keep trying to preempt after a preemption attempt failure similarly to how it keeps trying to schedule a pod which does not fit?
Yes, I think so. But I think it should try to be smart enough to only reattempt if something on the node changes that could cause preemption to succeed. I'll think about this and try to enumerate such conditions in the KEP.
For the rest of your comment, it seems related to the discussion above (#5932 (comment)).
Is the only reason to keep the pods in the scheduling queue until the preemption is finished to reserve the resources for the resize? If so, let's continue the discussion above, because I still think we don't need to do anything more (unless I am missing something else). The resources are already reserved, because (1) the pod is already bound to the node and (2) the scheduler already today assumes resources as max(spec, allocated, actual):
where spec in this case is the deferred upscaled resources.
Yes, I think so. But I think it should try to be smart enough to only reattempt if something on the node changes that could cause preemption to succeed. I'll think about this and try to enumerate such conditions in the KEP.
This is exactly how the initial scheduling works as well in case pods are unschedulable, so there should be no need to have any logic dedicated to the resizing.
I think there will be time to discuss the details, but just to highlight: we should also take into consideration cases where a higher priority pod needs to use resources reserved by the pod-during-resize. We probably don't want to kill the original pod if it's not necessary. The scheduler also needs to keep track of which pod is preempting what, so there are a few details we will have to look at more closely.
This is exactly how the initial scheduling works as well in case pods are unschedulable, so there should be no need to have any logic dedicated to the resizing.
Thank you! I took a quick look at the existing scheduling logic, and IIUC by having deferred resizes be treated as a FitError, this should allow us to reuse the same active queue / backoff queue / unschedulable pods management logic that is already implemented by the scheduler today, is that correct? I've added a section to the KEP to clarify this explicitly.
I think there will be time to discuss details, but just to highlight, we should also take into consideration cases like when a higher priority pod needs to use resources reserved by the pod-during-resize. We probably don't want to kill the orignal pod if it's not necessary.
I think I see the concern -- essentially the race between a Deferred resize and a new, higher-priority pod arriving. Is my understanding correct?
From the scheduler's view, once the spec is updated, the resources are already reserved as we discussed in #5932 (comment); it doesn't matter from the scheduler's perspective whether the resize has been actuated by the kubelet. I think that means that if a higher-priority pod comes in and the only way to fit it is by taking the space the Deferred pod is trying to grow into, the standard preemption logic applies. This might mean the resizing pod itself gets evicted if it's the best victim candidate. I agree that we would rather not kill pods unnecessarily, but for the specific case that you are describing, this behavior seems consistent with the rest of the scheduler's logic. Perhaps I can add this to the known risks.
As a side note, this is why I have started conversations about introducing integration with node upsizing; it can prevent unnecessary disruptions such as this.
Scheduler also needs to keep track which pods is preempting what, so there are a few details we will have to have a closer look.
How is this handled with scheduler preemption today? I saw the nominatedPods map, so I wonder if that is the right way to track for resize as well, or do you think a separate mechanism is needed? I can see that conflating nominated pods with resize might be a bit confusing.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: natasha41575. Needs approval from an approver in each of the affected files.
/assign @dom4ha
KEP 5836: Scheduler Preemption for In-Place Pod Resize (alpha)
I have a PoC for the implementation here: kubernetes/kubernetes#137206
Issue link: #5836
Targeting 1.37 for alpha