Sequencing of actions #6
Description
More of an observation than an out-and-out issue, but while experimenting with your lambda something struck me. In the main procedure you currently:
- Evaluate minimum state of the cluster
- Evaluate if scaling-in is appropriate on reserved memory grounds
- Evaluate if scaling-in is appropriate on reserved CPU grounds
- Terminate any already-drained host instances
So in our scenario I have two hosts running tasks (one per availability zone), i.e. a cluster with a minimum and desired size of 2. I then manually trigger our scale-up alarm (which sets the desired size to 4) to force some new hosts to be added. The first time I run the lambda, it sees the new hosts as surplus and starts draining them.
The interesting part is the next run of the lambda, where the minimum-state evaluation (asg_on_min_state) doesn't take into account the number of draining instances about to be terminated (i.e. the desired size is still deemed to be 4). As we step further into the code (and since I'm currently only experimenting in a dev environment, where the containers I'm running do pretty much nothing), the reserved memory evaluation then actually decides to start draining one of my remaining minimum of 2 active boxes! Finally, it also terminates the two instances that were set to draining on the first run.
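To make the failure mode concrete, here's a minimal sketch (hypothetical function and parameter names, not the lambda's actual code) of how a min-state check misfires when draining instances still count toward the desired size:

```python
def asg_above_min(desired_size, min_size, draining_count, discount_draining=False):
    """Return True if the ASG is considered above its minimum size.

    With discount_draining=False (the behaviour described in this issue),
    hosts that are draining and about to be terminated still inflate the
    desired size, so the cluster looks eligible for further scale-in.
    """
    effective = desired_size - (draining_count if discount_draining else 0)
    return effective > min_size

# Scenario from above: min/desired of 2, scale-up alarm sets desired to 4,
# the first lambda run drains the 2 surplus hosts.
# Second run, current behaviour: 4 > 2, so scale-in evaluation proceeds
# and ends up draining one of the 2 remaining active hosts.
print(asg_above_min(desired_size=4, min_size=2, draining_count=2))  # True

# Discounting the draining hosts: 4 - 2 = 2, which is not above min 2.
print(asg_above_min(4, 2, 2, discount_draining=True))  # False
```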
With this kind of scenario in mind, would it not make sense to make termination of drained instances one of the first things to be evaluated rather than the last (or at least do it before evaluating minimum state, so we have a desired-size figure that reflects hosts that aren't on the cusp of termination)? Alternatively, the asg_on_min_state procedure could take into account instances that are draining or about to be terminated.
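The reordering suggested above could be sketched roughly like this (a toy model only: `cluster` is a plain dict standing in for the real ASG/ECS state, and the reservation checks are stubbed with an assumed threshold):

```python
def run_cycle(cluster):
    """Hypothetical reordering of the main procedure: terminate
    already-drained hosts *before* the min-state check, so the desired
    size reflects only the instances that will actually remain."""
    # 1. Terminate any already-drained hosts first (was the last step).
    cluster["desired"] -= cluster["draining"]
    cluster["draining"] = 0

    # 2. The min-state check now sees an accurate desired size.
    if cluster["desired"] <= cluster["min"]:
        return "at-min"

    # 3. Reserved memory/CPU checks (stubbed: a reservation fraction
    #    below 0.5 is taken to mean the cluster can lose a host).
    if cluster["reserved_memory"] < 0.5 or cluster["reserved_cpu"] < 0.5:
        cluster["draining"] += 1
        return "draining-one"
    return "no-action"

# Second lambda run from the scenario above: desired 4, min 2, two hosts
# already draining, near-zero reservation because the containers are idle.
state = {"desired": 4, "min": 2, "draining": 2,
         "reserved_memory": 0.0, "reserved_cpu": 0.0}
print(run_cycle(state))  # "at-min": no active host is wrongly drained
```

With termination moved to the front, the second run settles at the minimum of 2 active hosts instead of draining one of them.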