13 Commits

Author SHA1 Message Date
Mikkel Oscar Lyderik Larsen
5a543781d7 Introduce context for collector interface
Signed-off-by: Mikkel Oscar Lyderik Larsen <mikkel.larsen@zalando.de>
2024-05-21 14:00:31 +02:00
Alexander Didenko
c4a1e08fdf Update HPA to autoscaling/v2 apiVersion (#551) 2023-05-03 23:01:19 +02:00
Mikkel Oscar Lyderik Larsen
e2a922f110 Move generic functions to controller package
Move defaultTimeZone to a flag.

Signed-off-by: Mikkel Oscar Lyderik Larsen <mikkel.larsen@zalando.de>
2023-04-06 17:53:17 +02:00
zlawrence
c60253e90c add test for end date in incorrect format
Signed-off-by: zlawrence <zak.lawrence@zalando.de>
2022-12-21 14:08:44 +01:00
zlawrence
680e0feea1 fix nil pointer dereference
Signed-off-by: zlawrence <zak.lawrence@zalando.de>
2022-12-21 14:08:41 +01:00
zlawrence
8ba4d19c35 Add end date support for Repeating schedule. Added tests 2022-12-13 12:53:05 +01:00
zlawrence
37bf73c7fe Add end date support for OneTime schedule 2022-12-12 20:02:15 +01:00
Jonathan Juares Beber
fd4ead837e Make the number of ramp steps configurable
In #371 we introduced steps to make scaling up possible even when
the HPA requires at least a 10% change. The problem is that steps of 10%
might not be sufficient for some specific scaling scenarios.

For example, an application targeting 12 pods and using a
ScalingSchedule with the value of 10000 to achieve that will require a
target of 833. With 10 ramp steps the 90% bucket will return a metric of
9000 and the HPA calculates (9000/833) 10.8 pods, rounding to 11 pods.
Once the metric reaches the point of returning 100% it won't be
effective, since the change between the current number of pods (11) and
the desired one (12) is less than 10%.

This commit does not try to tackle this problem completely, since the
10% rule is not fixed, might change among different clusters and is also
dependent on the value given to each ScalingSchedule. Therefore, this
commit makes the number of ramp steps configurable via the
`--scaling-schedule-ramp-steps` config flag, defaulting to 10.

Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
2021-10-22 15:35:11 +02:00
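The arithmetic above can be sketched in a few lines. This is only an illustration of the bucketing described in the commit, not the adapter's actual implementation; `bucketedValue` and the exact rounding are assumptions:

```go
package main

import (
	"fmt"
	"math"
)

// bucketedValue snaps the ramp progress (0.0..1.0) down to the nearest
// 1/rampSteps bucket and scales the schedule's target value by it.
// Hypothetical helper for illustration only.
func bucketedValue(rampProgress float64, target int64, rampSteps int) int64 {
	bucket := math.Floor(rampProgress*float64(rampSteps)) / float64(rampSteps)
	return int64(bucket * float64(target))
}

func main() {
	const scheduleValue = 10000 // ScalingSchedule value from the example above
	const hpaTarget = 833.0     // per-pod target chosen to reach 12 pods

	for _, steps := range []int{10, 20} {
		// Near the end of the ramp, just before the schedule returns 100%.
		metric := bucketedValue(0.99, scheduleValue, steps)
		pods := math.Ceil(float64(metric) / hpaTarget)
		fmt.Printf("steps=%d metric=%d -> %.0f pods\n", steps, metric, pods)
	}
}
```

Running this prints `steps=10 metric=9000 -> 11 pods` and `steps=20 metric=9500 -> 12 pods`: with the default of 10 steps the final jump from 11 to 12 pods falls under the HPA's 10% tolerance, while a larger `--scaling-schedule-ramp-steps` value lets the target be reached during the ramp.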
Jonathan Juares Beber
8fe330941a Use 10 buckets on ScalingSchedule ramp-up/down
The HPA has a feature to not scale up or down when the change in the
metric is less than 10%:

> We'll skip scaling if the ratio is sufficiently close to 1.0 (within a
> globally-configurable tolerance, from the
> `--horizontal-pod-autoscaler-tolerance` flag, which defaults to 0.1).

This could lead to pods scaling to up to 10% below the target for
ScalingSchedules and then never reaching the actual value, if the metric
calculated before was already within 10% of the target.

This commit uses 10 fixed buckets for scaling; this way we know the
metric returned during a scaling event is at least 10% more than a
previous one calculated during the ramp-up period. The same is valid
for scaling down during a ramp-down.

Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
2021-09-30 19:01:59 +02:00
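For reference, the tolerance rule quoted above boils down to a ratio check. A minimal sketch (not the real HPA controller code), assuming the default tolerance of 0.1:

```go
package main

import (
	"fmt"
	"math"
)

// Default of --horizontal-pod-autoscaler-tolerance.
const tolerance = 0.1

// withinTolerance reports whether the HPA would skip scaling because the
// usage ratio is sufficiently close to 1.0.
func withinTolerance(currentValue, targetValue float64) bool {
	ratio := currentValue / targetValue
	return math.Abs(1.0-ratio) <= tolerance
}

func main() {
	fmt.Println(withinTolerance(1090, 1000)) // true  -> scaling skipped
	fmt.Println(withinTolerance(1200, 1000)) // false -> scaling happens
}
```

A schedule that ramps in increments smaller than 10% can therefore stall just below its final value, which is what the fixed buckets avoid.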
Alexey Ermakov
c5411c74b7 Scheduled scaling: add an optional scaling window
Signed-off-by: Alexey Ermakov <alexey.ermakov@zalando.de>
2021-09-24 15:33:49 +02:00
Jonathan Juares Beber
0ad7296d56 Switch Schedule optional fields to pointers
The `Date` and `Period` fields inside the `Schedule` type of the
`[Cluster]ScalingSchedule` CRDs are optional, but in their current
configuration the generated clients can't update or create these
resources since the empty fields do not pass validation.

This commit updates the `[Cluster]ScalingSchedule.Schedule[*]` `Date`
and `Period` fields to pointers; this way a null value is not validated
and the clients are able to update and create resources. It also updates
the collector code and tests to reflect the change.

Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
2021-06-30 17:09:48 +02:00
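A simplified, hypothetical version of what such a change looks like; the real types live in the adapter's API package and the field and type names here are assumptions:

```go
package v1

// SchedulePeriod describes a repeating time window. Field names are
// illustrative, not the actual CRD schema.
type SchedulePeriod struct {
	StartTime string   `json:"startTime"`
	Days      []string `json:"days"`
	Timezone  string   `json:"timezone"`
}

// Schedule is a single entry of a [Cluster]ScalingSchedule spec.
type Schedule struct {
	Type string `json:"type"`

	// Optional fields: as pointers, a missing value is serialized as
	// absent/null instead of a zero value that fails CRD validation,
	// so generated clients can create and update the resource.
	Date   *string         `json:"date,omitempty"`
	Period *SchedulePeriod `json:"period,omitempty"`

	DurationMinutes int `json:"durationMinutes"`
}
```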
Sandor Szücs
64a6b85c73 fix typo Saturday
Signed-off-by: Sandor Szücs <sandor.szuecs@zalando.de>
2021-05-26 12:03:07 +02:00
Jonathan Juares Beber
a382dbfe7b Create ScalingSchedule collector
This commit adds two new collectors to the adapter:
- ClusterScalingScheduleCollector; and
- ScalingScheduleCollector

Also, it introduces the required collector plugins, initialization
logic in the server startup, documentation and a deployment example
(including the helm chart). A new config flag, `-scaling-schedule`, is
created and allows enabling and disabling the collection of such
metrics. It's disabled by default.

These collectors provide the required logic to utilise the CRDs introduced in
the #284 pull request. They make use of the Kubernetes go-client
implementations of a [Store][0] and [Reflector][1].

[0]: https://pkg.go.dev/k8s.io/client-go/tools/cache#Store
[1]: https://pkg.go.dev/k8s.io/client-go/tools/cache#Reflector

Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
2021-05-21 14:29:11 +02:00
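The Store/Reflector pattern referenced above looks roughly like the sketch below. It watches Pods purely for brevity; the actual collectors watch the `[Cluster]ScalingSchedule` resources through their generated clients:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// The Reflector lists and watches the resource and keeps the Store in
	// sync, so collector reads can be served from the local cache.
	lw := cache.NewListWatchFromClient(client.CoreV1().RESTClient(), "pods", metav1.NamespaceAll, fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	reflector := cache.NewReflector(lw, &corev1.Pod{}, store, 30*time.Second)

	stopCh := make(chan struct{})
	go reflector.Run(stopCh)

	// Illustration only: real code would wait on an informer's HasSynced /
	// WaitForCacheSync instead of sleeping.
	time.Sleep(5 * time.Second)

	for _, obj := range store.List() {
		pod := obj.(*corev1.Pod)
		fmt.Println(pod.Namespace + "/" + pod.Name)
	}
	close(stopCh)
}
```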