A bug was introduced while trying to solve the following issue:
> In scenarios where two HPAs reference the same object but have
> different metric calculations (e.g. ingresses with different weights),
> kube-metrics-adapter calculates the two metrics but always overwrites
> one of them when saving to the store.
This commit fixes an issue where, without the added `return`, the system
continued in an invalid state in which `labels2metric` wasn't properly
initialized, causing the system to panic.
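A minimal sketch of the failure pattern, with hypothetical names except
`labels2metric`, which is taken from the change:

```go
package main

import "fmt"

// metricsByLabels maps a labels hash to a metric value; the type and
// variable names here are assumptions for illustration.
type metricsByLabels map[string]float64

var store = map[string]metricsByLabels{}

func insert(object, labelsHash string, value float64) {
	labels2metric, ok := store[object]
	if !ok {
		// First metric for this object: initialize the inner map and
		// return. Without this return, labels2metric below stays nil
		// and the assignment panics ("assignment to entry in nil map").
		store[object] = metricsByLabels{labelsHash: value}
		return
	}
	labels2metric[labelsHash] = value
}

func main() {
	insert("ingress-a", "hash-1", 42)
	insert("ingress-a", "hash-2", 7)
	fmt.Println(store)
}
```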
Signed-off-by: Katyanna Moura <amelie.kn@gmail.com>
Having the test filename prefixed exactly with the name of the file
under test helps tools identify the correlation.
Signed-off-by: Katyanna Moura <amelie.kn@gmail.com>
The metrics stores, both the custom and the external one, make heavy
use of maps of strings to strings. This leads to a presumably confusing,
or at least hard to read, codebase.
This commit tries to increase the readability and maintainability of the
metric stores by using custom types. It has no effect on performance or
functionality; it's purely aesthetic.
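A sketch of the idea, with hypothetical type names (the actual names in
the stores may differ):

```go
package main

import "fmt"

// Named types replace the raw map-of-strings signatures; this is purely
// a readability change with no runtime cost. Names are assumptions.
type (
	// MetricLabels holds the label key/value pairs attached to a metric.
	MetricLabels map[string]string
	// MetricsByLabels indexes metric values by a labels hash.
	MetricsByLabels map[string]int64
)

// lookup reads exactly the same data as a plain map[string]int64 would;
// only the signature got clearer.
func lookup(m MetricsByLabels, labelsHash string) (int64, bool) {
	v, ok := m[labelsHash]
	return v, ok
}

func main() {
	m := MetricsByLabels{"abc123": 10}
	fmt.Println(lookup(m, "abc123"))
}
```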
Co-authored-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
Signed-off-by: Katyanna Moura <amelie.kn@gmail.com>
The current implementation of the metrics store for custom metrics uses
just the object name as key. In scenarios where two HPAs reference the
same object but have different metric calculations (e.g. ingresses with
different weights), kube-metrics-adapter calculates the two metrics but
always overwrites one of them when saving to the store.
This commit implements the use of a labels hash in the custom metrics
store. This way, metrics are identified not just by the referenced
object, but also by their selectors.
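A minimal sketch of the approach, assuming a hypothetical helper name
(the adapter's actual hashing code may differ):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
	"strings"
)

// hashLabels derives a stable key from a metric's label selectors, so
// two HPAs targeting the same object with different selectors get
// distinct store entries. Sketch only; not the adapter's real code.
func hashLabels(labels map[string]string) string {
	pairs := make([]string, 0, len(labels))
	for k, v := range labels {
		pairs = append(pairs, k+"="+v)
	}
	// Map iteration order is randomized; sort for a deterministic hash.
	sort.Strings(pairs)
	sum := sha256.Sum256([]byte(strings.Join(pairs, ",")))
	return fmt.Sprintf("%x", sum)
}

func main() {
	a := hashLabels(map[string]string{"backend-weight": "a"})
	b := hashLabels(map[string]string{"backend-weight": "b"})
	// Different selectors now yield different keys, so neither metric
	// overwrites the other in the store.
	fmt.Println(a != b) // true
}
```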
Co-authored-by: Katyanna Moura <amelie.kn@gmail.com>
Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
This commit upgrades all the Kubernetes dependencies to 1.21.5. It
includes regenerated clients and specs.
Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
In #371 we introduced ramp steps to make scaling up possible even when
the HPA enforces a minimum 10% change. The problem is that 10% steps
might not be sufficient for some specific scaling scenarios.
For example, an application targeting 12 pods and using a
ScalingSchedule with a value of 10000 to achieve that will require a
target of 833. With 10 ramp steps the 90% bucket will return a metric of
9000, and the HPA calculates (9000/833) 10.8 pods, rounding up to 11
pods. Once the metric reaches the point of returning 100%, it won't be
effective, since the change between the current number of pods (11) and
the desired one (12) is less than 10%.
This commit does not try to tackle this problem completely, since the
10% rule is not fixed: it might change among different clusters and also
depends on the value given to each ScalingSchedule. Therefore, this
commit makes the number of ramp steps configurable via the
`--scaling-schedule-ramp-steps` config flag, defaulting to 10.
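A sketch of the bucketing arithmetic; the function and its signature are
hypothetical, only the numbers come from the example above:

```go
package main

import (
	"fmt"
	"math"
)

// rampValue buckets the ramp progress into rampSteps discrete levels, so
// consecutive metric values differ by at least 1/rampSteps of the
// schedule value. Illustrative only; the adapter's code differs.
func rampValue(progress float64, value, rampSteps int64) int64 {
	bucket := int64(math.Floor(progress * float64(rampSteps)))
	return bucket * value / rampSteps
}

func main() {
	// Reproduces the arithmetic above: with 10 steps, the 90% bucket of
	// a 10000 ScalingSchedule value returns 9000, and the HPA computes
	// ceil(9000/833) = 11 pods. The final jump from 11 to 12 pods is
	// then within the 10% tolerance and never happens; more ramp steps
	// make the last bucket land close enough to reach 12 pods earlier.
	fmt.Println(rampValue(0.9, 10000, 10)) // 9000
	fmt.Println(math.Ceil(9000.0 / 833.0)) // 11
}
```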
Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
The HPA has a feature to skip scaling up or down when the change in the
metric is less than 10%:
> We'll skip scaling if the ratio is sufficiently close to 1.0 (within a
> globally-configurable tolerance, from the
> `--horizontal-pod-autoscaler-tolerance` flag, which defaults to 0.1).
This could lead to pods scaling up to 10% less than the target for
ScalingSchedules, and then not scaling to the actual value if the metric
calculated before was already within 10% of the target.
This commit uses 10 fixed buckets for scaling; this way we know the
metric returned during a scaling event is at least 10% higher than a
previous one calculated during the ramp-up period. The same holds for
scaling down during a ramp-down.
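A sketch of the tolerance rule quoted above, in simplified form (not the
upstream HPA controller code):

```go
package main

import (
	"fmt"
	"math"
)

// withinTolerance mirrors the quoted rule: scaling is skipped when the
// usage ratio is sufficiently close to 1.0.
func withinTolerance(metricValue, targetValue, tolerance float64) bool {
	ratio := metricValue / targetValue
	return math.Abs(1.0-ratio) <= tolerance
}

func main() {
	// A metric 9% above target is ignored with the default 0.1
	// tolerance, which is why a ramp step smaller than the tolerance
	// can strand a scale-up just short of the final replica count.
	fmt.Println(withinTolerance(1090, 1000, 0.1)) // true: scaling skipped
	fmt.Println(withinTolerance(1200, 1000, 0.1)) // false: scales
}
```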
Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
This commit updates the k8s dependencies from v0.20.5 to v0.20.9. It
also bundles other dependency updates.
Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
This commit adds tests to the ZMON package for handling the key used in
the metrics query.
Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>