The bug was introduced while trying to solve the following issue:
> In scenarios where two HPAs reference the same object but have
> different metric calculations (e.g. ingresses with different weights),
> kube-metrics-adapter calculates the two metrics but always overrides
> one of the metrics when saving it to the store.
This commit fixes an issue where, without the added `return`, the system
continued in an invalid state in which `labels2metric` wasn't properly
initialized, causing the system to panic.
Signed-off-by: Katyanna Moura <amelie.kn@gmail.com>
Naming the test file with exactly the same prefix as the file under
test helps tools identify the correlation between the two.
Signed-off-by: Katyanna Moura <amelie.kn@gmail.com>
The metrics stores, both the custom and the external one, make heavy
use of maps of strings to strings. This leads to a presumably
confusing, or at least hard to read, codebase.
This commit tries to increase the readability and maintainability of
the metrics stores by using custom types. It has no effect on
performance or functionality; it's purely aesthetic.
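As an illustration only (the type names below are assumptions, not the
ones introduced by this commit), the idea is to replace bare
map-of-string signatures with named types so the meaning of each key
is visible at the call site:

    package store

    // Named types document what each string actually represents,
    // replacing bare map[string]string-style signatures.
    type (
        metricName     string
        namespacedName string
        selectorHash   string
    )

    // Metrics collected for one referenced object, keyed by the hash
    // of the metric's label selector.
    type selectorHashMetrics map[selectorHash]collectedMetric

    // All metrics of one name, keyed by the referenced object.
    type objectMetrics map[namespacedName]selectorHashMetrics

    // The full store: metric name -> object -> selector hash -> value.
    type customMetricsStore map[metricName]objectMetrics

    type collectedMetric struct {
        value int64
    }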
Co-authored-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
Signed-off-by: Katyanna Moura <amelie.kn@gmail.com>
The current implementation of the metrics store for custom metrics uses
just the object name as the key. In scenarios where two HPAs reference
the same object but have different metric calculations (e.g. ingresses
with different weights), kube-metrics-adapter calculates the two metrics
but always overrides one of the metrics when saving it to the store.
This commit implements the use of a labels hash in the custom metrics
store. This way, metrics are identified not just by the referenced
object, but also by its selectors.
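A minimal sketch of how such a labels hash can be derived (the helper
name and hashing choice are assumptions; the commit only states that a
hash of the labels becomes part of the key):

    package store

    import (
        "hash/fnv"
        "sort"
        "strconv"
    )

    // hashLabels derives a deterministic hash from the metric's label
    // selectors, so two HPAs that reference the same object with
    // different selectors end up under different keys in the store.
    func hashLabels(labels map[string]string) string {
        keys := make([]string, 0, len(labels))
        for k := range labels {
            keys = append(keys, k)
        }
        sort.Strings(keys) // map iteration order is random; sort for stability
        h := fnv.New64a()
        for _, k := range keys {
            h.Write([]byte(k))
            h.Write([]byte{0})
            h.Write([]byte(labels[k]))
            h.Write([]byte{0})
        }
        return strconv.FormatUint(h.Sum64(), 16)
    }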
Co-authored-by: Katyanna Moura <amelie.kn@gmail.com>
Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
This commit upgrades all the Kubernetes dependencies to 1.21.5. It
includes regenerated clients and specs.
Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
In #371 we introduced ramp steps to make scaling up possible even when
the HPA enforces a minimum 10% change. The problem is that the fixed
number of steps might not be sufficient for some specific scaling
scenarios.
For example, an application targeting 12 pods and using a
ScalingSchedule with the value of 10000 to achieve that will require a
target of 833. With 10 ramp steps the 90% bucket will return a metric of
9000 and the HPA calculates (9000/833) 10.8 pods, rounding up to 11 pods.
Once the metric reaches the time to return 100% it won't be effective,
since the change between the current number of pods (11) and the desired
one (12) is less than 10% (1/11, roughly 9%).
This commit does not try to tackle this problem completely, since the
10% rule is not fixed, might change among different clusters and is also
dependent on the value given to each ScalingSchedule. Therefore, this
commit makes the number of ramp steps configurable via the
`--scaling-schedule-ramp-steps` config flag, defaulting to 10.
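A rough sketch of the bucketing described above (function and parameter
names are assumptions, not the adapter's actual ramp logic), showing how
the step count controls how finely the ramp-up is divided:

    package scheduledscaling

    import "time"

    // rampedValue buckets the full schedule value into `steps` discrete
    // levels during the ramp-up window, so the HPA sees a sequence of
    // intermediate metric values instead of a single jump to the target.
    func rampedValue(full int64, elapsed, rampDuration time.Duration, steps int64) int64 {
        if elapsed >= rampDuration {
            return full
        }
        bucket := int64(float64(elapsed) / float64(rampDuration) * float64(steps))
        return full * bucket / steps
    }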
Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>