Compare commits

...

20 Commits

Author SHA1 Message Date
Mikkel Oscar Lyderik Larsen
10dc42b9e9 Merge pull request #401 from zalando-incubator/update-deps-02-14
Update dependencies
2022-02-14 15:05:19 +01:00
Mikkel Oscar Lyderik Larsen
20db592a62 Use cdp-runtime for build (#398)
Signed-off-by: Mikkel Oscar Lyderik Larsen <mikkel.larsen@zalando.de>
2022-02-14 14:41:57 +01:00
Mikkel Oscar Lyderik Larsen
d0a3ea1934 Update dependencies
Signed-off-by: Mikkel Oscar Lyderik Larsen <mikkel.larsen@zalando.de>
2022-02-14 14:36:06 +01:00
Mikkel Oscar Lyderik Larsen
837e7b9c5d Update deps 2022 02 (#391)
* Update example

Signed-off-by: Mikkel Oscar Lyderik Larsen <mikkel.larsen@zalando.de>

* Update dependencies 2022-02

Signed-off-by: Mikkel Oscar Lyderik Larsen <mikkel.larsen@zalando.de>
2022-02-01 18:13:48 +01:00
Katyanna Moura
9d8359b580 Merge pull request #380 from zalando-incubator/fix-metrics-map
Fix panic when trying to add key to nil map
2022-01-12 11:36:53 +01:00
Katyanna Moura
71a8e99d1f Fix panic when trying to add key to nil map
The bug was introduced while trying to solve the following issue:
> In scenarios where two HPAs reference the same object but have
different metric calculations (e.g. ingresses with different weights),
kube-metrics-adapter calculates the two metrics but always overrides
one of the metrics when saving it to the store.

This commit fixes an issue where, without the added `return`, the system
continued in an invalid state where `labels2metric` wasn't properly
initialized, causing the system to panic.

Signed-off-by: Katyanna Moura <amelie.kn@gmail.com>
2022-01-12 11:29:38 +01:00
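A minimal sketch of the class of bug described above, reusing the commit's `labels2metric` name; the store types here are assumptions, not the adapter's actual code:

```go
package main

import "fmt"

// store maps a metric name to a labels-hash -> value map,
// loosely mirroring the metrics store described above.
var store = map[string]map[string]float64{}

func put(metric, labelsHash string, value float64) {
	labels2metric, ok := store[metric]
	if !ok {
		// First entry for this metric: allocate the inner map.
		store[metric] = map[string]float64{labelsHash: value}
		// Without this return, execution would fall through to the
		// assignment below while labels2metric is still nil, and Go
		// panics with "assignment to entry in nil map".
		return
	}
	labels2metric[labelsHash] = value
}

func main() {
	put("queue_length", "hash-a", 42)
	put("queue_length", "hash-a", 43)
	fmt.Println(store["queue_length"]["hash-a"]) // 43
}
```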
Katyanna Moura
4d4c70c553 Fix test filename to match related file
Having the test filename prefix exactly match the name of the file under
test helps tools identify the correlation.

Signed-off-by: Katyanna Moura <amelie.kn@gmail.com>
2022-01-12 11:29:38 +01:00
Katyanna Moura
2ccd6903d9 Merge pull request #376 from zalando-incubator/custommetricsselector
Use labels hash in the custom metrics store and metrics store 💅
2021-11-25 10:24:44 +01:00
Jonathan Juares Beber
f58db31f98 Define stronger types for metrics store
The metrics stores, both the custom and the external one, make heavy use
of maps of strings to strings. This leads to a confusing, or at least
hard-to-read, codebase.

This commit tries to increase the readability and maintainability of the
metric stores by using custom types. It has no effect on
performance or functionality; it's purely aesthetic.

Co-authored-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
Signed-off-by: Katyanna Moura <amelie.kn@gmail.com>
2021-11-24 17:41:03 +01:00
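A minimal sketch of the pattern this commit describes, assuming illustrative type names rather than the adapter's actual ones:

```go
package main

import "fmt"

// Named types make map signatures self-documenting compared to
// map[string]map[string]string.
type (
	MetricName  string
	LabelsHash  string
	MetricValue float64
)

type customMetricsStore map[MetricName]map[LabelsHash]MetricValue

func main() {
	s := customMetricsStore{
		"queue_length": {"ing-weight-a": 100, "ing-weight-b": 250},
	}
	fmt.Println(s["queue_length"]["ing-weight-b"]) // 250
}
```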
Jonathan Juares Beber
0bf8f5dd0f Use labels hash in the custom metrics store
The current implementation of the metrics store for custom metrics uses
just the object name as the key. In scenarios where two HPAs reference the
same object but have different metric calculations (e.g. ingresses with
different weights), kube-metrics-adapter calculates the two metrics but
always overrides one of the metrics when saving it to the store.

This commit implements the use of a labels hash in the custom metrics
store. This way, metrics are identified not just by the referenced
object, but also by its selectors.

Co-authored-by: Katyanna Moura <amelie.kn@gmail.com>
Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
2021-11-23 18:11:18 +01:00
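A sketch of the keying idea, assuming a hash over sorted label pairs; the adapter's actual hashing scheme may differ:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// labelsHash derives a stable key from a label selector, so two HPAs
// referencing the same object with different selectors get distinct
// store entries instead of overwriting each other.
func labelsHash(labels map[string]string) string {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys) // map iteration order is random; sort for stability
	h := fnv.New64a()
	for _, k := range keys {
		fmt.Fprintf(h, "%s=%s;", k, labels[k])
	}
	return fmt.Sprintf("%x", h.Sum64())
}

func main() {
	a := labelsHash(map[string]string{"backend": "a", "weight": "80"})
	b := labelsHash(map[string]string{"backend": "b", "weight": "20"})
	fmt.Println(a != b) // true: the two HPAs no longer collide
}
```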
Jonathan Juares Beber
e600557636 Merge pull request #375 from zalando-incubator/kube1.21
Upgrade Kubernetes and its friends to 1.21
2021-10-28 10:35:11 +02:00
Jonathan Juares Beber
0f06db7cdf Upgrade Kubernetes and its friends to 1.21
This commit upgrades all the Kubernetes dependencies to 1.21.5. It
includes regenerated clients and specs.

Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
2021-10-27 20:58:18 +02:00
Jonathan Juares Beber
1c9038b2cc Merge pull request #374 from zalando-incubator/configurable-buckets
Make the number of ramp steps configurable
2021-10-25 10:21:32 +02:00
Jonathan Juares Beber
fd4ead837e Make the number of ramp steps configurable
In #371 we introduced steps to make scaling up possible even when
the HPA enforces a minimum 10% change. The problem is that 10% might not be
sufficient for some specific scaling scenarios.

For example, an application targeting 12 pods and using a
ScalingSchedule with the value of 10000 to achieve that will require a
target of 833. With 10 ramp steps the 90% bucket will return a metric of
9000 and the HPA calculates (9000/833) 10.8 pods, rounding to 11 pods.
Once the metric reaches the time to return 100% it won't be
effective, since the change between the current number of pods (11) and the
desired one (12) is less than 10%.

This commit does not try to tackle this problem completely, since the
10% rule is not fixed, might change among different clusters, and also
depends on the value given to each ScalingSchedule. Therefore, this
commit makes the number of ramp steps configurable via the
`--scaling-schedule-ramp-steps` config flag, defaulting to 10.

Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
2021-10-22 15:35:11 +02:00
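The arithmetic above can be checked with a short sketch of the documented HPA formula, `ceil(current * metric/target)` with the default 0.1 tolerance; the numbers are the commit's:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	const (
		target    = 833.0 // per-pod target, so 10000/833 ≈ 12 pods
		tolerance = 0.1   // --horizontal-pod-autoscaler-tolerance default
	)
	current := 1.0
	for _, metric := range []float64{9000, 10000} { // 90% bucket, then 100%
		ratio := metric / (target * current)
		if math.Abs(ratio-1.0) <= tolerance {
			// 10000/(833*11) ≈ 1.09: within tolerance, stuck at 11 pods.
			fmt.Printf("metric %v: ratio %.3f within tolerance, no scaling\n", metric, ratio)
			continue
		}
		current = math.Ceil(current * ratio)
		fmt.Printf("metric %v: scaled to %v pods\n", metric, current)
	}
}
```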
Mikkel Oscar Lyderik Larsen
f46f801811 Merge pull request #373 from zalando-incubator/json-path-array
Handle more complex array in json path
2021-10-19 10:04:04 +02:00
Mikkel Oscar Lyderik Larsen
4acdf72ef7 Handle more complex array in json path
Signed-off-by: Mikkel Oscar Lyderik Larsen <mikkel.larsen@zalando.de>
2021-10-14 09:34:15 +02:00
Jonathan Juares Beber
e04cd10bfc Merge pull request #371 from zalando-incubator/scaling-chunks
Use 10 buckets on ScalingSchedule ramp-up/down
2021-10-01 10:39:57 +02:00
Jonathan Juares Beber
8fe330941a Use 10 buckets on ScalingSchedule ramp-up/down
The HPA has a feature to skip scaling up or down when the change in the
metric is less than 10%:

> We'll skip scaling if the ratio is sufficiently close to 1.0 (within a
> globally-configurable tolerance, from the
> `--horizontal-pod-autoscaler-tolerance` flag, which defaults to 0.1).

This could lead to pods scaling up to 10% less than the target for
ScalingSchedules and then not scaling to the actual value, if the metric
calculated before was within 10% of the target.

This commit uses 10 fixed buckets for scaling; this way we know the
metric returned during a scaling event is at least 10% higher than a
previous one calculated during the ramp-up period. The same is valid
for scaling down during a ramp-down.

Signed-off-by: Jonathan Juares Beber <jonathanbeber@gmail.com>
2021-09-30 19:01:59 +02:00
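A quick check of the guarantee this relies on: with 10 buckets the smallest step between consecutive bucket values is about 11%, just above the default tolerance (a sketch only; the real computation is `scaledValue` further down this diff):

```go
package main

import "fmt"

func main() {
	const steps, value = 10, 10000.0
	for k := 1; k < steps; k++ {
		prev := float64(k) * value / steps
		next := float64(k+1) * value / steps
		// Smallest relative change is between the last two buckets:
		// 10000/9000 - 1 ≈ 11.1%, still above the 10% tolerance.
		fmt.Printf("bucket %d -> %d: +%.1f%%\n", k, k+1, (next/prev-1)*100)
	}
}
```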
aermakov-zalando
0730c6ef1e Merge pull request #370 from zalando-incubator/schedule-scaling-window
Scheduled scaling: scale up/down slowly
2021-09-24 15:47:44 +02:00
Alexey Ermakov
c5411c74b7 Scheduled scaling: add an optional scaling window
Signed-off-by: Alexey Ermakov <alexey.ermakov@zalando.de>
2021-09-24 15:33:49 +02:00
25 changed files with 2364 additions and 667 deletions
+1 -1
@@ -1,4 +1,4 @@
FROM registry.opensource.zalan.do/library/alpine-3.12:latest
FROM registry.opensource.zalan.do/library/alpine-3.13:latest
LABEL maintainer="Team Teapot @ Zalando SE <team-teapot@zalando.de>"
RUN apk add --no-cache tzdata
+51 -5
@@ -720,12 +720,63 @@ The `ScalingSchedule` and `ClusterScalingSchedule` collectors allow
collecting time-based metrics from the respective CRD objects specified
in the HPA.
These collectors are disabled by default; you have to start the server
with the `--scaling-schedule` flag to enable them. Remember to deploy the CRDs
`ScalingSchedule` and `ClusterScalingSchedule` and to allow the service
account used by the server to read, watch and list them.
### Supported metrics
| Metric | Description | Type | K8s Versions |
| ---------- | -------------- | ------- | -- |
| ObjectName | The metric is calculated and stored for each `ScalingSchedule` and `ClusterScalingSchedule` referenced in the HPAs | `ScalingSchedule` and `ClusterScalingSchedule` | `>=1.16` |
### Ramp-up and ramp-down feature
To avoid abrupt scaling due to time-based metrics, the `ScalingSchedule`
collector has a feature to ramp the metric up and down over a
specific period of time. The duration of the scaling window can be
configured individually in the `[Cluster]ScalingSchedule` object, via
the option `scalingWindowDurationMinutes`, or globally for all scheduled
events, and defaults to a globally configured value if not specified.
The default for the latter is set to 10 minutes, but can be changed
using the `--scaling-schedule-default-scaling-window` flag.

This spreads the scaling events out, creating less load on the other
components and helping the rest of the metrics (like the CPU ones) to
adjust as well.
The [HPA algorithm][algo-details] does not make changes if the metric
change is less than the tolerance specified by the
`horizontal-pod-autoscaler-tolerance` flag:
> We'll skip scaling if the ratio is sufficiently close to 1.0 (within a
> globally-configurable tolerance, from the
> `--horizontal-pod-autoscaler-tolerance` flag, which defaults to 0.1).
With that in mind, the ramp-up and ramp-down feature divides the scaling
over the specified period of time into buckets, trying to achieve changes
bigger than the configured tolerance. The number of buckets defaults to
10 and can be configured by the `--scaling-schedule-ramp-steps` flag.
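Concretely, from the `scaledValue` implementation later in this diff, the metric returned during a ramp for a schedule of value $v$ is

$$\left\lfloor \frac{e}{w} \cdot n \right\rfloor \cdot \frac{v}{n}$$

where $e$ is the elapsed time inside the scaling window, $w$ is the window duration and $n$ is the number of ramp steps.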
**Important**: note that the ramp-up and ramp-down feature can lead to
deployments achieving less than the specified number of pods, due to the
HPA 10% change rule and the ceiling function applied to the desired
number of pods (check the [algorithm details][algo-details]). This
varies with the metric configured for `ScalingSchedule` events, the
number of pods, and the configured `horizontal-pod-autoscaler-tolerance`
flag of your Kubernetes installation. [This gist][gist] contains code to
simulate the situations a deployment with different numbers of pods and
a metric of 10000 can face, with 10 buckets (max of 90% of the metric
returned) and 5 buckets (max of 80% of the metric returned). The ramp-up
and ramp-down feature can be disabled by setting
`--scaling-schedule-default-scaling-window` to 0; abrupt scaling can then
be handled via [scaling policies][policies].
[algo-details]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details
[gist]: https://gist.github.com/jonathanbeber/37f1f918ab7ef6101c6ce56cc2cef3a2
[policies]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#scaling-policies
### Example
This is an example of using the ScalingSchedule collectors to collect
@@ -826,8 +877,3 @@ Note that these numbers of pods consider just these custom
metrics; the normal HPA behavior still applies, such as: in case of
multiple metrics the biggest number of pods is the one utilized, HPA max
and min replica configuration, autoscaling policies, etc.
These collectors are disabled by default; you have to start the server
with the `--scaling-schedule` flag to enable them. Remember to deploy the CRDs
`ScalingSchedule` and `ClusterScalingSchedule` and to allow the service
account used by the server to read, watch and list them.
+3 -1
@@ -1,7 +1,9 @@
version: "2017-09-20"
pipeline:
- id: build
overlay: ci/golang
vm_config:
type: linux
image: "cdp-runtime/go"
cache:
paths:
- /go/pkg/mod # pkg cache for Go modules
+7 -2
@@ -1,10 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.5.0
controller-gen.kubebuilder.io/version: v0.8.0
creationTimestamp: null
name: clusterscalingschedules.zalando.org
spec:
@@ -37,6 +36,11 @@ spec:
spec:
description: ScalingScheduleSpec is the spec part of the ScalingSchedule.
properties:
scalingWindowDurationMinutes:
description: Fade the scheduled values in and out over this many minutes.
If unset, the default per-cluster value will be used.
format: int64
type: integer
schedules:
description: Schedules is the list of schedules for this ScalingSchedule
resource. All the schedules defined here will result on the value
@@ -96,6 +100,7 @@ spec:
value:
description: The metric value that will be returned for the
defined schedule.
format: int64
type: integer
required:
- durationMinutes
+7 -2
@@ -1,10 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.5.0
controller-gen.kubebuilder.io/version: v0.8.0
creationTimestamp: null
name: scalingschedules.zalando.org
spec:
@@ -37,6 +36,11 @@ spec:
spec:
description: ScalingScheduleSpec is the spec part of the ScalingSchedule.
properties:
scalingWindowDurationMinutes:
description: Fade the scheduled values in and out over this many minutes.
If unset, the default per-cluster value will be used.
format: int64
type: integer
schedules:
description: Schedules is the list of schedules for this ScalingSchedule
resource. All the schedules defined here will result on the value
@@ -96,6 +100,7 @@ spec:
value:
description: The metric value that will be returned for the
defined schedule.
format: int64
type: integer
required:
- durationMinutes
+2 -2
@@ -1,5 +1,5 @@
FROM registry.opensource.zalan.do/stups/alpine:latest
MAINTAINER Team Teapot @ Zalando SE <team-teapot@zalando.de>
FROM registry.opensource.zalan.do/library/alpine-3.13:latest
LABEL maintainer="Team Teapot @ Zalando SE <team-teapot@zalando.de>"
# add binary
ADD build/linux/custom-metrics-consumer /
+3 -1
@@ -11,7 +11,9 @@ import (
func metricsHandler(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(200)
_, err := w.Write([]byte(fmt.Sprintf(`{"queue": {"length": %d}}`, size)))
log.Fatalf("failed to write: %v", err)
if err != nil {
log.Fatalf("failed to write: %v", err)
}
}
var (
+113 -25
@@ -1,35 +1,123 @@
module github.com/zalando-incubator/kube-metrics-adapter
require (
github.com/NYTimes/gziphandler v1.0.1 // indirect
github.com/aws/aws-sdk-go v1.40.22
github.com/go-openapi/spec v0.20.3
github.com/aws/aws-sdk-go v1.42.52
github.com/influxdata/influxdb-client-go v0.2.0
github.com/influxdata/line-protocol v0.0.0-20201012155213-5f565037cbc9 // indirect
github.com/kubernetes-sigs/custom-metrics-apiserver v0.0.0-20201216091021-1b9fa998bbaa
github.com/mailru/easyjson v0.7.7 // indirect
github.com/prometheus/client_golang v1.11.0
github.com/prometheus/common v0.26.0
github.com/prometheus/client_golang v1.12.1
github.com/prometheus/common v0.32.1
github.com/sirupsen/logrus v1.8.1
github.com/spf13/cobra v1.1.3
github.com/spyzhov/ajson v0.4.2
github.com/spf13/cobra v1.3.0
github.com/spyzhov/ajson v0.7.1
github.com/stretchr/testify v1.7.0
github.com/szuecs/routegroup-client v0.18.3
github.com/zalando-incubator/cluster-lifecycle-manager v0.0.0-20180921141935-824b77fb1f84
go.uber.org/zap v1.13.0 // indirect
golang.org/x/crypto v0.0.0-20201124201722-c8d3bf9c5392 // indirect
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
k8s.io/api v0.20.9
k8s.io/apimachinery v0.20.9
k8s.io/apiserver v0.20.9
k8s.io/client-go v0.20.9
k8s.io/code-generator v0.20.9
k8s.io/component-base v0.20.9
github.com/szuecs/routegroup-client v0.21.0
github.com/zalando-incubator/cluster-lifecycle-manager v0.0.0-20220201095549-bbdeecaa4fc1
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8
k8s.io/api v0.23.0
k8s.io/apimachinery v0.23.0
k8s.io/apiserver v0.23.0
k8s.io/client-go v0.23.0
k8s.io/code-generator v0.23.0
k8s.io/component-base v0.23.0
k8s.io/klog v1.0.0
k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd
k8s.io/metrics v0.20.9
sigs.k8s.io/controller-tools v0.5.0
k8s.io/kube-openapi v0.0.0-20220124234850-424119656bbf
k8s.io/metrics v0.21.9
sigs.k8s.io/controller-tools v0.8.0
)
go 1.16
require (
github.com/NYTimes/gziphandler v1.1.1 // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver v3.5.1+incompatible // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/coreos/go-semver v0.3.0 // indirect
github.com/coreos/go-systemd/v22 v22.3.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/emicklei/go-restful v2.14.3+incompatible // indirect
github.com/evanphx/json-patch v4.12.0+incompatible // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/felixge/httpsnoop v1.0.2 // indirect
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/go-logr/logr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.19.6 // indirect
github.com/go-openapi/swag v0.21.1 // indirect
github.com/gobuffalo/flect v0.2.3 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt v3.2.2+incompatible // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/go-cmp v0.5.7 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/googleapis/gnostic v0.5.5 // indirect
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
github.com/grpc-ecosystem/grpc-gateway v1.16.0 // indirect
github.com/imdario/mergo v0.3.6 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/influxdata/line-protocol v0.0.0-20210922203350-b1ad95c89adf // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.12 // indirect
github.com/mattn/go-isatty v0.0.14 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/spf13/afero v1.8.1 // indirect
github.com/spf13/pflag v1.0.5 // indirect
go.etcd.io/etcd/api/v3 v3.5.1 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.1 // indirect
go.etcd.io/etcd/client/v3 v3.5.0 // indirect
go.opentelemetry.io/contrib v0.20.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.20.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.20.0 // indirect
go.opentelemetry.io/otel v0.20.0 // indirect
go.opentelemetry.io/otel/exporters/otlp v0.20.0 // indirect
go.opentelemetry.io/otel/metric v0.20.0 // indirect
go.opentelemetry.io/otel/sdk v0.20.0 // indirect
go.opentelemetry.io/otel/sdk/export/metric v0.20.0 // indirect
go.opentelemetry.io/otel/sdk/metric v0.20.0 // indirect
go.opentelemetry.io/otel/trace v0.20.0 // indirect
go.opentelemetry.io/proto/otlp v0.7.0 // indirect
go.uber.org/atomic v1.7.0 // indirect
go.uber.org/multierr v1.6.0 // indirect
go.uber.org/zap v1.19.0 // indirect
golang.org/x/crypto v0.0.0-20220131195533-30dcbda58838 // indirect
golang.org/x/mod v0.5.1 // indirect
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd // indirect
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c // indirect
golang.org/x/sys v0.0.0-20220209214540-3681064d5158 // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 // indirect
golang.org/x/tools v0.1.9 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa // indirect
google.golang.org/grpc v1.43.0 // indirect
google.golang.org/protobuf v1.27.1 // indirect
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.0.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
k8s.io/apiextensions-apiserver v0.23.0 // indirect
k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c // indirect
k8s.io/klog/v2 v2.40.1 // indirect
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.27 // indirect
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 // indirect
sigs.k8s.io/yaml v1.3.0 // indirect
)
go 1.17
+1006 -123
File diff suppressed because it is too large
File diff suppressed because it is too large
+11 -1
@@ -1,6 +1,8 @@
package v1
import (
"time"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
@@ -37,6 +39,10 @@ type ClusterScalingSchedule struct {
// ScalingScheduleSpec is the spec part of the ScalingSchedule.
// +k8s:deepcopy-gen=true
type ScalingScheduleSpec struct {
// Fade the scheduled values in and out over this many minutes. If unset, the default per-cluster value will be used.
// +optional
ScalingWindowDurationMinutes *int64 `json:"scalingWindowDurationMinutes,omitempty"`
// Schedules is the list of schedules for this ScalingSchedule
// resource. All the schedules defined here will result on the value
// to the same metric. New metrics require a new ScalingSchedule
@@ -59,7 +65,11 @@ type Schedule struct {
// returned for the defined schedule.
DurationMinutes int `json:"durationMinutes"`
// The metric value that will be returned for the defined schedule.
Value int `json:"value"`
Value int64 `json:"value"`
}
func (in Schedule) Duration() time.Duration {
return time.Duration(in.DurationMinutes) * time.Minute
}
// Defines if the schedule is a OneTime schedule or
@@ -1,3 +1,4 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
@@ -147,6 +148,11 @@ func (in *ScalingScheduleList) DeepCopyObject() runtime.Object {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScalingScheduleSpec) DeepCopyInto(out *ScalingScheduleSpec) {
*out = *in
if in.ScalingWindowDurationMinutes != nil {
in, out := &in.ScalingWindowDurationMinutes, &out.ScalingWindowDurationMinutes
*out = new(int64)
**out = **in
}
if in.Schedules != nil {
in, out := &in.Schedules, &out.Schedules
*out = make([]Schedule, len(*in))
+27 -7
@@ -20,6 +20,7 @@ package versioned
import (
"fmt"
"net/http"
zalandov1 "github.com/zalando-incubator/kube-metrics-adapter/pkg/client/clientset/versioned/typed/zalando.org/v1"
discovery "k8s.io/client-go/discovery"
@@ -55,22 +56,41 @@ func (c *Clientset) Discovery() discovery.DiscoveryInterface {
// NewForConfig creates a new Clientset for the given config.
// If config's RateLimiter is not set and QPS and Burst are acceptable,
// NewForConfig will generate a rate-limiter in configShallowCopy.
// NewForConfig is equivalent to NewForConfigAndClient(c, httpClient),
// where httpClient was generated with rest.HTTPClientFor(c).
func NewForConfig(c *rest.Config) (*Clientset, error) {
configShallowCopy := *c
// share the transport between all clients
httpClient, err := rest.HTTPClientFor(&configShallowCopy)
if err != nil {
return nil, err
}
return NewForConfigAndClient(&configShallowCopy, httpClient)
}
// NewForConfigAndClient creates a new Clientset for the given config and http client.
// Note the http client provided takes precedence over the configured transport values.
// If config's RateLimiter is not set and QPS and Burst are acceptable,
// NewForConfigAndClient will generate a rate-limiter in configShallowCopy.
func NewForConfigAndClient(c *rest.Config, httpClient *http.Client) (*Clientset, error) {
configShallowCopy := *c
if configShallowCopy.RateLimiter == nil && configShallowCopy.QPS > 0 {
if configShallowCopy.Burst <= 0 {
return nil, fmt.Errorf("burst is required to be greater than 0 when RateLimiter is not set and QPS is set to greater than 0")
}
configShallowCopy.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(configShallowCopy.QPS, configShallowCopy.Burst)
}
var cs Clientset
var err error
cs.zalandoV1, err = zalandov1.NewForConfig(&configShallowCopy)
cs.zalandoV1, err = zalandov1.NewForConfigAndClient(&configShallowCopy, httpClient)
if err != nil {
return nil, err
}
cs.DiscoveryClient, err = discovery.NewDiscoveryClientForConfig(&configShallowCopy)
cs.DiscoveryClient, err = discovery.NewDiscoveryClientForConfigAndClient(&configShallowCopy, httpClient)
if err != nil {
return nil, err
}
@@ -80,11 +100,11 @@ func NewForConfig(c *rest.Config) (*Clientset, error) {
// NewForConfigOrDie creates a new Clientset for the given config and
// panics if there is an error in the config.
func NewForConfigOrDie(c *rest.Config) *Clientset {
var cs Clientset
cs.zalandoV1 = zalandov1.NewForConfigOrDie(c)
cs.DiscoveryClient = discovery.NewDiscoveryClientForConfigOrDie(c)
return &cs
cs, err := NewForConfig(c)
if err != nil {
panic(err)
}
return cs
}
// New creates a new Clientset for the given RESTClient.
@@ -74,7 +74,10 @@ func (c *Clientset) Tracker() testing.ObjectTracker {
return c.tracker
}
var _ clientset.Interface = &Clientset{}
var (
_ clientset.Interface = &Clientset{}
_ testing.FakeClient = &Clientset{}
)
// ZalandoV1 retrieves the ZalandoV1Client
func (c *Clientset) ZalandoV1() zalandov1.ZalandoV1Interface {
@@ -99,7 +99,7 @@ func (c *FakeClusterScalingSchedules) Update(ctx context.Context, clusterScaling
// Delete takes name of the clusterScalingSchedule and deletes it. Returns an error if one occurs.
func (c *FakeClusterScalingSchedules) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
_, err := c.Fake.
Invokes(testing.NewRootDeleteAction(clusterscalingschedulesResource, name), &zalandoorgv1.ClusterScalingSchedule{})
Invokes(testing.NewRootDeleteActionWithOptions(clusterscalingschedulesResource, name, opts), &zalandoorgv1.ClusterScalingSchedule{})
return err
}
@@ -105,7 +105,7 @@ func (c *FakeScalingSchedules) Update(ctx context.Context, scalingSchedule *zala
// Delete takes name of the scalingSchedule and deletes it. Returns an error if one occurs.
func (c *FakeScalingSchedules) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
_, err := c.Fake.
Invokes(testing.NewDeleteAction(scalingschedulesResource, c.ns, name), &zalandoorgv1.ScalingSchedule{})
Invokes(testing.NewDeleteActionWithOptions(scalingschedulesResource, c.ns, name, opts), &zalandoorgv1.ScalingSchedule{})
return err
}
@@ -19,6 +19,8 @@ limitations under the License.
package v1
import (
"net/http"
v1 "github.com/zalando-incubator/kube-metrics-adapter/pkg/apis/zalando.org/v1"
"github.com/zalando-incubator/kube-metrics-adapter/pkg/client/clientset/versioned/scheme"
rest "k8s.io/client-go/rest"
@@ -44,12 +46,28 @@ func (c *ZalandoV1Client) ScalingSchedules(namespace string) ScalingScheduleInte
}
// NewForConfig creates a new ZalandoV1Client for the given config.
// NewForConfig is equivalent to NewForConfigAndClient(c, httpClient),
// where httpClient was generated with rest.HTTPClientFor(c).
func NewForConfig(c *rest.Config) (*ZalandoV1Client, error) {
config := *c
if err := setConfigDefaults(&config); err != nil {
return nil, err
}
client, err := rest.RESTClientFor(&config)
httpClient, err := rest.HTTPClientFor(&config)
if err != nil {
return nil, err
}
return NewForConfigAndClient(&config, httpClient)
}
// NewForConfigAndClient creates a new ZalandoV1Client for the given config and http client.
// Note the http client provided takes precedence over the configured transport values.
func NewForConfigAndClient(c *rest.Config, h *http.Client) (*ZalandoV1Client, error) {
config := *c
if err := setConfigDefaults(&config); err != nil {
return nil, err
}
client, err := rest.RESTClientForConfigAndClient(&config, h)
if err != nil {
return nil, err
}
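A usage sketch of the regenerated constructor pair (minimal and illustrative: the kubeconfig path and error handling are assumptions, while `rest.HTTPClientFor` and `NewForConfigAndClient` come from the diff above):

```go
package main

import (
	"log"

	versioned "github.com/zalando-incubator/kube-metrics-adapter/pkg/client/clientset/versioned"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	// Build one HTTP client and share its transport across clientsets,
	// which is the point of the generated NewForConfigAndClient variant.
	httpClient, err := rest.HTTPClientFor(cfg)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := versioned.NewForConfigAndClient(cfg, httpClient)
	if err != nil {
		log.Fatal(err)
	}
	_ = cs.ZalandoV1() // typed access to ScalingSchedules / ClusterScalingSchedules
}
```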
+16 -1
@@ -72,10 +72,25 @@ func (g *JSONPathMetricsGetter) GetMetric(metricsURL url.URL) (float64, error) {
return 0, err
}
if len(nodes) != 1 {
if len(nodes) == 0 {
return 0, fmt.Errorf("unexpected json: expected single numeric or array value")
}
if len(nodes) > 1 {
if g.aggregator == nil {
return 0, fmt.Errorf("no aggregator function has been specified")
}
values := make([]float64, 0, len(nodes))
for _, node := range nodes {
v, err := node.GetNumeric()
if err != nil {
return 0, fmt.Errorf("unexpected json: did not find numeric or array value '%s': %w", nodes, err)
}
values = append(values, v)
}
return g.aggregator(values...), nil
}
node := nodes[0]
if node.IsArray() {
if g.aggregator == nil {
@@ -51,6 +51,13 @@ func TestJSONPathMetricsGetter(t *testing.T) {
result: 5,
aggregator: Average,
},
{
name: "glob array query",
jsonResponse: []byte(`{"worker_status":[{"last_status":{"backlog":3}},{"last_status":{"backlog":7}}]}`),
jsonPath: "$.worker_status.[*].last_status.backlog",
result: 5,
aggregator: Average,
},
{
name: "json path not resulting in array or number should lead to error",
jsonResponse: []byte(`{"metric.value":5}`),
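For reference, a standalone sketch of the aggregation the new glob handling performs, using the `ajson` library pinned in go.mod; the JSON body and path come from the test above:

```go
package main

import (
	"fmt"

	"github.com/spyzhov/ajson"
)

func main() {
	body := []byte(`{"worker_status":[{"last_status":{"backlog":3}},{"last_status":{"backlog":7}}]}`)
	// The glob query matches two nodes; the getter averages them.
	nodes, err := ajson.JSONPath(body, "$.worker_status.[*].last_status.backlog")
	if err != nil {
		panic(err)
	}
	sum := 0.0
	for _, node := range nodes {
		v, err := node.GetNumeric()
		if err != nil {
			panic(err)
		}
		sum += v
	}
	fmt.Println(sum / float64(len(nodes))) // 5, matching the test's expected result
}
```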
+112 -53
@@ -3,6 +3,7 @@ package collector
import (
"errors"
"fmt"
"math"
"time"
v1 "github.com/zalando-incubator/kube-metrics-adapter/pkg/apis/zalando.org/v1"
@@ -78,30 +79,38 @@ type Store interface {
// ScalingScheduleCollectorPlugin is a collector plugin for initializing metrics
// collectors for getting ScalingSchedule configured metrics.
type ScalingScheduleCollectorPlugin struct {
store Store
now Now
store Store
now Now
defaultScalingWindow time.Duration
rampSteps int
}
// ClusterScalingScheduleCollectorPlugin is a collector plugin for initializing metrics
// collectors for getting ClusterScalingSchedule configured metrics.
type ClusterScalingScheduleCollectorPlugin struct {
store Store
now Now
store Store
now Now
defaultScalingWindow time.Duration
rampSteps int
}
// NewScalingScheduleCollectorPlugin initializes a new ScalingScheduleCollectorPlugin.
func NewScalingScheduleCollectorPlugin(store Store, now Now) (*ScalingScheduleCollectorPlugin, error) {
func NewScalingScheduleCollectorPlugin(store Store, now Now, defaultScalingWindow time.Duration, rampSteps int) (*ScalingScheduleCollectorPlugin, error) {
return &ScalingScheduleCollectorPlugin{
store: store,
now: now,
store: store,
now: now,
defaultScalingWindow: defaultScalingWindow,
rampSteps: rampSteps,
}, nil
}
// NewClusterScalingScheduleCollectorPlugin initializes a new ClusterScalingScheduleCollectorPlugin.
func NewClusterScalingScheduleCollectorPlugin(store Store, now Now) (*ClusterScalingScheduleCollectorPlugin, error) {
func NewClusterScalingScheduleCollectorPlugin(store Store, now Now, defaultScalingWindow time.Duration, rampSteps int) (*ClusterScalingScheduleCollectorPlugin, error) {
return &ClusterScalingScheduleCollectorPlugin{
store: store,
now: now,
store: store,
now: now,
defaultScalingWindow: defaultScalingWindow,
rampSteps: rampSteps,
}, nil
}
@@ -109,14 +118,14 @@ func NewClusterScalingScheduleCollectorPlugin(store Store, now Now) (*ClusterSca
// specified HPA. It's the only required method to implement the
// collector.CollectorPlugin interface.
func (c *ScalingScheduleCollectorPlugin) NewCollector(hpa *autoscalingv2.HorizontalPodAutoscaler, config *MetricConfig, interval time.Duration) (Collector, error) {
return NewScalingScheduleCollector(c.store, c.now, hpa, config, interval)
return NewScalingScheduleCollector(c.store, c.defaultScalingWindow, c.rampSteps, c.now, hpa, config, interval)
}
// NewCollector initializes a new cluster wide scaling schedule
// collector from the specified HPA. It's the only required method to
// implement the collector.CollectorPlugin interface.
func (c *ClusterScalingScheduleCollectorPlugin) NewCollector(hpa *autoscalingv2.HorizontalPodAutoscaler, config *MetricConfig, interval time.Duration) (Collector, error) {
return NewClusterScalingScheduleCollector(c.store, c.now, hpa, config, interval)
return NewClusterScalingScheduleCollector(c.store, c.defaultScalingWindow, c.rampSteps, c.now, hpa, config, interval)
}
// ScalingScheduleCollector is a metrics collector for time based
@@ -135,41 +144,47 @@ type ClusterScalingScheduleCollector struct {
// struct used by both ClusterScalingScheduleCollector and the
// ScalingScheduleCollector.
type scalingScheduleCollector struct {
store Store
now Now
metric autoscalingv2.MetricIdentifier
objectReference custom_metrics.ObjectReference
hpa *autoscalingv2.HorizontalPodAutoscaler
interval time.Duration
config MetricConfig
store Store
now Now
metric autoscalingv2.MetricIdentifier
objectReference custom_metrics.ObjectReference
hpa *autoscalingv2.HorizontalPodAutoscaler
interval time.Duration
config MetricConfig
defaultScalingWindow time.Duration
rampSteps int
}
// NewScalingScheduleCollector initializes a new ScalingScheduleCollector.
func NewScalingScheduleCollector(store Store, now Now, hpa *autoscalingv2.HorizontalPodAutoscaler, config *MetricConfig, interval time.Duration) (*ScalingScheduleCollector, error) {
func NewScalingScheduleCollector(store Store, defaultScalingWindow time.Duration, rampSteps int, now Now, hpa *autoscalingv2.HorizontalPodAutoscaler, config *MetricConfig, interval time.Duration) (*ScalingScheduleCollector, error) {
return &ScalingScheduleCollector{
scalingScheduleCollector{
store: store,
now: now,
objectReference: config.ObjectReference,
hpa: hpa,
metric: config.Metric,
interval: interval,
config: *config,
store: store,
now: now,
objectReference: config.ObjectReference,
hpa: hpa,
metric: config.Metric,
interval: interval,
config: *config,
defaultScalingWindow: defaultScalingWindow,
rampSteps: rampSteps,
},
}, nil
}
// NewClusterScalingScheduleCollector initializes a new ScalingScheduleCollector.
func NewClusterScalingScheduleCollector(store Store, now Now, hpa *autoscalingv2.HorizontalPodAutoscaler, config *MetricConfig, interval time.Duration) (*ClusterScalingScheduleCollector, error) {
func NewClusterScalingScheduleCollector(store Store, defaultScalingWindow time.Duration, rampSteps int, now Now, hpa *autoscalingv2.HorizontalPodAutoscaler, config *MetricConfig, interval time.Duration) (*ClusterScalingScheduleCollector, error) {
return &ClusterScalingScheduleCollector{
scalingScheduleCollector{
store: store,
now: now,
objectReference: config.ObjectReference,
hpa: hpa,
metric: config.Metric,
interval: interval,
config: *config,
store: store,
now: now,
objectReference: config.ObjectReference,
hpa: hpa,
metric: config.Metric,
interval: interval,
config: *config,
defaultScalingWindow: defaultScalingWindow,
rampSteps: rampSteps,
},
}, nil
}
@@ -188,7 +203,7 @@ func (c *ScalingScheduleCollector) GetMetrics() ([]CollectedMetric, error) {
if !ok {
return nil, ErrNotScalingScheduleFound
}
return calculateMetrics(scalingSchedule.Spec.Schedules, c.now(), c.objectReference, c.metric)
return calculateMetrics(scalingSchedule.Spec, c.defaultScalingWindow, c.rampSteps, c.now(), c.objectReference, c.metric)
}
// GetMetrics is the main implementation for collector.Collector interface
@@ -221,7 +236,7 @@ func (c *ClusterScalingScheduleCollector) GetMetrics() ([]CollectedMetric, error
clusterScalingSchedule = v1.ClusterScalingSchedule(*scalingSchedule)
}
return calculateMetrics(clusterScalingSchedule.Spec.Schedules, c.now(), c.objectReference, c.metric)
return calculateMetrics(clusterScalingSchedule.Spec, c.defaultScalingWindow, c.rampSteps, c.now(), c.objectReference, c.metric)
}
// Interval returns the interval at which the collector should run.
@@ -234,9 +249,17 @@ func (c *ClusterScalingScheduleCollector) Interval() time.Duration {
return c.interval
}
func calculateMetrics(schedules []v1.Schedule, now time.Time, objectReference custom_metrics.ObjectReference, metric autoscalingv2.MetricIdentifier) ([]CollectedMetric, error) {
value := 0
for _, schedule := range schedules {
func calculateMetrics(spec v1.ScalingScheduleSpec, defaultScalingWindow time.Duration, rampSteps int, now time.Time, objectReference custom_metrics.ObjectReference, metric autoscalingv2.MetricIdentifier) ([]CollectedMetric, error) {
scalingWindowDuration := defaultScalingWindow
if spec.ScalingWindowDurationMinutes != nil {
scalingWindowDuration = time.Duration(*spec.ScalingWindowDurationMinutes) * time.Minute
}
if scalingWindowDuration < 0 {
return nil, fmt.Errorf("scaling window duration cannot be negative")
}
value := int64(0)
for _, schedule := range spec.Schedules {
switch schedule.Type {
case v1.RepeatingSchedule:
location, err := time.LoadLocation(schedule.Period.Timezone)
@@ -269,9 +292,7 @@ func calculateMetrics(schedules []v1.Schedule, now time.Time, objectReference cu
parsedStartTime.Nanosecond(),
location,
)
if within(now, scheduledTime, schedule.DurationMinutes) && schedule.Value > value {
value = schedule.Value
}
value = maxInt64(value, valueForEntry(now, scheduledTime, schedule.Duration(), scalingWindowDuration, rampSteps, schedule.Value))
break
}
}
@@ -280,9 +301,8 @@ func calculateMetrics(schedules []v1.Schedule, now time.Time, objectReference cu
if err != nil {
return nil, ErrInvalidScheduleDate
}
if within(now, scheduledTime, schedule.DurationMinutes) && schedule.Value > value {
value = schedule.Value
}
value = maxInt64(value, valueForEntry(now, scheduledTime, schedule.Duration(), scalingWindowDuration, rampSteps, schedule.Value))
}
}
@@ -293,17 +313,56 @@ func calculateMetrics(schedules []v1.Schedule, now time.Time, objectReference cu
Custom: custom_metrics.MetricValue{
DescribedObject: objectReference,
Timestamp: metav1.Time{Time: now},
Value: *resource.NewMilliQuantity(int64(value*1000), resource.DecimalSI),
Value: *resource.NewMilliQuantity(value*1000, resource.DecimalSI),
Metric: custom_metrics.MetricIdentifier(metric),
},
},
}, nil
}
// within receives two time.Time values and a number of minutes. It returns
// true if the first given time, instant, is within the period of the second
// given time (start) plus the given number of minutes.
func within(instant, start time.Time, minutes int) bool {
return (instant.After(start) || instant.Equal(start)) &&
instant.Before(start.Add(time.Duration(minutes)*time.Minute))
func valueForEntry(timestamp time.Time, startTime time.Time, entryDuration time.Duration, scalingWindowDuration time.Duration, rampSteps int, value int64) int64 {
scaleUpStart := startTime.Add(-scalingWindowDuration)
endTime := startTime.Add(entryDuration)
scaleUpEnd := endTime.Add(scalingWindowDuration)
if between(timestamp, startTime, endTime) {
return value
}
if between(timestamp, scaleUpStart, startTime) {
return scaledValue(timestamp, scaleUpStart, scalingWindowDuration, rampSteps, value)
}
if between(timestamp, endTime, scaleUpEnd) {
return scaledValue(scaleUpEnd, timestamp, scalingWindowDuration, rampSteps, value)
}
return 0
}
// The HPA has a rule to not scale up or down if the change in the
// metric is less than 10% (by default) of the current value. We
// use buckets of time, returning the floor of each bucket as the
// metric. Any config of 10 or fewer buckets guarantees changes
// bigger than 10%.
func scaledValue(timestamp time.Time, startTime time.Time, scalingWindowDuration time.Duration, rampSteps int, value int64) int64 {
if scalingWindowDuration == 0 {
return 0
}
steps := float64(rampSteps)
requiredPercentage := math.Abs(float64(timestamp.Sub(startTime))) / float64(scalingWindowDuration)
return int64(math.Floor(requiredPercentage*steps) * (float64(value) / steps))
}
func between(timestamp, start, end time.Time) bool {
if timestamp.Before(start) {
return false
}
return timestamp.Before(end)
}
func maxInt64(i1, i2 int64) int64 {
if i1 > i2 {
return i1
}
return i2
}
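To tie this to the tests below, a sketch reproducing the `scaledValue` arithmetic for two of the test cases (the one-minute default window and the ten-minute custom window):

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// Same formula as scaledValue above, with the sign handling folded in.
func scaled(elapsed, window time.Duration, rampSteps int, value int64) int64 {
	if window == 0 {
		return 0
	}
	steps := float64(rampSteps)
	pct := math.Abs(float64(elapsed)) / float64(window)
	return int64(math.Floor(pct*steps) * (float64(value) / steps))
}

func main() {
	// 20s before the start of a 1-minute window: 40s elapsed -> bucket 6 -> 60.
	fmt.Println(scaled(40*time.Second, time.Minute, 10, 100))
	// 30s before the start of a 10-minute window: 9.5/10 -> bucket 9 -> 90.
	fmt.Println(scaled(9*time.Minute+30*time.Second, 10*time.Minute, 10, 100))
}
```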
+103 -24
@@ -12,7 +12,11 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const hHMMFormat = "15:04"
const (
hHMMFormat = "15:04"
defaultScalingWindowDuration = 1 * time.Minute
defaultRampSteps = 10
)
type schedule struct {
kind string
@@ -21,7 +25,7 @@ type schedule struct {
days []v1.ScheduleDay
timezone string
duration int
value int
value int64
}
func TestScalingScheduleCollector(t *testing.T) {
@@ -37,11 +41,15 @@ func TestScalingScheduleCollector(t *testing.T) {
return uTCNow
}
tenMinutes := int64(10)
for _, tc := range []struct {
msg string
schedules []schedule
expectedValue int
err error
msg string
schedules []schedule
scalingWindowDurationMinutes *int64
expectedValue int64
err error
rampSteps int
}{
{
msg: "Return the right value for one time config",
@@ -80,7 +88,70 @@ func TestScalingScheduleCollector(t *testing.T) {
expectedValue: 100,
},
{
msg: "Return the default value (0) for one time config - 30 seconds after",
msg: "Return the scaled value (60) for one time config - 20 seconds before starting",
schedules: []schedule{
{
date: nowTime.Add(time.Second * 20).Format(time.RFC3339),
kind: "OneTime",
duration: 45,
value: 100,
},
},
expectedValue: 60,
},
{
msg: "Return the scaled value (60) for one time config - 20 seconds after",
schedules: []schedule{
{
date: nowTime.Add(-time.Minute * 45).Add(-time.Second * 20).Format(time.RFC3339),
kind: "OneTime",
duration: 45,
value: 100,
},
},
expectedValue: 60,
},
{
msg: "10 steps (default) return 90% of the metric, even 1 second before",
schedules: []schedule{
{
date: nowTime.Add(time.Second * 1).Format(time.RFC3339),
kind: "OneTime",
duration: 45,
value: 100,
},
},
expectedValue: 90,
},
{
msg: "5 steps return 80% of the metric, even 1 second before",
schedules: []schedule{
{
date: nowTime.Add(time.Second * 1).Format(time.RFC3339),
kind: "OneTime",
duration: 45,
value: 100,
},
},
expectedValue: 80,
rampSteps: 5,
},
{
msg: "Return the scaled value (90) for one time config with a custom scaling window - 30 seconds before starting",
scalingWindowDurationMinutes: &tenMinutes,
schedules: []schedule{
{
date: nowTime.Add(time.Second * 30).Format(time.RFC3339),
kind: "OneTime",
duration: 45,
value: 100,
},
},
expectedValue: 90,
},
{
msg: "Return the scaled value (90) for one time config with a custom scaling window - 30 seconds after",
scalingWindowDurationMinutes: &tenMinutes,
schedules: []schedule{
{
date: nowTime.Add(-time.Minute * 45).Add(-time.Second * 30).Format(time.RFC3339),
@@ -89,7 +160,7 @@ func TestScalingScheduleCollector(t *testing.T) {
value: 100,
},
},
expectedValue: 0,
expectedValue: 90,
},
{
msg: "Return the default value (0) for one time config not started yet (20 minutes before)",
@@ -427,17 +498,22 @@ func TestScalingScheduleCollector(t *testing.T) {
scalingScheduleName := "my_scaling_schedule"
namespace := "default"
rampSteps := tc.rampSteps
if rampSteps == 0 {
rampSteps = defaultRampSteps
}
schedules := getSchedules(tc.schedules)
store := newMockStore(scalingScheduleName, namespace, schedules)
plugin, err := NewScalingScheduleCollectorPlugin(store, now)
store := newMockStore(scalingScheduleName, namespace, tc.scalingWindowDurationMinutes, schedules)
plugin, err := NewScalingScheduleCollectorPlugin(store, now, defaultScalingWindowDuration, rampSteps)
require.NoError(t, err)
clusterStore := newClusterMockStore(scalingScheduleName, schedules)
clusterPlugin, err := NewClusterScalingScheduleCollectorPlugin(clusterStore, now)
clusterStore := newClusterMockStore(scalingScheduleName, tc.scalingWindowDurationMinutes, schedules)
clusterPlugin, err := NewClusterScalingScheduleCollectorPlugin(clusterStore, now, defaultScalingWindowDuration, rampSteps)
require.NoError(t, err)
clusterStoreFirstRun := newClusterMockStoreFirstRun(scalingScheduleName, schedules)
clusterPluginFirstRun, err := NewClusterScalingScheduleCollectorPlugin(clusterStoreFirstRun, now)
clusterStoreFirstRun := newClusterMockStoreFirstRun(scalingScheduleName, tc.scalingWindowDurationMinutes, schedules)
clusterPluginFirstRun, err := NewClusterScalingScheduleCollectorPlugin(clusterStoreFirstRun, now, defaultScalingWindowDuration, rampSteps)
require.NoError(t, err)
hpa := makeScalingScheduleHPA(namespace, scalingScheduleName)
@@ -505,14 +581,14 @@ func TestScalingScheduleObjectNotPresentReturnsError(t *testing.T) {
make(map[string]interface{}),
getByKeyFn,
}
plugin, err := NewScalingScheduleCollectorPlugin(store, time.Now)
plugin, err := NewScalingScheduleCollectorPlugin(store, time.Now, defaultScalingWindowDuration, defaultRampSteps)
require.NoError(t, err)
clusterStore := mockStore{
make(map[string]interface{}),
getByKeyFn,
}
clusterPlugin, err := NewClusterScalingScheduleCollectorPlugin(clusterStore, time.Now)
clusterPlugin, err := NewClusterScalingScheduleCollectorPlugin(clusterStore, time.Now, defaultScalingWindowDuration, defaultRampSteps)
require.NoError(t, err)
hpa := makeScalingScheduleHPA("namespace", "scalingScheduleName")
@@ -567,10 +643,10 @@ func TestReturnsErrorWhenStoreDoes(t *testing.T) {
},
}
plugin, err := NewScalingScheduleCollectorPlugin(store, time.Now)
plugin, err := NewScalingScheduleCollectorPlugin(store, time.Now, defaultScalingWindowDuration, defaultRampSteps)
require.NoError(t, err)
clusterPlugin, err := NewClusterScalingScheduleCollectorPlugin(store, time.Now)
clusterPlugin, err := NewClusterScalingScheduleCollectorPlugin(store, time.Now, defaultScalingWindowDuration, defaultRampSteps)
require.NoError(t, err)
hpa := makeScalingScheduleHPA("namespace", "scalingScheduleName")
@@ -615,7 +691,7 @@ func getByKeyFn(d map[string]interface{}, key string) (item interface{}, exists
return item, exists, nil
}
func newMockStore(name, namespace string, schedules []v1.Schedule) mockStore {
func newMockStore(name, namespace string, scalingWindowDurationMinutes *int64, schedules []v1.Schedule) mockStore {
return mockStore{
map[string]interface{}{
fmt.Sprintf("%s/%s", namespace, name): &v1.ScalingSchedule{
@@ -623,7 +699,8 @@ func newMockStore(name, namespace string, schedules []v1.Schedule) mockStore {
Name: name,
},
Spec: v1.ScalingScheduleSpec{
Schedules: schedules,
ScalingWindowDurationMinutes: scalingWindowDurationMinutes,
Schedules: schedules,
},
},
},
@@ -631,7 +708,7 @@ func newMockStore(name, namespace string, schedules []v1.Schedule) mockStore {
}
}
func newClusterMockStore(name string, schedules []v1.Schedule) mockStore {
func newClusterMockStore(name string, scalingWindowDurationMinutes *int64, schedules []v1.Schedule) mockStore {
return mockStore{
map[string]interface{}{
name: &v1.ClusterScalingSchedule{
@@ -639,7 +716,8 @@ func newClusterMockStore(name string, schedules []v1.Schedule) mockStore {
Name: name,
},
Spec: v1.ScalingScheduleSpec{
Schedules: schedules,
ScalingWindowDurationMinutes: scalingWindowDurationMinutes,
Schedules: schedules,
},
},
},
@@ -650,7 +728,7 @@ func newClusterMockStore(name string, schedules []v1.Schedule) mockStore {
// The cache.Store returns the v1.ClusterScalingSchedule items as
// v1.ScalingSchedule when it first lists them. When it's updated, it
// asserts them correctly to the v1.ClusterScalingSchedule type.
func newClusterMockStoreFirstRun(name string, schedules []v1.Schedule) mockStore {
func newClusterMockStoreFirstRun(name string, scalingWindowDurationMinutes *int64, schedules []v1.Schedule) mockStore {
return mockStore{
map[string]interface{}{
name: &v1.ScalingSchedule{
@@ -658,7 +736,8 @@ func newClusterMockStoreFirstRun(name string, schedules []v1.Schedule) mockStore
Name: name,
},
Spec: v1.ScalingScheduleSpec{
Schedules: schedules,
ScalingWindowDurationMinutes: scalingWindowDurationMinutes,
Schedules: schedules,
},
},
},
+3 -3
@@ -269,7 +269,7 @@ func (p *HPAProvider) collectMetrics(ctx context.Context) {
// GetMetricByName gets a single metric by name.
func (p *HPAProvider) GetMetricByName(name types.NamespacedName, info provider.CustomMetricInfo, metricSelector labels.Selector) (*custom_metrics.MetricValue, error) {
metric := p.metricStore.GetMetricsByName(name, info)
metric := p.metricStore.GetMetricsByName(name, info, metricSelector)
if metric == nil {
return nil, provider.NewMetricNotFoundForError(info.GroupResource, info.Metric, name.Name)
}
@@ -279,7 +279,7 @@ func (p *HPAProvider) GetMetricByName(name types.NamespacedName, info provider.C
// GetMetricBySelector returns metrics for namespaced resources by
// label selector.
func (p *HPAProvider) GetMetricBySelector(namespace string, selector labels.Selector, info provider.CustomMetricInfo, metricSelector labels.Selector) (*custom_metrics.MetricValueList, error) {
return p.metricStore.GetMetricsBySelector(namespace, selector, info), nil
return p.metricStore.GetMetricsBySelector(objectNamespace(namespace), selector, info), nil
}
// ListAllMetrics lists all available metrics from the provider.
@@ -288,7 +288,7 @@ func (p *HPAProvider) ListAllMetrics() []provider.CustomMetricInfo {
}
func (p *HPAProvider) GetExternalMetric(namespace string, metricSelector labels.Selector, info provider.ExternalMetricInfo) (*external_metrics.ExternalMetricValueList, error) {
return p.metricStore.GetExternalMetric(namespace, metricSelector, info)
return p.metricStore.GetExternalMetric(objectNamespace(namespace), metricSelector, info)
}
func (p *HPAProvider) ListAllExternalMetrics() []provider.ExternalMetricInfo {
File diff suppressed because it is too large
File diff suppressed because it is too large
+8 -2
@@ -128,6 +128,8 @@ func NewCommandStartAdapterServer(stopCh <-chan struct{}) *cobra.Command {
flags.DurationVar(&o.GCInterval, "garbage-collector-interval", 10*time.Minute, "Interval to clean up metrics that are stored in in-memory cache.")
flags.BoolVar(&o.ScalingScheduleMetrics, "scaling-schedule", o.ScalingScheduleMetrics, ""+
"whether to enable time-based ScalingSchedule metrics")
flags.DurationVar(&o.DefaultScheduledScalingWindow, "scaling-schedule-default-scaling-window", 10*time.Minute, "Default ramp-up and ramp-down window duration for ScalingSchedules")
flags.IntVar(&o.RampSteps, "scaling-schedule-ramp-steps", 10, "Number of steps used to ramp up and ramp down ScalingSchedules. It's used to guarantee that scaling won't stall before reaching the maximum due to the 10% minimum change rule.")
return cmd
}
@@ -293,7 +295,7 @@ func (o AdapterServerOptions) RunCustomMetricsAdapterServer(stopCh <-chan struct
)
go reflector.Run(ctx.Done())
clusterPlugin, err := collector.NewClusterScalingScheduleCollectorPlugin(clusterScalingSchedulesStore, time.Now)
clusterPlugin, err := collector.NewClusterScalingScheduleCollectorPlugin(clusterScalingSchedulesStore, time.Now, o.DefaultScheduledScalingWindow, o.RampSteps)
if err != nil {
return fmt.Errorf("unable to create ClusterScalingScheduleCollector plugin: %v", err)
}
@@ -302,7 +304,7 @@ func (o AdapterServerOptions) RunCustomMetricsAdapterServer(stopCh <-chan struct
return fmt.Errorf("failed to register ClusterScalingSchedule object collector plugin: %v", err)
}
plugin, err := collector.NewScalingScheduleCollectorPlugin(scalingSchedulesStore, time.Now)
plugin, err := collector.NewScalingScheduleCollectorPlugin(scalingSchedulesStore, time.Now, o.DefaultScheduledScalingWindow, o.RampSteps)
if err != nil {
return fmt.Errorf("unable to create ScalingScheduleCollector plugin: %v", err)
}
@@ -428,4 +430,8 @@ type AdapterServerOptions struct {
GCInterval time.Duration
// Time-based scaling based on the CRDs ScalingSchedule and ClusterScalingSchedule.
ScalingScheduleMetrics bool
// Default ramp-up/ramp-down window duration for scheduled metrics
DefaultScheduledScalingWindow time.Duration
// Number of steps utilized during the rampup and rampdown for scheduled metrics
RampSteps int
}