# Kubernetes Metrics
The Custom Pod Autoscaler supports automatically gathering useful metrics from Kubernetes for your autoscaler; these are then fed into the metric gathering stage for your autoscaler to process and filter.
The pipeline looks like this:
- Gather K8s resource information (Pod/Deployment/StatefulSet etc.).
- If the metrics server is available, query it using the resource information retrieved.
- If the `requireKubernetesMetrics` flag is set to `false` and the metrics query fails, continue as normal. If it is set to `true`, fail at this point with a useful error.
- Combine the K8s resource information with any gathered metrics information, serialising into JSON.
- Call the metric gathering stage, providing the JSON generated in the previous step.
- Get the results of the metric gathering stage and continue as normal.
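The failure-handling in the steps above can be sketched in Python. This is only an illustration of the behaviour; every function and error name here is a hypothetical stand-in, not part of the Custom Pod Autoscaler API:

```python
import json


class MetricsUnavailableError(Exception):
    """Hypothetical error raised when the metrics server cannot be queried."""


def query_metrics_server(resource, spec):
    # Stand-in for a real metrics server query; here it always fails,
    # to demonstrate the requireKubernetesMetrics behaviour.
    raise MetricsUnavailableError("metrics server unavailable")


def gather(resource, specs, require_kubernetes_metrics):
    gathered = []
    for spec in specs:
        try:
            gathered.append(query_metrics_server(resource, spec))
        except MetricsUnavailableError:
            if require_kubernetes_metrics:
                raise  # requireKubernetesMetrics: true -> fail with a useful error
            # requireKubernetesMetrics: false -> continue as normal
    # Combine the resource information with any gathered metrics, serialised into JSON
    return json.dumps({"resource": resource, "kubernetesMetrics": gathered})


# With require_kubernetes_metrics=False the failed query is tolerated
payload = gather({"kind": "Deployment"}, [{"type": "Resource"}], False)
print(payload)
```

With `require_kubernetes_metrics=True` the same call would instead propagate the error, mirroring the fail-fast branch of the pipeline.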
The Custom Pod Autoscaler allows the autoscaler to define in its configuration any Kubernetes metrics it needs; for example, CPU details can be requested through the autoscaler's configuration.
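A sketch of such a configuration, assuming the `kubernetesMetricSpecs` option described in the configuration reference (the spec format mirrors the Horizontal Pod Autoscaler metric specs, matching the `spec` object in the JSON below):

```yaml
kubernetesMetricSpecs:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
```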
Once this data has been fetched by the Custom Pod Autoscaler, it is exposed to the metric gathering stage in serialised JSON form, for example:
```json
{
  "resource": {...},
  "runType": "scaler",
  "kubernetesMetrics": [
    {
      "current_replicas": 1,
      "spec": {
        "type": "Resource",
        "resource": {
          "name": "cpu",
          "target": {
            "type": "Utilization"
          }
        }
      },
      "resource": {
        "pod_metrics_info": {
          "flask-metric-697794dd85-bsttm": {
            "Timestamp": "2021-04-05T18:10:10Z",
            "Window": 30000000000,
            "Value": 4
          }
        },
        "requests": {
          "flask-metric-697794dd85-bsttm": 200
        },
        "ready_pod_count": 1,
        "ignored_pods": {},
        "missing_pods": {},
        "total_pods": 1,
        "timestamp": "2021-04-05T18:10:10Z"
      }
    }
  ]
}
```
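A metric gathering stage consumes this JSON and distils it into whatever value the autoscaler's evaluation stage needs. A minimal sketch in Python: it computes an overall CPU utilisation percentage from `pod_metrics_info` and `requests`. In a real metric stage the JSON would arrive from the Custom Pod Autoscaler rather than being inlined; the abridged sample here is for illustration only:

```python
import json

# Abridged sample mirroring the JSON shown above
raw = json.dumps({
    "resource": {},
    "runType": "scaler",
    "kubernetesMetrics": [{
        "current_replicas": 1,
        "spec": {"type": "Resource", "resource": {"name": "cpu"}},
        "resource": {
            "pod_metrics_info": {"flask-metric-697794dd85-bsttm": {"Value": 4}},
            "requests": {"flask-metric-697794dd85-bsttm": 200},
            "ready_pod_count": 1,
        },
    }],
})


def cpu_utilisation(payload):
    """Sum CPU usage across pods and divide by the summed CPU requests."""
    metric = json.loads(payload)["kubernetesMetrics"][0]["resource"]
    usage = sum(pod["Value"] for pod in metric["pod_metrics_info"].values())
    requested = sum(metric["requests"].values())
    return round(100 * usage / requested)


print(cpu_utilisation(raw))  # 4 used / 200 requested -> 2
```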
Visit the configuration reference for full details and the `k8s-metrics-cpu` example for sample code.