introduce
Eraser is a tool that periodically removes unused images from Kubernetes nodes. When many images accumulate on a node, disk usage grows and can lead to secondary issues, so Eraser minimizes this by cleaning up images that are no longer in use.
how it works
Eraser runs on a schedule, but it can also delete a specific image immediately if that image is added to a list.
It can also remove images based on vulnerability scan results.
First, let's look at the two modes.
- manual
  - Description: create an imagelist containing the images to delete, and deletion runs node by node
  - CRD: imagelist
- automated
  - Description: at the configured interval, images not in use by any running container are removed automatically; images found vulnerable by the scan results are removed as well
  - CRD: imagejob
As mentioned above, the two CRDs (imagelist / imagejob) each drive image deletion in their own way.
The linked architecture diagrams make this easier to follow.
Summarizing the deletion process based on the architecture documentation:
- An imagejob runs the three containers above (eraser / collector / trivy-scanner) as one pod on each node.
  The collector first builds and reports the list of images held on the node, and the vulnerability analysis from trivy-scanner is reported along with it. The eraser then combines these results, talks to the node's kubelet, and removes images that are not used by any running container.
- An imagelist runs only the eraser container, which performs deletion based on the per-node image list previously gathered by the imagejob's collector.
Note
The important thing to keep in mind is that images backing actually running pods are never candidates for deletion.
imagejob (automated)
First, an imagejob is created automatically by the controller (eraser-controller-manager):
jacob@laptop:~ $ kubectl get imagejob
NAME AGE
imagejob-qbk56 4m20s
imagejob-shbts 101s
Pods like the following are then created on each node and the job proceeds.
jacob@laptop:~ $ kubectl get po -n eraser-system -l name=collector
NAME READY STATUS RESTARTS AGE
collector-nodepool1-36305176 0/2 Completed 0 3m21s
collector-nodepool2-34435815 0/2 Completed 0 3m21s
For reference, imagejobs are created repeatedly based on the settings defined in the controller_manager_config.yaml entry of the configmap below.
data:
  controller_manager_config.yaml: |
    manager:
      scheduling:
        repeatInterval: 3m
        beginImmediately: true
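If you want to change this interval, one approach (the same edit-and-restart flow used later in this post for disabling the scanner) is to edit the configmap and restart the controller so the new value is picked up; the interval value itself is up to you.
jacob@laptop:~ $ kubectl edit cm eraser-manager-config -n eraser-system   # set manager.scheduling.repeatInterval, e.g. 12h
jacob@laptop:~ $ kubectl rollout restart deploy -n eraser-system -l control-plane=controller-manager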
The pods whose names start with collector-xxxx consist of the following three containers (with the default configuration):
- eraser
- collector
- trivy-scanner
The collector runs first and gathers the images currently present on the node, as shown below.
collector {"level":"info","ts":1682418881.1536896,"logger":"collector","msg":"no images to exclude"}
collector {"level":"info","ts":1682418881.1554096,"logger":"collector","msg":"images collected","finalImages:":[{"image_id":"sha256:7b178dc69474dd40a6471673c620079746e086c341b373fa723c09e043a5b911",
"names":["mcr.microsoft.com/oss/kubernetes/pause:3.6"],"digests":["sha256:b4b669f27933146227c9180398f99d8b3100637e4a0a1ccf804f8b12f4b9b8df"]},{"image_id":"sha256:3acf7fe3d7fa03d6a3f69fe594a46ff37a5c
7a3dec1df9b4a5d131c69673b5c8","names":["ghcr.io/azure/eraser:v1.1.0-beta.0"],"digests":["sha256:54362f6fc40a7a2db2ec6268273b802facf8bc04c755b86211467e0d6d104efe"]}]}
The image list collected by the collector above is then compared, via communication with the kubelet, against the list of running containers, and deletion proceeds.
(If scanner.enabled: true is set, trivy-scanner runs as well and the eraser deletes the vulnerable images.)
eraser {"level":"info","ts":1682418881.2959065,"logger":"eraser","msg":"successfully created imagelist from scanned non-compliant images"}
eraser {"level":"info","ts":1682418881.2961388,"logger":"eraser","msg":"no images to exclude"}
eraser {"level":"info","ts":1682418881.3269212,"logger":"eraser","msg":"removed image","given":"sha256:7b178dc69474dd40a6471673c620079746e086c341b373fa723c09e043a5b911","imageID":"sha256:7b178dc6947
4dd40a6471673c620079746e086c341b373fa723c09e043a5b911","name":{"image_id":"sha256:7b178dc69474dd40a6471673c620079746e086c341b373fa723c09e043a5b911","names":["mcr.microsoft.com/oss/kubernetes/pause:3
.6"],"digests":["sha256:b4b669f27933146227c9180398f99d8b3100637e4a0a1ccf804f8b12f4b9b8df"]}}
eraser {"level":"info","ts":1682418881.327087,"logger":"eraser","msg":"image is running","given":"sha256:3acf7fe3d7fa03d6a3f69fe594a46ff37a5c7a3dec1df9b4a5d131c69673b5c8","imageID":"sha256:3acf7fe3d
7fa03d6a3f69fe594a46ff37a5c7a3dec1df9b4a5d131c69673b5c8","name":{"image_id":"sha256:3acf7fe3d7fa03d6a3f69fe594a46ff37a5c7a3dec1df9b4a5d131c69673b5c8","names":["ghcr.io/azure/eraser:v1.1.0-beta.0"],"
digests":["sha256:54362f6fc40a7a2db2ec6268273b802facf8bc04c755b86211467e0d6d104efe"]}}
imagelist (manual)
As explained earlier, when an image needs to be removed immediately, it can be deleted through the imagelist CRD.
The example below is a YAML file that removes busybox.
jacob@laptop:~ $ cat manualremoval.yaml
apiVersion: eraser.sh/v1
kind: ImageList
metadata:
  name: imagelist
spec:
  images:
    - docker.io/library/busybox
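Apply the file in the usual way (plain kubectl apply against the file above):
jacob@laptop:~ $ kubectl apply -f manualremoval.yaml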
Once applied, an imagelist resource is created,
jacob@laptop:~ $ kubectl get imagelist
NAME AGE
imagelist 45s
and, as with the imagejob, a pod runs on each node, but this time only the eraser container runs.
jacob@laptop:~ $ kubectl get po -n eraser-system -l name=eraser
NAME READY STATUS RESTARTS AGE
eraser-nodepool1-36305176 0/1 Completed 0 29s
eraser-nodepool2-34435815 0/1 Completed 0 29s
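For reference, as I understand the ImageList spec, a wildcard entry can also be used to target every non-running image on the nodes instead of naming images one by one; a minimal sketch:
apiVersion: eraser.sh/v1
kind: ImageList
metadata:
  name: imagelist
spec:
  images:
    - "*"   # my understanding: '*' requests removal of all non-running images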
installation
As usual, installation is provided both as plain manifests and as a Helm chart.
- manifest: https://azure.github.io/eraser/docs/installation#manifest
- helm: https://github.com/Azure/eraser/blob/main/charts/eraser/README.md
Let's install it with the Helm chart as shown below.
jacob@laptop:~ $ helm repo add eraser https://azure.github.io/eraser/charts
"eraser" has been added to your repositories
jacob@laptop:~ $ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "eraser" chart repository
jacob@laptop:~ $ helm install -n eraser-system eraser eraser/eraser --create-namespace
NAME: eraser
LAST DEPLOYED: Tue Apr 11 21:45:49 2023
NAMESPACE: eraser-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
Once this simple install is done, the following deployment is created, and along with it the per-node pods start running.
jacob@laptop:~ $ kubectl get deploy,cm -n eraser-system
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/eraser-controller-manager 1/1 1 1 34h
NAME DATA AGE
configmap/eraser-manager-config 1 34h
configmap/kube-root-ca.crt 1 34h
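To double-check what the chart registered, you can list the API resources in the eraser.sh group; you should see the imagejob and imagelist resources used throughout this post.
jacob@laptop:~ $ kubectl api-resources --api-group=eraser.sh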
config
The following configmap is provided.
Here you can configure everything from the collector to the eraser and trivy-scanner.
jacob@laptop:~ $ kubectl get cm eraser-manager-config -n eraser-system -o jsonpath='{.data.controller_manager_config\.yaml}'
apiVersion: eraser.sh/v1alpha1
kind: EraserConfig
manager:
  runtime: containerd
  otlpEndpoint: ""
  logLevel: info
  scheduling:
    repeatInterval: 24h ### an imagejob is created at this interval to run the collector/eraser
    beginImmediately: true
  profile:
    enabled: false
    port: 6060
  imageJob:
    successRatio: 1.0
    cleanup:
      delayOnSuccess: 0s ### the pod running the collector/eraser/trivy-scanner containers is removed as soon as it completes (0s)
      delayOnFailure: 24h
  pullSecrets: [] # image pull secrets for collector/scanner/eraser
  priorityClassName: "" # priority class name for collector/scanner/eraser
  nodeFilter:
    type: exclude # must be either exclude|include
    selectors:
      - eraser.sh/cleanup.filter
      - kubernetes.io/os=windows
components:
  collector:
    enabled: true
    image:
      repo: ghcr.io/azure/collector
      tag: v1.1.0-beta.0
    request:
      mem: 25Mi
      cpu: 2m
    limit:
      mem: 500Mi
      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run
      cpu: 0
  scanner:
    enabled: true ### set this to false if you do not want images deleted based on vulnerability scanning
    image:
      repo: ghcr.io/azure/eraser-trivy-scanner # supply custom image for custom scanner
      tag: v1.1.0-beta.0
    request:
      mem: 500Mi
      cpu: 1000m
    limit:
      mem: 2Gi
      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run
      cpu: 0
    # The config needs to be passed through to the scanner as yaml, as a
    # single string. Because we allow custom scanner images, the scanner is
    # responsible for defining a schema, parsing, and validating.
    config: |
      # this is the schema for the provided 'trivy-scanner'. custom scanners
      # will define their own configuration.
      cacheDir: /var/lib/trivy
      dbRepo: ghcr.io/aquasecurity/trivy-db
      deleteFailedImages: true
      vulnerabilities:
        ignoreUnfixed: true
        types:
          - os
          - library
      securityChecks:
        - vuln
      severities:
        - CRITICAL
      timeout:
        total: 23h
        perImage: 1h
  eraser:
    image:
      repo: ghcr.io/azure/eraser
      tag: v1.1.0-beta.0
    request:
      mem: 25Mi
      # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run
      cpu: 0
    limit:
      mem: 30Mi
      cpu: 1000m
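One practical use of the nodeFilter block above: with type: exclude and the eraser.sh/cleanup.filter selector, labeling a node with that key should exclude it from the cleanup jobs (my reading of the selector semantics; the node name below is taken from the pod names earlier and is only illustrative):
jacob@laptop:~ $ kubectl label node nodepool1-36305176 eraser.sh/cleanup.filter=true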
how to use
imagejob
First, let's try the periodic image cleanup.
Deploy the daemonset provided on the official page.
jacob@laptop:~ $ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: alpine
spec:
  selector:
    matchLabels:
      app: alpine
  template:
    metadata:
      labels:
        app: alpine
    spec:
      containers:
      - name: alpine
        image: docker.io/library/alpine:3.7.3
EOF
jacob@laptop:~ $ kubectl get po -l app=alpine
NAME READY STATUS RESTARTS AGE
alpine-96x2c 0/1 Completed 3 (28s ago) 50s
alpine-hj24p 0/1 Completed 3 (27s ago) 50s
jacob@laptop:~ $ kubectl get ds alpine -o jsonpath='{.spec.template.spec.containers[].image}'
docker.io/library/alpine:3.7.3
Now check the images on the node.
root@nodepool1-36305176:/# crictl images | grep alpine
docker.io/library/alpine 3.7.3 6d1ef012b5674 2.11MB
docker.io/library/alpine latest 9ed4aefc74f67 3.38MB
Delete the daemonset:
jacob@laptop:~ $ k delete ds alpine
daemonset.apps "alpine" deleted
Even afterwards, the alpine image can still be seen remaining on the node.
root@nodepool1-36305176:/# crictl images | grep alpine
docker.io/library/alpine 3.7.3 6d1ef012b5674 2.11MB
docker.io/library/alpine
Now restart the eraser-controller-manager deployment (since beginImmediately is true, this kicks off a new imagejob right away).
jacob@laptop:~ $ kubectl rollout restart deploy -n eraser-system -l control-plane=controller-manager
deployment.apps/eraser-controller-manager restarted
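While the job runs, you can watch the per-node pods appear and complete:
jacob@laptop:~ $ kubectl get po -n eraser-system -l name=collector -w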
The collector reports the list of images currently held on the node, as shown below,
jacob@laptop:~ $ kubectl logs collector-nodepool1-36305176-dw672 -n eraser-system -c collector
{"level":"info","ts":1682311283.045005,"logger":"collector","msg":"images collected","finalImages:":[{"image_id":"sha256:67f622f206c864b3e5404c451ffc3febae94ed781fbd0d3c77369963e42244f9","names":["mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-proportional-autoscaler:1.8.5"]},{"image_id":"sha256:ae6923f82e470496af83e3b9cdc1163a1ba8dfc08afac020f01297709e172e94","names":["mcr.microsoft.com/containernetworking/cni-dropgz:v0.0.4"]},{"image_id":"sha256:0e84e3c2c157f6124b379d14f2394aa2062b75ed945fa804c60a605241cba00b","names":["mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.26.3"]},{"image_id":"sha256:3acf7fe3d7fa03d6a3f69fe594a46ff37a5c7a3dec1df9b4a5d131c69673b5c8","names":["ghcr.io/azure/eraser:v1.1.0-beta.0"],"digests":["sha256:54362f6fc40a7a2db2ec6268273b802facf8bc04c755b86211467e0d6d104efe"]},{"image_id":"sha256:d72089f5f0f48850f21c22a49d67718851a5f9259af786e62e912ce658c65078","names":["mcr.microsoft.com/oss/calico/pod2daemon-flexvol:v3.8.9.1"]},{"image_id":"sha256:6d1ef012b5674ad8a127ecfa9b5e6f5178d171b90ee462846974177fd9bdd39f","names":["docker.io/library/alpine:3.7.3"],"digests":["sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10"]},{"image_id":"sha256:9ee784233e569a0b501273c7501a7782020a64082f6c353523cd1029c609b832","names":["mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.25.11"]},{"image_id":"sha256:b7737aaa9e9473a10017410834c26b638f4c381f063e53a252748691d97adbea","names":["mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.27.1"]},{"image_id":"sha256:ebb4e94df49e851eff7318143acd81f2462ed1d1524f134d51643563e03019e3","names":["ghcr.io/azure/eraser-manager:v1.1.0-beta.0"],"digests":["sha256:55c1e8a0e94c6eb292dfd3b0d8f79f38dcedef072a23bd20faebe1aff90018b2"]},{"image_id":"sha256:83d1f54dd91455e8512964339e15267e78ae993891a67d31f07b2b44afd55f6b","names":["mcr.microsoft.com/oss/calico/pod2daemon-flexvol:v3.24.0"]},{"image_id":"sha256:7c2350135f572345e6ccbb44ce9b18621984e1278d8cd088624006c63c9fc5f4","names":["mcr.microsoft.com/containernetworking/cni-dropgz:v0.0.2"]},{"image_id":"sha256:b1497d31e7995291a39304e9800118293b6d9e630157d705fd9449e51f28c898","names":["ghcr.io/azure/collector:v1.1.0-beta.0"],"digests":["sha256:978f93131c1e466a687d9909909373e159ec21fcf29fb127c57f4a8faea5db51"]},{"image_id":"sha256:77ae17b5fb4f9fe99c0a91d4b092eaa6fdab4cc9aa0230806846cba1031facdb","names":["mcr.microsoft.com/oss/kubernetes/autoscaler/addon-resizer:1.8.5"]},{"image_id":"sha256:0d88211ed303ad62c319666fcea6ebb7c68eccc67a28e6cf429b39e2e048c35d","names":["mcr.microsoft.com/oss/calico/pod2daemon-flexvol:v3.23.3"]},{"image_id":"sha256:846921f0fe0e57df9e4d4961c0c4af481bf545966b5f61af68e188837363530e","names":["mcr.microsoft.com/oss/kubernetes/defaultbackend:1.4"]},{"image_id":"sha256:afa50b5f5d252c3b77c003e6ddfcddba856c9c955e13052d532149026df1235e","names":["mcr.microsoft.com/aks/aks-node-ca-watcher:master.221011.1","mcr.microsoft.com/aks/aks-node-ca-watcher:static"]},{"image_id":"sha256:9311829ca226782807f6a875db2d2c3edb256c44d163e3b63b582d7dec1a8967","names":["mcr.microsoft.com/oss/calico/typha:v3.8.9"]},{"image_id":"sha256:cfb93326278fdad0d75931baef3412cb5eb59ab292bd993d50a448ef2e5eadac","names":["mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-proportional-autoscaler:1.8.3"]},{"image_id":"sha256:3947978e85e29af55962247c5b9862aae1852e823498f28f9578a967c9e4c7e7","names":["mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.23.30"]},{"image_id":"sha256:7575dd615282b8ee5d60808860bc159faea44b042530b0235ca9d70c1bbd1ff9","names":["mcr.microsoft.com/oss/
kubernetes/azure-cloud-node-manager:v1.26.7"]},{"image_id":"sha256:c6c3fb974fc37746dfd976a701da1961988e3012a9bbd2e1aec01c8e024b701f","names":["mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.24.17"]},{"image_id":"sha256:118423f7cc377517e6c27a2ed0f14c94356d9454f6f274de50364f94700f47d0","names":["mcr.microsoft.com/oss/kubernetes/ip-masq-agent:v2.5.0.12"]}]}
The actual deletion is carried out by the eraser, and a log like the following is left.
jacob@laptop:~ $ kubectl logs collector-nodepool1-36305176-dw672 -n eraser-system -c eraser
{"level":"info","ts":1682311310.1763678,"logger":"eraser","msg":"successfully created imagelist from scanned non-compliant images"}
{"level":"info","ts":1682311310.1991806,"logger":"eraser","msg":"removed image","given":"sha256:6d1ef012b5674ad8a127ecfa9b5e6f5178d171b90ee462846974177fd9bdd39f","imageID":"sha256:6d1ef012b5674ad8a127ecfa9b5e6f5178d171b90ee462846974177fd9bdd39f","name":{"image_id":"sha256:6d1ef012b5674ad8a127ecfa9b5e6f5178d171b90ee462846974177fd9bdd39f","names":["docker.io/library/alpine:3.7.3"],"digests":["sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10"]}}
imagelist
Pull an arbitrary busybox image as below, then let's remove it.
root@nodepool1-36305176:/# crictl pull busybox
Image is up to date for sha256:7cfbbec8963d8f13e6c70416d6592e1cc10f47a348131290a55d43c3acab3fb9
root@nodepool1-36305176:/# crictl images | grep busybox
mcr.microsoft.com/mirror/docker/library/busybox 1.35 12b6f68a826b2 2.59MB
Then change the manualremoval.yaml file shown earlier as follows,
jacob@laptop:~ $ cat manualremoval.yaml
apiVersion: eraser.sh/v1
kind: ImageList
metadata:
  name: imagelist
spec:
  images:
    - mcr.microsoft.com/mirror/docker/library/busybox
and when it is applied, only the eraser pod runs and the deletion proceeds.
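After the eraser pod completes, checking the node again should show the busybox image gone (the same crictl check as before, now returning nothing):
root@nodepool1-36305176:/# crictl images | grep busybox
root@nodepool1-36305176:/#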
scanner disable
When the scanner is enabled, the per-node pod's overall resource request gets fairly high (the scanner alone is configured with a 1000m CPU request above), and as shown below the pods can even fail to schedule with OutOfcpu. If the scanner is not needed, it is fine to disable it, so let's go over how to change that setting.
jacob@laptop:~ $ kubectl get po -n eraser-system -l name=collector
NAME READY STATUS RESTARTS AGE
collector-nodepool1-36305176 0/3 OutOfcpu 0 3m55s
collector-nodepool2-34435815 0/3 OutOfcpu 0 3m55s
jacob@laptop:~ $ kubectl get pod collector-nodepool1-36305176-dw672 -n eraser-system -o jsonpath='{.spec.containers[].resources}' | jq
{
  "limits": {
    "memory": "500Mi"
  },
  "requests": {
    "cpu": "7m",
    "memory": "25Mi"
  }
}
In the controller_manager_config.yaml entry defined under the configmap's data,
jacob@laptop:~ $ kubectl get cm eraser-manager-config -n eraser-system -o jsonpath='{.data.controller_manager_config\.yaml}'
apiVersion: eraser.sh/v1alpha1
kind: EraserConfig
...
components:
  ...
  scanner:
    enabled: false
change it to false as above, then restart the deployment.
jacob@laptop:~ $ kubectl rollout restart deploy -n eraser-system -l control-plane=controller-manager
deployment.apps/eraser-controller-manager restarted
when images are not deleted even with the scanner enabled
There were cases where images were not deleted even though the scanner had completed its vulnerability analysis. On closer inspection, running images are never deleted, and only images with CRITICAL vulnerabilities are configured for deletion.
Also, for some reason, when the scanner was enabled, unused images that should have been deleted were not removed either.
(I plan to look into the cause of this further.)
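To dig into cases like this, the trivy-scanner container's logs can be checked the same way as the collector/eraser logs above (same pod name as the earlier run):
jacob@laptop:~ $ kubectl logs collector-nodepool1-36305176-dw672 -n eraser-system -c trivy-scanner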
metric
As described in the linked documentation, metrics such as the count of removed images are exposed (in exporter form).
Below is the result of actually querying the metrics.
root@nginx-sample-769445df77-8t7tv:/# curl 10.1.1.40:8889/metrics
# HELP certwatcher_read_certificate_errors_total Total number of certificate read errors
# TYPE certwatcher_read_certificate_errors_total counter
certwatcher_read_certificate_errors_total 0
# HELP certwatcher_read_certificate_total Total number of certificate reads
# TYPE certwatcher_read_certificate_total counter
certwatcher_read_certificate_total 0
# HELP controller_runtime_active_workers Number of currently used workers per controller
# TYPE controller_runtime_active_workers gauge
controller_runtime_active_workers{controller="imagecollector-controller"} 0
controller_runtime_active_workers{controller="imagejob-controller"} 0
controller_runtime_active_workers{controller="imagelist-controller"} 0
# HELP controller_runtime_max_concurrent_reconciles Maximum number of concurrent reconciles per controller
# TYPE controller_runtime_max_concurrent_reconciles gauge
controller_runtime_max_concurrent_reconciles{controller="imagecollector-controller"} 1
controller_runtime_max_concurrent_reconciles{controller="imagejob-controller"} 1
controller_runtime_max_concurrent_reconciles{controller="imagelist-controller"} 1
# HELP controller_runtime_reconcile_errors_total Total number of reconciliation errors per controller
# TYPE controller_runtime_reconcile_errors_total counter
controller_runtime_reconcile_errors_total{controller="imagecollector-controller"} 0
controller_runtime_reconcile_errors_total{controller="imagejob-controller"} 7
controller_runtime_reconcile_errors_total{controller="imagelist-controller"} 0
# HELP controller_runtime_reconcile_time_seconds Length of time per reconciliation per controller
# TYPE controller_runtime_reconcile_time_seconds histogram
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.005"} 1
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.01"} 8
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.025"} 10
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.05"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.1"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.15"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.2"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.25"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.3"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.35"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.4"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.45"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.5"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.6"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.7"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.8"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="0.9"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="1"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="1.25"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="1.5"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="1.75"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="2"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="2.5"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="3"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="3.5"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="4"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="4.5"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="5"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="6"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="7"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="8"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="9"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="10"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="15"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="20"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="25"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="30"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="40"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="50"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="60"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagecollector-controller",le="+Inf"} 12
controller_runtime_reconcile_time_seconds_sum{controller="imagecollector-controller"} 0.15582324399999997
controller_runtime_reconcile_time_seconds_count{controller="imagecollector-controller"} 12
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.005"} 34
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.01"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.025"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.05"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.1"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.15"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.2"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.25"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.3"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.35"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.4"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.45"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.5"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.6"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.7"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.8"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="0.9"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="1"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="1.25"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="1.5"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="1.75"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="2"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="2.5"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="3"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="3.5"} 38
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="4"} 39
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="4.5"} 40
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="5"} 41
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="6"} 41
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="7"} 41
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="8"} 41
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="9"} 41
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="10"} 41
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="15"} 41
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="20"} 42
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="25"} 42
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="30"} 42
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="40"} 42
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="50"} 42
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="60"} 42
controller_runtime_reconcile_time_seconds_bucket{controller="imagejob-controller",le="+Inf"} 42
controller_runtime_reconcile_time_seconds_sum{controller="imagejob-controller"} 31.153467052999996
controller_runtime_reconcile_time_seconds_count{controller="imagejob-controller"} 42
# HELP controller_runtime_reconcile_total Total number of reconciliations per controller
# TYPE controller_runtime_reconcile_total counter
controller_runtime_reconcile_total{controller="imagecollector-controller",result="error"} 0
controller_runtime_reconcile_total{controller="imagecollector-controller",result="requeue"} 0
controller_runtime_reconcile_total{controller="imagecollector-controller",result="requeue_after"} 4
controller_runtime_reconcile_total{controller="imagecollector-controller",result="success"} 8
controller_runtime_reconcile_total{controller="imagejob-controller",result="error"} 7
controller_runtime_reconcile_total{controller="imagejob-controller",result="requeue"} 0
controller_runtime_reconcile_total{controller="imagejob-controller",result="requeue_after"} 0
controller_runtime_reconcile_total{controller="imagejob-controller",result="success"} 35
controller_runtime_reconcile_total{controller="imagelist-controller",result="error"} 0
controller_runtime_reconcile_total{controller="imagelist-controller",result="requeue"} 0
controller_runtime_reconcile_total{controller="imagelist-controller",result="requeue_after"} 0
controller_runtime_reconcile_total{controller="imagelist-controller",result="success"} 0
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.12e-05
go_gc_duration_seconds{quantile="0.25"} 8.16e-05
go_gc_duration_seconds{quantile="0.5"} 0.000116802
go_gc_duration_seconds{quantile="0.75"} 0.000160001
go_gc_duration_seconds{quantile="1"} 0.009723439
go_gc_duration_seconds_sum 0.388291271
go_gc_duration_seconds_count 765
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 102
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.19.6"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 6.091784e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 3.378576152e+09
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.475359e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 1.7473738e+07
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 9.82904e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 6.091784e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 6.782976e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 8.585216e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 28838
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 3.145728e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 1.5368192e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.6823421339081361e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 1.7502576e+07
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 2400
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 15600
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 126432
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 227808
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 1.0182808e+07
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 622697
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 1.409024e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 1.409024e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 2.894772e+07
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 9
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 35.21
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 12
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 4.7345664e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.68234153048e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 7.72251648e+08
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
# HELP rest_client_requests_total Number of HTTP requests, partitioned by status code, method, and host.
# TYPE rest_client_requests_total counter
rest_client_requests_total{code="200",host="10.0.0.1:443",method="DELETE"} 5
rest_client_requests_total{code="200",host="10.0.0.1:443",method="GET"} 62
rest_client_requests_total{code="200",host="10.0.0.1:443",method="PUT"} 12
rest_client_requests_total{code="201",host="10.0.0.1:443",method="POST"} 16
# HELP workqueue_adds_total Total number of adds handled by workqueue
# TYPE workqueue_adds_total counter
workqueue_adds_total{name="imagecollector-controller"} 12
workqueue_adds_total{name="imagejob-controller"} 42
workqueue_adds_total{name="imagelist-controller"} 0
# HELP workqueue_depth Current depth of workqueue
# TYPE workqueue_depth gauge
workqueue_depth{name="imagecollector-controller"} 0
workqueue_depth{name="imagejob-controller"} 0
workqueue_depth{name="imagelist-controller"} 0
# HELP workqueue_longest_running_processor_seconds How many seconds has the longest running processor for workqueue been running.
# TYPE workqueue_longest_running_processor_seconds gauge
workqueue_longest_running_processor_seconds{name="imagecollector-controller"} 0
workqueue_longest_running_processor_seconds{name="imagejob-controller"} 0
workqueue_longest_running_processor_seconds{name="imagelist-controller"} 0
# HELP workqueue_queue_duration_seconds How long in seconds an item stays in workqueue before being requested
# TYPE workqueue_queue_duration_seconds histogram
workqueue_queue_duration_seconds_bucket{name="imagecollector-controller",le="1e-08"} 0
workqueue_queue_duration_seconds_bucket{name="imagecollector-controller",le="1e-07"} 0
workqueue_queue_duration_seconds_bucket{name="imagecollector-controller",le="1e-06"} 0
workqueue_queue_duration_seconds_bucket{name="imagecollector-controller",le="9.999999999999999e-06"} 4
workqueue_queue_duration_seconds_bucket{name="imagecollector-controller",le="9.999999999999999e-05"} 9
workqueue_queue_duration_seconds_bucket{name="imagecollector-controller",le="0.001"} 11
workqueue_queue_duration_seconds_bucket{name="imagecollector-controller",le="0.01"} 11
workqueue_queue_duration_seconds_bucket{name="imagecollector-controller",le="0.1"} 11
workqueue_queue_duration_seconds_bucket{name="imagecollector-controller",le="1"} 12
workqueue_queue_duration_seconds_bucket{name="imagecollector-controller",le="10"} 12
workqueue_queue_duration_seconds_bucket{name="imagecollector-controller",le="+Inf"} 12
workqueue_queue_duration_seconds_sum{name="imagecollector-controller"} 0.30319366
workqueue_queue_duration_seconds_count{name="imagecollector-controller"} 12
workqueue_queue_duration_seconds_bucket{name="imagejob-controller",le="1e-08"} 0
workqueue_queue_duration_seconds_bucket{name="imagejob-controller",le="1e-07"} 0
workqueue_queue_duration_seconds_bucket{name="imagejob-controller",le="1e-06"} 0
workqueue_queue_duration_seconds_bucket{name="imagejob-controller",le="9.999999999999999e-06"} 21
workqueue_queue_duration_seconds_bucket{name="imagejob-controller",le="9.999999999999999e-05"} 35
workqueue_queue_duration_seconds_bucket{name="imagejob-controller",le="0.001"} 36
workqueue_queue_duration_seconds_bucket{name="imagejob-controller",le="0.01"} 36
workqueue_queue_duration_seconds_bucket{name="imagejob-controller",le="0.1"} 36
workqueue_queue_duration_seconds_bucket{name="imagejob-controller",le="1"} 37
workqueue_queue_duration_seconds_bucket{name="imagejob-controller",le="10"} 40
workqueue_queue_duration_seconds_bucket{name="imagejob-controller",le="+Inf"} 42
workqueue_queue_duration_seconds_sum{name="imagejob-controller"} 49.248795437999995
workqueue_queue_duration_seconds_count{name="imagejob-controller"} 42
workqueue_queue_duration_seconds_bucket{name="imagelist-controller",le="1e-08"} 0
workqueue_queue_duration_seconds_bucket{name="imagelist-controller",le="1e-07"} 0
workqueue_queue_duration_seconds_bucket{name="imagelist-controller",le="1e-06"} 0
workqueue_queue_duration_seconds_bucket{name="imagelist-controller",le="9.999999999999999e-06"} 0
workqueue_queue_duration_seconds_bucket{name="imagelist-controller",le="9.999999999999999e-05"} 0
workqueue_queue_duration_seconds_bucket{name="imagelist-controller",le="0.001"} 0
workqueue_queue_duration_seconds_bucket{name="imagelist-controller",le="0.01"} 0
workqueue_queue_duration_seconds_bucket{name="imagelist-controller",le="0.1"} 0
workqueue_queue_duration_seconds_bucket{name="imagelist-controller",le="1"} 0
workqueue_queue_duration_seconds_bucket{name="imagelist-controller",le="10"} 0
workqueue_queue_duration_seconds_bucket{name="imagelist-controller",le="+Inf"} 0
workqueue_queue_duration_seconds_sum{name="imagelist-controller"} 0
workqueue_queue_duration_seconds_count{name="imagelist-controller"} 0
# HELP workqueue_retries_total Total number of retries handled by workqueue
# TYPE workqueue_retries_total counter
workqueue_retries_total{name="imagecollector-controller"} 4
workqueue_retries_total{name="imagejob-controller"} 7
workqueue_retries_total{name="imagelist-controller"} 0
# HELP workqueue_unfinished_work_seconds How many seconds of work has been done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.
# TYPE workqueue_unfinished_work_seconds gauge
workqueue_unfinished_work_seconds{name="imagecollector-controller"} 0
workqueue_unfinished_work_seconds{name="imagejob-controller"} 0
workqueue_unfinished_work_seconds{name="imagelist-controller"} 0
# HELP workqueue_work_duration_seconds How long in seconds processing an item from workqueue takes.
# TYPE workqueue_work_duration_seconds histogram
workqueue_work_duration_seconds_bucket{name="imagecollector-controller",le="1e-08"} 0
workqueue_work_duration_seconds_bucket{name="imagecollector-controller",le="1e-07"} 0
workqueue_work_duration_seconds_bucket{name="imagecollector-controller",le="1e-06"} 0
workqueue_work_duration_seconds_bucket{name="imagecollector-controller",le="9.999999999999999e-06"} 0
workqueue_work_duration_seconds_bucket{name="imagecollector-controller",le="9.999999999999999e-05"} 0
workqueue_work_duration_seconds_bucket{name="imagecollector-controller",le="0.001"} 0
workqueue_work_duration_seconds_bucket{name="imagecollector-controller",le="0.01"} 8
workqueue_work_duration_seconds_bucket{name="imagecollector-controller",le="0.1"} 12
workqueue_work_duration_seconds_bucket{name="imagecollector-controller",le="1"} 12
workqueue_work_duration_seconds_bucket{name="imagecollector-controller",le="10"} 12
workqueue_work_duration_seconds_bucket{name="imagecollector-controller",le="+Inf"} 12
workqueue_work_duration_seconds_sum{name="imagecollector-controller"} 0.155986944
workqueue_work_duration_seconds_count{name="imagecollector-controller"} 12
workqueue_work_duration_seconds_bucket{name="imagejob-controller",le="1e-08"} 0
workqueue_work_duration_seconds_bucket{name="imagejob-controller",le="1e-07"} 0
workqueue_work_duration_seconds_bucket{name="imagejob-controller",le="1e-06"} 0
workqueue_work_duration_seconds_bucket{name="imagejob-controller",le="9.999999999999999e-06"} 0
workqueue_work_duration_seconds_bucket{name="imagejob-controller",le="9.999999999999999e-05"} 14
workqueue_work_duration_seconds_bucket{name="imagejob-controller",le="0.001"} 33
workqueue_work_duration_seconds_bucket{name="imagejob-controller",le="0.01"} 38
workqueue_work_duration_seconds_bucket{name="imagejob-controller",le="0.1"} 38
workqueue_work_duration_seconds_bucket{name="imagejob-controller",le="1"} 38
workqueue_work_duration_seconds_bucket{name="imagejob-controller",le="10"} 41
workqueue_work_duration_seconds_bucket{name="imagejob-controller",le="+Inf"} 42
workqueue_work_duration_seconds_sum{name="imagejob-controller"} 31.154016055999996
workqueue_work_duration_seconds_count{name="imagejob-controller"} 42
workqueue_work_duration_seconds_bucket{name="imagelist-controller",le="1e-08"} 0
workqueue_work_duration_seconds_bucket{name="imagelist-controller",le="1e-07"} 0
workqueue_work_duration_seconds_bucket{name="imagelist-controller",le="1e-06"} 0
workqueue_work_duration_seconds_bucket{name="imagelist-controller",le="9.999999999999999e-06"} 0
workqueue_work_duration_seconds_bucket{name="imagelist-controller",le="9.999999999999999e-05"} 0
workqueue_work_duration_seconds_bucket{name="imagelist-controller",le="0.001"} 0
workqueue_work_duration_seconds_bucket{name="imagelist-controller",le="0.01"} 0
workqueue_work_duration_seconds_bucket{name="imagelist-controller",le="0.1"} 0
workqueue_work_duration_seconds_bucket{name="imagelist-controller",le="1"} 0
workqueue_work_duration_seconds_bucket{name="imagelist-controller",le="10"} 0
workqueue_work_duration_seconds_bucket{name="imagelist-controller",le="+Inf"} 0
workqueue_work_duration_seconds_sum{name="imagelist-controller"} 0
workqueue_work_duration_seconds_count{name="imagelist-controller"} 0
references