
During a kubespray upgrade, kubelet failed to start with the error below, causing the upgrade to fail.

Oct 24 00:35:14 master001 kubelet[2553253]: I1024 00:35:14.327902 2553253 kubelet_node_status.go:74] "Successfully registered node" node="master001"
Oct 24 00:35:14 master001 kubelet[2553253]: I1024 00:35:14.334714 2553253 kubelet_node_status.go:554] "Recording event message for node" node="master001" event="NodeNotSchedulable"
Oct 24 00:35:14 master001 kubelet[2553253]: E1024 00:35:14.384905 2553253 kubelet.go:1384] "Failed to start ContainerManager" err="failed to build map of initial containers from runtime: no PodsandBox found with Id '2859df5e96b864daafc0b5a462c57a3828e1bed9baacc1716186cd3b902ff848'"
Oct 24 00:35:14 master001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 24 00:35:14 master001 systemd[1]: kubelet.service: Failed with result 'exit-code'.
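The 64-character sandbox ID quoted in the error can be pulled straight out of the journal message. A minimal sketch, run here on the captured log line; in practice the input would come from `journalctl -u kubelet`:

```shell
# The "no PodsandBox found" error quotes a 64-hex-character sandbox ID;
# grep it out of the captured journal line.
msg="Failed to start ContainerManager: no PodsandBox found with Id '2859df5e96b864daafc0b5a462c57a3828e1bed9baacc1716186cd3b902ff848'"
printf '%s\n' "$msg" | grep -oE '[0-9a-f]{64}'
# → 2859df5e96b864daafc0b5a462c57a3828e1bed9baacc1716186cd3b902ff848
```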

Checking further, there was an exited kube-apiserver container with the same pod (sandbox) ID:

root@master001:~# crictl ps -a
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
0498e210bc6d7       22d1a2072ec7b       2 weeks ago         Running             kube-controller-manager   2                   73cc99261863a
2a7292d163864       38f903b540101       2 weeks ago         Running             kube-scheduler            0                   2afd956ce68c3
95a106807ea09       034671b24f0f1       2 weeks ago         Running             kube-apiserver            2                   0d9b1884bd601
463bf04f5bf66       22d1a2072ec7b       2 weeks ago         Exited              kube-controller-manager   1                   eebcd03564553
46277e72812d0       425ebe418b9b9       3 weeks ago         Running             speaker                   0                   2bb7be6d6f5ee
37e7dad652068       034671b24f0f1       6 weeks ago         Exited              kube-apiserver            1                   2859df5e96b86
0b2dad1fcf27d       21fc69048bd5d       2 months ago        Running             node-cache                0                   bd7ab0af36f46
ed3fd5ddfa9e1       4d9399da41dcc       2 months ago        Running             calico-node               0                   0c0190050574d
9b5ecd147473f       f3abd83bc819e       2 months ago        Exited              install-cni               0                   0c0190050574d
58251d95a064c       f3abd83bc819e       2 months ago        Exited              upgrade-ipam              0                   0c0190050574d
6aa1baff47a16       ff54c88b8ecfa       2 months ago        Running             kube-proxy                0                   3c7f6b42a7c9c
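Since `crictl ps -a` truncates the POD ID column to 13 characters, the stale container can be located by matching the prefix of the sandbox ID from the error. A small sketch; one captured output row stands in for a live `crictl ps -a` pipe:

```shell
# Sandbox ID taken from the kubelet error above.
SANDBOX_ID=2859df5e96b864daafc0b5a462c57a3828e1bed9baacc1716186cd3b902ff848

# crictl shows only the first 13 characters of the sandbox ID in the POD ID column.
prefix=$(printf '%s' "$SANDBOX_ID" | cut -c1-13)

# In practice the input would be `crictl ps -a`; a captured row is used here.
printf '%s\n' "37e7dad652068 034671b24f0f1 6 weeks ago Exited kube-apiserver 1 2859df5e96b86" \
  | awk -v sb="$prefix" '$NF == sb {print $1}'
# → 37e7dad652068
```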

After stopping and removing the container whose container ID corresponds to the issue:

root@master001:~# crictl stop 37e7dad652068
37e7dad652068
root@master001:~# crictl rm 37e7dad652068
37e7dad652068

Checking the kubelet status again confirmed that it was running normally.

