By default, K3s uses flannel (with the VXLAN backend) as its CNI.
I wanted to use Calico instead, so this post summarizes installing it with the operator provided by Tigera, the company that offers commercial products built on Calico.
Installation
First, install K3s.
I was not yet comfortable with 1.20.x, so I installed a 1.19 release.
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.19.7+k3s1" K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--flannel-backend=none --cluster-cidr=192.168.0.0/16 --disable-network-policy --disable=traefik" sh -
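The flags above disable the bundled flannel backend, the network policy controller, and Traefik, and set the pod CIDR to 192.168.0.0/16 so it lines up with Calico's default IP pool. As a hedged alternative to passing everything through INSTALL_K3S_EXEC (not part of the original setup), the same options can be kept in the standard K3s config file, which k3s server reads automatically on start:
# Sketch: equivalent server options in /etc/rancher/k3s/config.yaml
sudo mkdir -p /etc/rancher/k3s
cat <<'EOF' | sudo tee /etc/rancher/k3s/config.yaml
flannel-backend: "none"
cluster-cidr: "192.168.0.0/16"
disable-network-policy: true
disable:
  - traefik
EOF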
With the K3s install alone, the pods remain in Pending state.
jacob@jacob-laptop:~$ k3s kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
metrics-server-7b4f8b595-lsmk4 0/1 Pending 0 3m12s <none> <none> <none> <none>
coredns-66c464876b-fkdxz 0/1 Pending 0 3m12s <none> <none> <none> <none>
local-path-provisioner-7ff9579c6-8zxh2 0/1 Pending 0 3m12s <none> <none> <none> <none>
This is because no CNI plugin is installed yet, so the node never becomes Ready and the pods cannot be scheduled.
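To confirm this, describing the node should show a NetworkReady=false condition until a CNI is in place (a quick check, not from the original session; the exact message wording depends on the kubelet version):
k3s kubectl describe node jacob-laptop | grep -i networkready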
To fix this, install the Calico operator and its pods.
jacob@jacob-laptop:~$ k3s kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
namespace/tigera-operator created
podsecuritypolicy.policy/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
jacob@jacob-laptop:~$ k3s kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
installation.operator.tigera.io/default created
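The custom-resources.yaml manifest creates a default Installation whose IP pool (192.168.0.0/16) matches the --cluster-cidr passed to K3s above. A couple of hedged ways to watch the rollout finish (standard kubectl usage, not from the original session):
watch k3s kubectl get pods -n calico-system
k3s kubectl get tigerastatus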
Once the Calico CNI is installed, the pods that were stuck in Pending switch to Running.
jacob@jacob-laptop:~$ k3s kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
tigera-operator tigera-operator-549bf46b5c-lfz2s 1/1 Running 0 20m
calico-system calico-typha-66686c5766-dc4pg 1/1 Running 0 20m
calico-system calico-node-h882l 1/1 Running 0 20m
kube-system metrics-server-7b4f8b595-lsmk4 1/1 Running 0 24m
kube-system local-path-provisioner-7ff9579c6-8zxh2 1/1 Running 0 24m
kube-system coredns-66c464876b-fkdxz 1/1 Running 0 24m
calico-system calico-kube-controllers-9ffd98979-v6pjx 1/1 Running 0 20m
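As a quick sanity check (hedged, not part of the original session), the non-host-network pods should now have IPs handed out from the 192.168.0.0/16 range configured above:
k3s kubectl get pods -A -o wide | grep 192.168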
For reference, using the Calico operator provided by Tigera creates the following namespaces.
jacob@jacob-laptop:~$ k3s kubectl get ns
NAME STATUS AGE
kube-system Active 12m
default Active 12m
kube-public Active 12m
kube-node-lease Active 12m
tigera-operator Active 8m27s
calico-system Active 8m10s
Looking at the processes actually running on the host shows the following.
jacob@jacob-laptop:~$ ps -ef | grep -E '[c]alico|[k]3s|[f]elix'
root 455741 1 16 11:09 ? 01:07:14 /usr/local/bin/k3s server
root 461565 1 0 11:14 ? 00:00:11 /var/lib/rancher/k3s/data/30740d1d67da51fe92b10367ecce4d580e552c634ad4a6c4dd13297ffd1f3edd/bin/containerd-shim-runc-v2 -namespace k8s.io -id 210e9e9c70aa745cdd472e81716f0eb866aba2e267fe32786be1ea75015b4c96 -address /run/k3s/containerd/containerd.sock
root 461955 1 0 11:14 ? 00:00:11 /var/lib/rancher/k3s/data/30740d1d67da51fe92b10367ecce4d580e552c634ad4a6c4dd13297ffd1f3edd/bin/containerd-shim-runc-v2 -namespace k8s.io -id b74deb12b486378722885dce8f5db31ed4983234a80ad12b270cc55d9f5776e8 -address /run/k3s/containerd/containerd.sock
root 462002 1 0 11:14 ? 00:00:46 /var/lib/rancher/k3s/data/30740d1d67da51fe92b10367ecce4d580e552c634ad4a6c4dd13297ffd1f3edd/bin/containerd-shim-runc-v2 -namespace k8s.io -id 43d170de64268354b69c1153724b91c9acfe74784387acdc5c6ce6bdb38730d4 -address /run/k3s/containerd/containerd.sock
999 462403 461955 0 11:14 ? 00:00:00 /sbin/tini -- calico-typha
999 462416 462403 0 11:14 ? 00:00:49 calico-typha
root 463143 463055 0 11:15 ? 00:00:00 runsv felix
root 463149 463143 3 11:15 ? 00:12:52 calico-node -felix
root 463150 463146 0 11:15 ? 00:00:06 calico-node -confd
root 463152 463144 0 11:15 ? 00:00:06 calico-node -monitor-addresses
root 463153 463148 0 11:15 ? 00:00:06 calico-node -allocate-tunnel-addrs
root 463431 463147 0 11:15 ? 00:00:08 bird6 -R -s /var/run/calico/bird6.ctl -d -c /etc/calico/confd/config/bird6.cfg
root 463432 463145 0 11:15 ? 00:00:08 bird -R -s /var/run/calico/bird.ctl -d -c /etc/calico/confd/config/bird.cfg
root 463488 1 0 11:15 ? 00:00:11 /var/lib/rancher/k3s/data/30740d1d67da51fe92b10367ecce4d580e552c634ad4a6c4dd13297ffd1f3edd/bin/containerd-shim-runc-v2 -namespace k8s.io -id ba973031db88dd61cdb29bf18bca77e509906d1d7ec5a0f30ec678490ca4bca6 -address /run/k3s/containerd/containerd.sock
root 463511 1 0 11:15 ? 00:00:11 /var/lib/rancher/k3s/data/30740d1d67da51fe92b10367ecce4d580e552c634ad4a6c4dd13297ffd1f3edd/bin/containerd-shim-runc-v2 -namespace k8s.io -id 8752ec15496f323b31679851f5d9b4ad3a0f1c6a92075dc756f15037306bc0d9 -address /run/k3s/containerd/containerd.sock
root 463586 1 0 11:15 ? 00:00:11 /var/lib/rancher/k3s/data/30740d1d67da51fe92b10367ecce4d580e552c634ad4a6c4dd13297ffd1f3edd/bin/containerd-shim-runc-v2 -namespace k8s.io -id a22a5f938224bb39db09ebdfec5e46326c743583e61a8307c1cfc49fb963bb3a -address /run/k3s/containerd/containerd.sock
root 464025 1 0 11:15 ? 00:00:41 /var/lib/rancher/k3s/data/30740d1d67da51fe92b10367ecce4d580e552c634ad4a6c4dd13297ffd1f3edd/bin/containerd-shim-runc-v2 -namespace k8s.io -id ccabfa521a448f041d70979dbe3fa9e800420bc91fb01b5f10c7e661020f2f30 -address /run/k3s/containerd/containerd.sock
You can also see that containerd is used as the container runtime.
jacob@jacob-laptop:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
jacob-laptop Ready master 6h49m v1.19.7+k3s1 172.16.254.2 <none> Ubuntu 20.04.1 LTS 5.4.0-66-generic containerd://1.4.3-k3s1
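Since K3s ships its own containerd, the running containers can also be inspected with the bundled crictl subcommand (a hedged example, not from the original session):
sudo k3s crictl ps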
Note
When adding nodes, the tigera-operator likewise needs to be installed in the same way.
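For reference, joining an agent node to this server uses the standard K3S_URL/K3S_TOKEN install; the server address and token below are placeholders (the token can be read from /var/lib/rancher/k3s/server/node-token on the server):
# Run on the new node; <server-ip> and <node-token> are placeholders.
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.19.7+k3s1" \
  K3S_URL="https://<server-ip>:6443" K3S_TOKEN="<node-token>" sh -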