Rook

Jacob_baek 2020. 12. 14. 16:50

What is Rook?

Rook is an open-source, cloud-native storage orchestrator.
Put simply, it lets you run Ceph on top of Kubernetes and manage it through Kubernetes.

How Rook works

Rook supports the following storage providers:

  • Ceph : stable/v1
  • Cassandra : alpha
  • CockroachDB : alpha
  • EdgeFS : deprecated
  • NFS : alpha
  • YugabyteDB : alpha

The list above is based on the Rook provider list; check the upstream documentation for the latest status.

Things to know before using Rook

Prerequisites
A Kubernetes cluster must already exist (v1.11 or later).

Ceph prerequisites: https://github.com/rook/rook/blob/master/Documentation/ceph-prerequisites.md

Disk configuration

Setting up rook-ceph

Let's set up rook-ceph.
As mentioned above, a Kubernetes cluster is required, so the steps below assume one is already in place.
My test environment was Kubernetes 1.19.2, deployed with kubespray.

  1. Preparation

    Clone the Rook source code.

    [root@deploy ~]# git clone --single-branch --branch release-1.5 https://github.com/rook/rook.git
    [root@deploy ~]# cd rook

    Create the base CRDs, operator, and other default resources from the examples directory in the cloned source.

    [root@deploy rook]# cd cluster/examples/kubernetes/ceph/
    [root@deploy ceph]# kubectl create -f crds.yaml -f common.yaml -f operator.yaml

    The rook-ceph-operator deployment/pod is now created in the rook-ceph namespace.

    [root@deploy ceph]# kubectl get all -n rook-ceph
    NAME                                      READY   STATUS    RESTARTS   AGE
    pod/rook-ceph-operator-5f7785c597-xkb25   1/1     Running   0          2m6s
    
    NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/rook-ceph-operator   1/1     1            1           2m6s
    
    NAME                                            DESIRED   CURRENT   READY   AGE
    replicaset.apps/rook-ceph-operator-5f7785c597   1         1         1       2m6s
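
    Before moving on, it is worth making sure the operator pod is actually Ready. A minimal check, assuming the app=rook-ceph-operator label that the example operator deployment uses:

    [root@deploy ceph]# kubectl -n rook-ceph wait --for=condition=Ready pod -l app=rook-ceph-operator --timeout=300s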

    Note that the lvm2 package must be installed on every node that will run an OSD (CentOS 7 in my case).
    If it is missing, the OSD crashes with an error like the one below.

    Message:      binary "lvm" does not exist on the host, make sure lvm2 package is installed: binary "lvm" does not exist on the host
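
    Installing the package up front avoids the crash above; a quick sketch for CentOS 7 (run on every node that will host an OSD):

    [root@node1 ~]# yum install -y lvm2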

    Also check that no required ports are blocked by a firewall (or security group).
    (I mention this because a firewall once cost me a lot of debugging time.)
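
    For reference, the default Ceph ports are 3300 and 6789 for the monitors and 6800-7300 for the OSD/MGR daemons; a hedged firewalld example for CentOS 7 (adapt to your own firewall or security-group setup):

    [root@node1 ~]# firewall-cmd --permanent --add-port=3300/tcp --add-port=6789/tcp --add-port=6800-7300/tcp
    [root@node1 ~]# firewall-cmd --reload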

  2. Before deploying, note the existing node labels as shown below.

    [root@node1 ~]# kubectl get nodes --show-labels
    NAME    STATUS   ROLES    AGE    VERSION   LABELS
    node1   Ready    master   2d5h   v1.19.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux,node-role.kubernetes.io/master=
    node2   Ready    master   2d5h   v1.19.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux,node-role.kubernetes.io/master=
    node3   Ready    master   2d5h   v1.19.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node3,kubernetes.io/os=linux,node-role.kubernetes.io/master=

    As of rook v1.5.x the CSI components run on all nodes by default, so this check may be unnecessary;
    still, it is worth recording the labels in case settings from a previous installation are left over.

  3. Deploying the operator

    For reference:

    - name: ROOK_ENABLE_DISCOVERY_DAEMON
      value: "true"

    I assumed this was related to OSD installation, but in practice everything works with it set to false, so treat this setting as informational.
    (According to the description, it controls whether a dedicated discovery daemon is run; it is typically used in bare-metal environments.)

    Note that if ROOK_ENABLE_DISCOVERY_DAEMON is set to true, rook-discover pods like the ones below are started (rook-discover runs as a DaemonSet).

    NAME                                     READY   STATUS    RESTARTS   AGE
    pod/rook-ceph-operator-7d569f655-2cs98   1/1     Running   0          2m32s
    pod/rook-discover-vb4hf                  1/1     Running   0          2m30s
    pod/rook-discover-w7vnt                  1/1     Running   0          2m30s
    pod/rook-discover-zlgfg                  1/1     Running   0          2m30s

    Checking the log of one of the rook-discover pods shows output like the following:

    2020-12-18 05:25:13.732133 I | rook-discover: updating device configmap
    2020-12-18 05:25:13.747551 I | inventory: skipping device "sda" because it has child, considering the child instead.
    2020-12-18 05:25:13.796635 I | rook-discover: localdevices: "sda1, sdb, sdc, sdd"
    2020-12-18 05:25:13.796666 I | rook-discover: Getting ceph-volume inventory information
    2020-12-18 05:25:14.494806 I | sys: Output: NAME="sdb" SIZE="5368709120" TYPE="disk" PKNAME=""
    2020-12-18 05:25:14.494827 I | sys: Device found - sdb
    2020-12-18 05:25:14.500999 I | sys: Output: NAME="sdc" SIZE="5368709120" TYPE="disk" PKNAME=""
    2020-12-18 05:25:14.501015 I | sys: Device found - sdc
    2020-12-18 05:25:14.506861 I | sys: Output: NAME="sdd" SIZE="5368709120" TYPE="disk" PKNAME=""
    2020-12-18 05:25:14.506880 I | sys: Device found - sdd
    2020-12-18 05:25:14.511177 I | rook-discover: available devices: [{Name:sdb Parent: HasChildren:false DevLinks:/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2239e6f3-8259-422a-b /dev/disk/by-path/pci-0000:00:07.0-scsi-0:0:0:1 Size:5368709120 UUID:9fbe1464-dcab-4ac4-9181-9376ef43ec74 Serial:0QEMU_QEMU_HARDDISK_2239e6f3-8259-422a-b Type:disk Rotational:true Readonly:false Partitions:[] Filesystem: Vendor:QEMU Model:QEMU_HARDDISK WWN: WWNVendorExtension: Empty:true CephVolumeData:{"path":"/dev/sdb","available":true,"rejected_reasons":[],"sys_api":{"removable":"0","ro":"0","vendor":"QEMU","model":"QEMU HARDDISK","rev":"2.5+","sas_address":"","sas_device_handle":"","support_discard":"4096","rotational":"1","nr_requests":"128","scheduler_mode":"deadline","partitions":{},"sectors":0,"sectorsize":"512","size":5368709120.0,"human_readable_size":"5.00 GB","path":"/dev/sdb","locked":0},"lvs":[]} RealPath:/dev/sdb KernelName:sdb Encrypted:false} {Name:sdc Parent: HasChildren:false DevLinks:/dev/disk/by-path/pci-0000:00:07.0-scsi-0:0:0:2 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_8f7c4f2a-520c-462e-9 Size:5368709120 UUID:9a5a553f-ba73-47a6-9cc9-70e791e5c50f Serial:0QEMU_QEMU_HARDDISK_8f7c4f2a-520c-462e-9 Type:disk Rotational:true Readonly:false Partitions:[] Filesystem: Vendor:QEMU Model:QEMU_HARDDISK WWN: WWNVendorExtension: Empty:true CephVolumeData:{"path":"/dev/sdc","available":true,"rejected_reasons":[],"sys_api":{"removable":"0","ro":"0","vendor":"QEMU","model":"QEMU HARDDISK","rev":"2.5+","sas_address":"","sas_device_handle":"","support_discard":"4096","rotational":"1","nr_requests":"128","scheduler_mode":"deadline","partitions":{},"sectors":0,"sectorsize":"512","size":5368709120.0,"human_readable_size":"5.00 GB","path":"/dev/sdc","locked":0},"lvs":[]} RealPath:/dev/sdc KernelName:sdc Encrypted:false} {Name:sdd Parent: HasChildren:false DevLinks:/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2df8031d-4ff2-4b96-8 /dev/disk/by-path/pci-0000:00:07.0-scsi-0:0:0:3 Size:5368709120 UUID:f9ed3892-dadf-49aa-8c44-8d01f757ee55 Serial:0QEMU_QEMU_HARDDISK_2df8031d-4ff2-4b96-8 Type:disk Rotational:true Readonly:false Partitions:[] Filesystem: Vendor:QEMU Model:QEMU_HARDDISK WWN: WWNVendorExtension: Empty:true CephVolumeData:{"path":"/dev/sdd","available":true,"rejected_reasons":[],"sys_api":{"removable":"0","ro":"0","vendor":"QEMU","model":"QEMU HARDDISK","rev":"2.5+","sas_address":"","sas_device_handle":"","support_discard":"4096","rotational":"1","nr_requests":"128","scheduler_mode":"deadline","partitions":{},"sectors":0,"sectorsize":"512","size":5368709120.0,"human_readable_size":"5.00 GB","path":"/dev/sdd","locked":0},"lvs":[]} RealPath:/dev/sdd KernelName:sdd Encrypted:false}]

    As shown above, the discovery process was performed for the three devices.

    Now apply the base yamls to set up the environment.

    [root@node1 ~]# kubectl create -f common.yaml -f crds.yaml -f operator.yaml
    • common.yaml : RBAC
    • crds.yaml : custom resource definitions
    • operator.yaml : Operator

    Note: the operator can also be deployed with Helm.
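
    A rough sketch of the Helm route, using the chart repository published by the Rook project (check the chart version against your Rook release):

    [root@deploy ~]# helm repo add rook-release https://charts.rook.io/release
    [root@deploy ~]# helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph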

  4. Writing the yaml for the Ceph cluster
    Now for the most important part: configuring the actual Ceph cluster.
    The CRDs were created earlier; next we will turn the disks attached to the Kubernetes worker nodes into OSDs.
    For this we need to write a cluster.yaml.
    The target environment is on-prem, and the disks are attached with no partitions and no filesystem.
    (The environment recommended for a Rook deployment is described in detail at the link below, which I strongly recommend reading.)
    https://rook.io/docs/rook/v1.5/ceph-prerequisites.html#ceph-prerequisites

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: rook-config-override
      namespace: rook-ceph
    data:
      config: |
        [global]
        public network =  33.33.33.0/24
        cluster network = 44.44.44.0/24
        public addr = ""
        cluster addr = ""
    ---
    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph # namespace:cluster
    spec:
      cephVersion:
        image: ceph/ceph:v15.2.7
        allowUnsupported: false
      dataDirHostPath: /var/lib/rook
      skipUpgradeChecks: false
      continueUpgradeAfterChecksEvenIfNotHealthy: false
      mon:
        count: 3
        allowMultiplePerNode: false
      mgr:
        modules:
        - name: pg_autoscaler
          enabled: true
      dashboard:
        enabled: true
        ssl: false
      monitoring:
        enabled: false
        rulesNamespace: rook-ceph
      network:
        provider: host             <= set this when a specific interface should be used as the cluster/storage network (the ConfigMap above is required)
      crashCollector:
        disable: false
      cleanupPolicy:
        confirmation: ""
        sanitizeDisks:
          method: quick
          dataSource: zero
          iteration: 1
        allowUninstallWithVolumes: false
      annotations:
      labels:
      resources:
      removeOSDsIfOutAndSafeToRemove: false
      storage: # cluster level storage configuration and selection
        useAllNodes: true          <= set this to false if you add the nodes section below
        useAllDevices: true        <= set this to false if you add the deviceFilter setting below
    #    deviceFilter: "^sd[b-z]"  <= apply this filter to use only specific devices as OSDs
    #    nodes:                    <= add this section to specify each node's devices explicitly
    #    - name: "node1"
    #      devices:
    #      - name: "sdb"
    #      - name: "sdc"
    #      - name: "sdd"
    #    - name: "node2"
    #      devices:
    #      - name: "sdb"
    #      - name: "sdc"
    #      - name: "sdd"
    #    - name: "node3"
    #      devices:
    #      - name: "sdb"
    #      - name: "sdc"
    #      - name: "sdd"
    #    - name: "node4"
    #      devices:
    #      - name: "sdb"
    #      - name: "sdc"
    #      - name: "sdd"
    #      config: 
    #        storeType: bluestore
        config:
          databaseSizeMB: "1024"   <= these config options (databaseSizeMB through encryptedDevice) were added because the test disks were smaller than 20 GB
          journalSizeMB: "1024"
          osdsPerDevice: "1"
          encryptedDevice: "false"

    For reference, if the rook-config-override ConfigMap in the example above exists, it is used as the ceph.conf settings.
    Also, the network separation only applies to the OSD network; the monitors follow the Kubernetes node IPs.
    So if monitor IPs must be reachable from outside (e.g. for RBD clients), think through the network architecture once more.

  5. Creating the Ceph cluster

    Now apply the cluster.yaml written above.

    [root@jacobbaek-deploy yamls]# kubectl create -f cluster.yaml

    After running the command, a number of deployments are created, from the rbdplugin/cephfsplugin provisioners to mgr and mon.

    [root@jacobbaek-deploy ceph]# kubectl get deploy -n rook-ceph
    NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
    csi-cephfsplugin-provisioner     2/2     2            2           4h24m
    csi-rbdplugin-provisioner        2/2     2            2           4h24m
    rook-ceph-crashcollector-node1   1/1     1            1           4h15m
    rook-ceph-crashcollector-node2   1/1     1            1           4h23m
    rook-ceph-crashcollector-node3   1/1     1            1           4h16m
    rook-ceph-crashcollector-node4   1/1     1            1           4h16m
    rook-ceph-mgr-a                  1/1     1            1           4h16m
    rook-ceph-mon-a                  1/1     1            1           4h23m
    rook-ceph-mon-b                  1/1     1            1           4h16m
    rook-ceph-mon-c                  1/1     1            1           4h16m
    rook-ceph-operator               1/1     1            1           4h25m

    Next, jobs are created for the OSD nodes, and the OSD pods are created through those jobs.

    [root@jacobbaek-deploy yamls]# kubectl get job -n rook-ceph
    NAME                          COMPLETIONS   DURATION   AGE
    rook-ceph-osd-prepare-node1   0/1           17m        17m
    rook-ceph-osd-prepare-node2   0/1           17m        17m
    rook-ceph-osd-prepare-node3   0/1           17m        17m

    The rook-ceph-osd-prepare-[hostname] jobs then run, performing hardware discovery and ceph-volume commands to prepare the OSDs.

    2020-12-19 06:43:46.391009 I | cephosd: discovering hardware
    2020-12-19 06:43:46.391017 D | exec: Running command: lsblk --all --noheadings --list --output KNAME
    2020-12-19 06:43:46.398204 D | exec: Running command: lsblk /dev/sda --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
    2020-12-19 06:43:46.402192 D | exec: Running command: sgdisk --print /dev/sda
    2020-12-19 06:43:46.408527 D | exec: Running command: udevadm info --query=property /dev/sda
    2020-12-19 06:43:46.422510 D | exec: Running command: lsblk --noheadings --pairs /dev/sda
    2020-12-19 06:43:46.426276 I | inventory: skipping device "sda" because it has child, considering the child instead.
    2020-12-19 06:43:46.426301 D | exec: Running command: lsblk /dev/sda1 --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
    2020-12-19 06:43:46.431081 D | exec: Running command: udevadm info --query=property /dev/sda1
    2020-12-19 06:43:46.438037 D | exec: Running command: lsblk /dev/sdb --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
    2020-12-19 06:43:46.442867 D | exec: Running command: sgdisk --print /dev/sdb
    2020-12-19 06:43:46.454553 D | exec: Running command: udevadm info --query=property /dev/sdb
    2020-12-19 06:43:46.475252 D | exec: Running command: lsblk --noheadings --pairs /dev/sdb
    2020-12-19 06:43:46.483547 D | exec: Running command: lsblk /dev/sdc --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
    2020-12-19 06:43:46.487738 D | exec: Running command: sgdisk --print /dev/sdc
    2020-12-19 06:43:46.492637 D | exec: Running command: udevadm info --query=property /dev/sdc
    2020-12-19 06:43:46.498504 D | exec: Running command: lsblk --noheadings --pairs /dev/sdc
    2020-12-19 06:43:46.504605 D | exec: Running command: lsblk /dev/sdd --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME
    2020-12-19 06:43:46.506952 D | exec: Running command: sgdisk --print /dev/sdd
    2020-12-19 06:43:46.509728 D | exec: Running command: udevadm info --query=property /dev/sdd
    2020-12-19 06:43:46.515282 D | exec: Running command: lsblk --noheadings --pairs /dev/sdd
    2020-12-19 06:43:46.518787 D | inventory: discovered disks are [0xc0004bc480 0xc000187e60 0xc0004bc6c0 0xc0004bcb40]

    Finally the OSD pods come up and the rook-ceph setup is complete.

  6. Creating the toolbox and checking LVM
    The toolbox is a deployment from which you can run ceph commands directly.
    If anything goes wrong during operation, you can debug it through the toolbox pod.
    The rook examples provide a toolbox yaml, so deploy it with that.

    [root@jacobbaek-deploy ~]# kubectl create -f rook/cluster/examples/kubernetes/ceph/toolbox.yaml

    Ceph information can be inspected through the toolbox.

    [root@jacobbaek-deploy ~]# kubectl -n rook-ceph exec -it rook-ceph-tools-6f7467bb4d-rmxv6 -- bash
    [root@rook-ceph-tools-6f7467bb4d-rmxv6 /]# ceph -s
      cluster:
        id:     77105b93-e768-40b4-bd56-ce4f2111ec58
        health: HEALTH_OK
    
      services:
        mon: 3 daemons, quorum a,b,c (age 10m)
        mgr: a(active, since 9m)
        osd: 12 osds: 12 up (since 9m), 12 in (since 9m)
    
      data:
        pools:   1 pools, 1 pgs
        objects: 0 objects, 0 B
        usage:   12 GiB used, 132 GiB / 144 GiB avail
        pgs:     1 active+clean
    
    [root@rook-ceph-tools-6f7467bb4d-rmxv6 /]# ceph osd tree
    ID  CLASS  WEIGHT   TYPE NAME       STATUS  REWEIGHT  PRI-AFF
    -1         0.14026  root default                             
    -9         0.03506      host node1                           
     3    hdd  0.01169          osd.3       up   1.00000  1.00000
     7    hdd  0.01169          osd.7       up   1.00000  1.00000
    11    hdd  0.01169          osd.11      up   1.00000  1.00000
    -7         0.03506      host node2                           
     0    hdd  0.01169          osd.0       up   1.00000  1.00000
     4    hdd  0.01169          osd.4       up   1.00000  1.00000
     8    hdd  0.01169          osd.8       up   1.00000  1.00000
    -5         0.03506      host node3                           
     1    hdd  0.01169          osd.1       up   1.00000  1.00000
     6    hdd  0.01169          osd.6       up   1.00000  1.00000
    10    hdd  0.01169          osd.10      up   1.00000  1.00000
    -3         0.03506      host node4                           
     2    hdd  0.01169          osd.2       up   1.00000  1.00000
     5    hdd  0.01169          osd.5       up   1.00000  1.00000
     9    hdd  0.01169          osd.9       up   1.00000  1.00000

    Checking directly on a node, the disk layout looks like this:

    [root@node1 ~]# lsblk
    NAME                                                                MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sda                                                                   8:0    0  30G  0 disk 
    └─sda1                                                                8:1    0  30G  0 part /
    sdb                                                                   8:16   0  12G  0 disk 
    └─ceph--a4cd91ad--7cda--4fdd--89e5--9549f001c9f0-osd--data--6efc891d--5b97--451c--b2c1--7f9cb663c6df
                                                                        253:0    0  12G  0 lvm  
    sdc                                                                   8:32   0  12G  0 disk 
    └─ceph--3e9ed7ef--6b9b--44f3--a96e--ddc2e99940be-osd--data--3e8142b0--5c2f--4ad9--991d--ece1bd947915
                                                                        253:1    0  12G  0 lvm  
    sdd                                                                   8:48   0  12G  0 disk 
    └─ceph--0a52ca38--45c8--4865--9189--cb9136797b7d-osd--data--4282ec7c--7b43--4750--b7ce--2aeace89e96a
                                                                    253:2    0  12G  0 lvm  
  7. Adding a PVC to create a PV for an actual application

    The environment is now in place, so let's go from adding a StorageClass all the way through creating an application.

    Create a StorageClass so that PVs can be provisioned from PVCs.

    [root@jacobbaek-deploy ~]# cd rook/cluster/examples/kubernetes/ceph/csi/rbd/
    [root@jacobbaek-deploy rbd]# kubectl create -f storageclass.yaml
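
    For reference, the bundled storageclass.yaml defines both a CephBlockPool named replicapool and the rook-ceph-block StorageClass. A trimmed sketch of the pool portion (check the file in your release for the exact contents):

    apiVersion: ceph.rook.io/v1
    kind: CephBlockPool
    metadata:
      name: replicapool
      namespace: rook-ceph
    spec:
      failureDomain: host
      replicated:
        size: 3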

    Now create a PV using the new StorageClass.
    Write a simple PVC like the one below and apply it.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      storageClassName: rook-ceph-block
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi

    As shown below, the PVC and PV were created with the rook-ceph-block StorageClass.

    [root@jacobbaek-deploy yamls]# kubectl create -f test-pvc.yaml 
    persistentvolumeclaim/test-pvc created
    [root@jacobbaek-deploy yamls]# kubectl get pv,pvc
    NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS      REASON   AGE
    persistentvolume/pvc-f3bd2368-67f8-4051-901b-31770d4f1dc3   5Gi        RWO            Delete           Bound    default/test-pvc   rook-ceph-block            2s
    
    NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
    persistentvolumeclaim/test-pvc   Bound    pvc-f3bd2368-67f8-4051-901b-31770d4f1dc3   5Gi        RWO            rook-ceph-block   2s
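
    To actually consume the volume, mount the PVC in a pod. A minimal sketch (the pod name and image here are just placeholders for illustration):

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pvc-pod
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: test-pvc

    Once the pod is Running, the RBD-backed volume should be mounted at /data inside the container.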

    To check whether an RBD image matching the created PV exists, let's find the pool the StorageClass uses.

    [root@jacobbaek-deploy ~]# kubectl get sc/rook-ceph-block -o json
    {
        "allowVolumeExpansion": true,
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {
            "creationTimestamp": "2020-12-22T01:57:52Z",
            "managedFields": [
                {
                    "apiVersion": "storage.k8s.io/v1",
                    "fieldsType": "FieldsV1",
                    "fieldsV1": {
                        "f:allowVolumeExpansion": {},
                        "f:parameters": {
                            ".": {},
                            "f:clusterID": {},
                            "f:csi.storage.k8s.io/controller-expand-secret-name": {},
                            "f:csi.storage.k8s.io/controller-expand-secret-namespace": {},
                            "f:csi.storage.k8s.io/fstype": {},
                            "f:csi.storage.k8s.io/node-stage-secret-name": {},
                            "f:csi.storage.k8s.io/node-stage-secret-namespace": {},
                            "f:csi.storage.k8s.io/provisioner-secret-name": {},
                            "f:csi.storage.k8s.io/provisioner-secret-namespace": {},
                            "f:imageFeatures": {},
                            "f:imageFormat": {},
                            "f:pool": {}
                        },
                        "f:provisioner": {},
                        "f:reclaimPolicy": {},
                        "f:volumeBindingMode": {}
                    },
                    "manager": "kubectl-create",
                    "operation": "Update",
                    "time": "2020-12-22T01:57:52Z"
                }
            ],
            "name": "rook-ceph-block",
            "resourceVersion": "215873",
            "selfLink": "/apis/storage.k8s.io/v1/storageclasses/rook-ceph-block",
            "uid": "e00cdda1-e634-4bd1-a38e-1f4a628a3dd5"
        },
        "parameters": {
            "clusterID": "rook-ceph",
            "csi.storage.k8s.io/controller-expand-secret-name": "rook-csi-rbd-provisioner",
            "csi.storage.k8s.io/controller-expand-secret-namespace": "rook-ceph",
            "csi.storage.k8s.io/fstype": "ext4",
            "csi.storage.k8s.io/node-stage-secret-name": "rook-csi-rbd-node",
            "csi.storage.k8s.io/node-stage-secret-namespace": "rook-ceph",
            "csi.storage.k8s.io/provisioner-secret-name": "rook-csi-rbd-provisioner",
            "csi.storage.k8s.io/provisioner-secret-namespace": "rook-ceph",
            "imageFeatures": "layering",
            "imageFormat": "2",
            "pool": "replicapool"
        },
        "provisioner": "rook-ceph.rbd.csi.ceph.com",
        "reclaimPolicy": "Delete",
        "volumeBindingMode": "Immediate"
    }
    

    Check that the pool exists in Ceph itself,

    [root@rook-ceph-tools-6f7467bb4d-j7d58 /]# ceph osd pool ls
    device_health_metrics
    replicapool
    ceph-store.rgw.control
    ceph-store.rgw.meta
    ceph-store.rgw.log
    ceph-store.rgw.buckets.index
    ceph-store.rgw.buckets.non-ec
    .rgw.root
    ceph-store.rgw.buckets.data

    and the created RBD image can be seen as below.

    [root@rook-ceph-tools-6f7467bb4d-j7d58 /]# rbd ls -p replicapool
    csi-vol-a71a34d9-4417-11eb-ae9a-a23b8ff3fe3b
    [root@rook-ceph-tools-6f7467bb4d-j7d58 /]# rbd info csi-vol-a71a34d9-4417-11eb-ae9a-a23b8ff3fe3b -p replicapool
    rbd image 'csi-vol-a71a34d9-4417-11eb-ae9a-a23b8ff3fe3b':
        size 5 GiB in 1280 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 96c2ca30c0b7
        block_name_prefix: rbd_data.96c2ca30c0b7
        format: 2
        features: layering
        op_features: 
        flags: 
        create_timestamp: Tue Dec 22 05:36:34 2020
        access_timestamp: Tue Dec 22 05:36:34 2020
        modify_timestamp: Tue Dec 22 05:36:34 2020
    

Everything covered here is documented at the link below (the link is the authoritative reference).
Source: https://github.com/rook/rook/blob/master/Documentation/ceph-quickstart.md

Other information

The dashboard can be accessed with the following credentials.

# default userid: admin
# the password is stored in the dashboard secret
[root@jacobbaek-deploy ceph]# kubectl -n rook-ceph get secret/rook-ceph-dashboard-password -o jsonpath='{.data.password}' | base64 -d
96:LHBTe~MJm1uqpeuBe

The dashboard can be exposed via NodePort, LoadBalancer, Ingress, or whatever fits the Kubernetes environment it is installed in.
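
For example, the rook examples directory ships yamls such as dashboard-external-http.yaml that expose the dashboard as a NodePort service (file name as found in the 1.5 examples; a hand-written NodePort/Ingress pointing at the rook-ceph-mgr-dashboard service works just as well):

[root@jacobbaek-deploy ceph]# kubectl create -f dashboard-external-http.yaml
[root@jacobbaek-deploy ceph]# kubectl -n rook-ceph get svc | grep dashboard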

Troubleshooting

I hit a few issues while deploying rook-ceph, so I am summarizing them below.

OSDs are not being created

This was the issue I ran into most often: the OSD prepare job kept failing and no OSDs were created.

It does not seem to occur in the current version, but I am noting it briefly just in case.
It was an issue where OSDs would not run on the same node as a mon, and that restriction had to be removed.

There was also a disk size issue.
As mentioned earlier, the tests ran on VMs with disks of only 5 GB, which appears to have caused the problem.
(The root cause was never fully analyzed, so this needs more investigation, but after increasing the disk size to 10 GB or more the problem did not recur.)

Unable to use device 4.00 GB /dev/sdb, LVs would be smaller than 5GB

rook-ceph-crash-collector-keyring not found

This can occur when old data was not removed, or, on a first install, when a firewall blocks the traffic needed to fetch the keyring.

MountVolume.SetUp failed for volume "rook-ceph-crash-collector-keyring" : secret "rook-ceph-crash-collector-keyring" not found

So when re-deploying, I used the following script to clean up first and then redeployed.

#!/bin/bash

for node in {node1,node2,node3,node4};
do
    ssh $node rm -rf /var/lib/rook/*
    ssh $node rm -rf /var/lib/kubelet/plugins/rook-ceph*
    ssh $node rm -rf /var/lib/kubelet/plugins_registry/rook-ceph*
    echo "delete rook-ceph files in "$node
done

# If OSDs had already been deployed, the disks also need to be zapped.
# * https://github.com/rook/rook/blob/master/Documentation/ceph-teardown.md#zapping-devices
for node in {worker001,worker002,worker003};
do
    ssh $node << 'ENDSSH'
DISKS="/dev/vdb /dev/vdc /dev/vdd"
disks=($DISKS)

for disk in ${disks[@]}; do
  echo $disk
  sgdisk --zap-all $disk
  dd if=/dev/zero of="$disk" bs=1M count=100 oflag=direct,dsync
  blkdiscard $disk
done

ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
rm -rf /dev/ceph-*
ENDSSH
    echo "[INFO] zapping disks in "$node
done
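
Note that the teardown document linked above also recommends deleting the CephCluster CR (and the resources created from the example yamls) before wiping the hosts; roughly:

kubectl -n rook-ceph delete cephcluster rook-ceph
kubectl delete -f operator.yaml -f common.yaml -f crds.yaml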

There were also firewall-related problems at one point, so I set the firewall on each server (actually the security group, since this ran on VMs) to allow all traffic.

Configuration notes

SUSE publishes a best-practice guide that is worth consulting.

Inducing failures

Running on Kubernetes is supposed to make operations more flexible, and this is often presented as one of Rook's selling points, so let's trigger some failures and briefly see how rook-ceph actually behaves.

Rebooting a node

I rebooted one node (node4).
While the node was down, osd.2, osd.5, and osd.9, which had been running on node4, were restarted.

[root@jacobbaek-deploy centos]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME       STATUS  REWEIGHT  PRI-AFF
-1         0.14026  root default                             
-9         0.03506      host node1                           
 3    hdd  0.01169          osd.3       up   1.00000  1.00000
 7    hdd  0.01169          osd.7       up   1.00000  1.00000
11    hdd  0.01169          osd.11      up   1.00000  1.00000
-7         0.03506      host node2                           
 0    hdd  0.01169          osd.0       up   1.00000  1.00000
 4    hdd  0.01169          osd.4       up   1.00000  1.00000
 8    hdd  0.01169          osd.8       up   1.00000  1.00000
-5         0.03506      host node3                           
 1    hdd  0.01169          osd.1       up   1.00000  1.00000
 6    hdd  0.01169          osd.6       up   1.00000  1.00000
10    hdd  0.01169          osd.10      up   1.00000  1.00000
-3         0.03506      host node4                           
 2    hdd  0.01169          osd.2       up   1.00000  1.00000
 5    hdd  0.01169          osd.5       up   1.00000  1.00000
 9    hdd  0.01169          osd.9       up   1.00000  1.00000

You can see below that they were restarted (RESTARTS count of 1).

rook-ceph-crashcollector-node1-84d8d4d58c-d5p46   1/1     Running     0          37h
rook-ceph-crashcollector-node2-57c5b67fd-vcffh    1/1     Running     0          33h
rook-ceph-crashcollector-node3-658f7fb757-q96fd   1/1     Running     0          37h
rook-ceph-crashcollector-node4-75f46b5886-ph564   1/1     Running     1          37h
rook-ceph-mgr-a-64ddd85697-lnhl9                  1/1     Running     0          7m54s
rook-ceph-mon-a-8668f8db98-5ghsd                  1/1     Running     0          37h
rook-ceph-mon-b-86d47cd449-whts5                  1/1     Running     0          37h
rook-ceph-mon-c-579778bbd8-kn9l2                  1/1     Running     1          37h
rook-ceph-operator-88b89d9f4-9p6nd                1/1     Running     2          37h
rook-ceph-osd-0-85c665d589-zl6vv                  1/1     Running     0          37h
rook-ceph-osd-1-79cbd9bc8f-wr667                  1/1     Running     0          37h
rook-ceph-osd-10-647f454564-tpgsf                 1/1     Running     0          37h
rook-ceph-osd-11-74c9966dcc-2ns65                 1/1     Running     0          37h
rook-ceph-osd-2-9f6bcd6f-z6cfk                    1/1     Running     1          37h
rook-ceph-osd-3-64b4456d7d-lrjjg                  1/1     Running     0          37h
rook-ceph-osd-4-78db4c6b9-f8hsh                   1/1     Running     0          37h
rook-ceph-osd-5-5645fdd74b-966g5                  1/1     Running     1          37h
rook-ceph-osd-6-8549645d88-kkkwh                  1/1     Running     0          37h
rook-ceph-osd-7-6dc568dfcb-gmkpt                  1/1     Running     0          37h
rook-ceph-osd-8-775998886f-ms4l8                  1/1     Running     0          37h
rook-ceph-osd-9-6dc47d45fb-l96jg                  1/1     Running     1          37h
rook-ceph-osd-prepare-node1-92jzv                 0/1     Completed   0          5m22s
rook-ceph-osd-prepare-node2-8hwmz                 0/1     Completed   0          5m19s
rook-ceph-osd-prepare-node3-jspx2                 0/1     Completed   0          5m17s
rook-ceph-osd-prepare-node4-m29fp                 0/1     Completed   0          5m12s
rook-ceph-rgw-ceph-store-a-5fd787c4ff-snsc7       1/1     Running     0          33h
rook-ceph-tools-6f7467bb4d-rmxv6                  1/1     Running     0          36h

When node4 came back up after the reboot, the osd-prepare job ran again; since no new devices had been added, it completed while reporting that the existing ceph-volume configuration would be used as-is.

2020-12-23 14:00:13.577949 D | cephosd: desiredDevices are [{Name:all OSDsPerDevice:1 MetadataDevice: DatabaseSizeMB:0 DeviceClass: IsFilter:true IsDevicePathFilter:false}]
2020-12-23 14:00:13.577954 D | cephosd: context.Devices are [0xc00067afc0 0xc00041efc0 0xc00073e6c0 0xc00073ed80]
2020-12-23 14:00:13.577961 I | cephosd: skipping device "sda1" because it contains a filesystem "xfs"
2020-12-23 14:00:13.577964 I | cephosd: skipping 'dm' device "dm-0"
2020-12-23 14:00:13.577967 I | cephosd: skipping 'dm' device "dm-1"
2020-12-23 14:00:13.577969 I | cephosd: skipping 'dm' device "dm-2"
2020-12-23 14:00:13.578051 I | cephosd: configuring osd devices: {"Entries":{}}
2020-12-23 14:00:13.578060 I | cephosd: no new devices to configure. returning devices already configured with ceph-volume.
2020-12-23 14:00:13.578247 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm list  --format json

Adding a node

When a worker is added to Kubernetes, the rook-ceph-operator recognizes the new node (node5) as soon as it joins and proceeds to create and add OSDs on it.

2020-12-23 14:25:31.321328 I | ceph-cluster-controller: node watcher: adding node "node5" to cluster "rook-ceph"
2020-12-23 14:25:31.325682 I | ceph-cluster-controller: reconciling ceph cluster in namespace "rook-ceph"
2020-12-23 14:25:31.333364 I | op-mon: parsing mon endpoints: c=11.11.11.13:6789,a=11.11.11.11:6789,b=11.11.11.12:6789
2020-12-23 14:25:31.345836 I | ceph-cluster-controller: detecting the ceph image version for image ceph/ceph:v15.2.7...
2020-12-23 14:25:34.413773 I | ceph-cluster-controller: detected ceph image version: "15.2.7-0 octopus"
2020-12-23 14:25:34.413787 I | ceph-cluster-controller: validating ceph version from provided image
2020-12-23 14:25:34.420905 I | op-mon: parsing mon endpoints: c=11.11.11.13:6789,a=11.11.11.11:6789,b=11.11.11.12:6789

You can see that the time the node was added and the time the new OSDs (12, 13, 14) started running are almost identical.

[root@jacobbaek-deploy kubespray]# kubectl get nodes && kubectl get po -n rook-ceph
NAME    STATUS   ROLES    AGE     VERSION
node1   Ready    master   2d1h    v1.19.2
node2   Ready    master   2d1h    v1.19.2
node3   Ready    master   2d1h    v1.19.2
node4   Ready    <none>   2d1h    v1.19.2
node5   Ready    <none>   3m36s   v1.19.2
NAME                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-86v4l                            3/3     Running     0          37h
csi-cephfsplugin-gh5mw                            3/3     Running     0          3m3s
csi-cephfsplugin-j4nfm                            3/3     Running     0          37h
csi-cephfsplugin-pqrrm                            3/3     Running     3          37h
csi-cephfsplugin-provisioner-7dc78747bf-24t9p     6/6     Running     0          37h
csi-cephfsplugin-provisioner-7dc78747bf-cfrwp     6/6     Running     7          37h
csi-cephfsplugin-wv9zn                            3/3     Running     0          37h
csi-rbdplugin-2clhq                               3/3     Running     3          37h
csi-rbdplugin-b6ndx                               3/3     Running     0          37h
csi-rbdplugin-g6npt                               3/3     Running     0          3m3s
csi-rbdplugin-jckcc                               3/3     Running     0          37h
csi-rbdplugin-knxzk                               3/3     Running     0          37h
csi-rbdplugin-provisioner-54d48757b4-5mwk5        6/6     Running     7          37h
csi-rbdplugin-provisioner-54d48757b4-7zl86        6/6     Running     0          37h
rook-ceph-crashcollector-node1-84d8d4d58c-d5p46   1/1     Running     0          37h
rook-ceph-crashcollector-node2-57c5b67fd-vcffh    1/1     Running     0          33h
rook-ceph-crashcollector-node3-658f7fb757-q96fd   1/1     Running     0          37h
rook-ceph-crashcollector-node4-75f46b5886-ph564   1/1     Running     1          37h
rook-ceph-crashcollector-node5-775d98d6b7-k74pz   1/1     Running     0          90s
rook-ceph-mgr-a-64ddd85697-lnhl9                  1/1     Running     0          30m
rook-ceph-mon-a-8668f8db98-5ghsd                  1/1     Running     0          37h
rook-ceph-mon-b-86d47cd449-whts5                  1/1     Running     0          37h
rook-ceph-mon-c-579778bbd8-kn9l2                  1/1     Running     1          37h
rook-ceph-operator-88b89d9f4-9p6nd                1/1     Running     2          37h
rook-ceph-osd-0-85c665d589-zl6vv                  1/1     Running     0          37h
rook-ceph-osd-1-79cbd9bc8f-wr667                  1/1     Running     0          37h
rook-ceph-osd-10-647f454564-tpgsf                 1/1     Running     0          37h
rook-ceph-osd-11-74c9966dcc-2ns65                 1/1     Running     0          37h
rook-ceph-osd-12-5cb7cc797f-q7dlm                 1/1     Running     0          91s
rook-ceph-osd-13-58cf4fc559-98pkg                 1/1     Running     0          90s
rook-ceph-osd-14-5b9474b4f9-vs4vs                 1/1     Running     0          90s
rook-ceph-osd-2-9f6bcd6f-z6cfk                    1/1     Running     1          37h
rook-ceph-osd-3-64b4456d7d-lrjjg                  1/1     Running     0          37h
rook-ceph-osd-4-78db4c6b9-f8hsh                   1/1     Running     0          37h
rook-ceph-osd-5-5645fdd74b-966g5                  1/1     Running     1          37h
rook-ceph-osd-6-8549645d88-kkkwh                  1/1     Running     0          37h
rook-ceph-osd-7-6dc568dfcb-gmkpt                  1/1     Running     0          37h
rook-ceph-osd-8-775998886f-ms4l8                  1/1     Running     0          37h
rook-ceph-osd-9-6dc47d45fb-l96jg                  1/1     Running     1          37h
rook-ceph-osd-prepare-node1-8b2dm                 0/1     Completed   0          66s
rook-ceph-osd-prepare-node2-72d5b                 0/1     Completed   0          63s
rook-ceph-osd-prepare-node3-t8x2r                 0/1     Completed   0          60s
rook-ceph-osd-prepare-node4-v6q7h                 0/1     Completed   0          57s
rook-ceph-osd-prepare-node5-cbqn9                 0/1     Completed   0          54s
rook-ceph-rgw-ceph-store-a-5fd787c4ff-snsc7       1/1     Running     0          33h
rook-ceph-tools-6f7467bb4d-rmxv6                  1/1     Running     0          37h
