This post describes what happens when a rebuild is performed on a VM. The steps are listed below (a sketch of how a rebuild can be triggered follows the list), and each step is then traced through the nova-compute logs.
1. shutdown instance
2. device unplug
3. move old disk and delete
4. new block device mapping and create image
5. run spawning task
6. power sync
7. update info (e.g. instance_info_cache)
8. audit compute node resources
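For reference, a rebuild like the one traced below can be kicked off with the openstack CLI (`openstack server rebuild --image <image> <server>`) or through openstacksdk. The following is only a minimal sketch: the cloud, server, and image names are placeholders, and the exact keyword arguments accepted by `rebuild_server()` can differ between SDK releases.

```python
import openstack

# "mycloud", "my-vm" and "centos7" are placeholders for this sketch.
conn = openstack.connect(cloud='mycloud')

server = conn.compute.find_server('my-vm')
image = conn.compute.find_image('centos7')

# Ask Nova to rebuild the instance from the given image
# (keyword names may vary slightly between openstacksdk releases).
conn.compute.rebuild_server(server, image=image.id)

# Block until the instance reports ACTIVE again.
conn.compute.wait_for_server(server, status='ACTIVE')
```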
1. shutdown instance
2018-08-10 14:12:16.377 2387 INFO nova.virt.libvirt.driver [req-87576bc2-b7e5-4233-8f96-fb151ef4fdc3 6ea737b29dd24751938ad472548323b1 582b7dc7c54946c5960b3fa88ed5d1d2 - - -] [instance: 1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0] Instance failed to shutdown in 60 seconds.
2018-08-10 14:12:16.591 2387 INFO nova.virt.libvirt.driver [-] [instance: 1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0] Instance destroyed successfully.
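The driver first attempts a clean shutdown and, once the 60-second window above expires, destroys the domain. To watch this from outside Nova you can poll the domain state with libvirt-python; a minimal read-only sketch using the instance UUID from the log:

```python
import libvirt

# Read-only connection to the local hypervisor.
conn = libvirt.openReadOnly('qemu:///system')
try:
    dom = conn.lookupByUUIDString('1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0')
    state, reason = dom.state()
    # libvirt.VIR_DOMAIN_RUNNING == 1, libvirt.VIR_DOMAIN_SHUTOFF == 5
    print('domain state:', state, 'reason:', reason)
except libvirt.libvirtError:
    # The lookup fails once the domain has been destroyed and undefined.
    print('domain not found on this host')
finally:
    conn.close()
```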
2. device unplug
2018-08-10 14:12:16.763 2387 INFO os_vif [req-87576bc2-b7e5-4233-8f96-fb151ef4fdc3 6ea737b29dd24751938ad472548323b1 582b7dc7c54946c5960b3fa88ed5d1d2 - - -] Successfully unplugged vif VIFBridge(active=False,address=fa:16:3e:81:a8:b4,bridge_name='qbr8dbfeeff-e2',has_traffic_filtering=True,id=8dbfeeff-e21c-4b3d-b527-77845d2a998b,network=Network(d0068b89-fb82-4864-99ac-0caa105fd660),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tap8dbfeeff-e2')
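After os-vif unplugs the VIF, the instance's tap device and its qbr bridge disappear from the host. A quick way to confirm that on the compute node is to check sysfs for the device names shown in the log (a small sketch specific to this instance):

```python
import os

# Device names taken from the unplug log line above.
for dev in ('tap8dbfeeff-e2', 'qbr8dbfeeff-e2'):
    present = os.path.exists('/sys/class/net/' + dev)
    print(dev, 'present' if present else 'gone')
```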
3. move old disk and delete
2018-08-10 14:12:16.764 2387 DEBUG oslo_concurrency.processutils [req-87576bc2-b7e5-4233-8f96-fb151ef4fdc3 6ea737b29dd24751938ad472548323b1 582b7dc7c54946c5960b3fa88ed5d1d2 - - -] Running cmd (subprocess): mv /var/lib/nova/instances/1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0 /var/lib/nova/instances/1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0_del execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:355
2018-08-10 14:12:16.788 2387 DEBUG oslo_concurrency.processutils [req-87576bc2-b7e5-4233-8f96-fb151ef4fdc3 6ea737b29dd24751938ad472548323b1 582b7dc7c54946c5960b3fa88ed5d1d2 - - -] CMD "mv /var/lib/nova/instances/1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0 /var/lib/nova/instances/1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0_del" returned: 0 in 0.023s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:385
2018-08-10 14:12:16.790 2387 INFO nova.virt.libvirt.driver [req-87576bc2-b7e5-4233-8f96-fb151ef4fdc3 6ea737b29dd24751938ad472548323b1 582b7dc7c54946c5960b3fa88ed5d1d2 - - -] [instance: 1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0] Deleting instance files /var/lib/nova/instances/1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0_del
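So the per-instance directory is first renamed with a `_del` suffix and then removed. The same move-then-delete, expressed as a standalone sketch with the paths copied from the log:

```python
import shutil

base = '/var/lib/nova/instances/1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0'

# Rename the instance directory out of the way first, then remove the
# renamed copy (mirrors the "mv ... _del" plus delete seen in the log).
shutil.move(base, base + '_del')
shutil.rmtree(base + '_del')
```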
4. block device mapping and create image (the instance keeps the same UUID)
2018-08-10 14:12:17.218 2387 DEBUG nova.block_device [req-87576bc2-b7e5-4233-8f96-fb151ef4fdc3 6ea737b29dd24751938ad472548323b1 582b7dc7c54946c5960b3fa88ed5d1d2 - - -] block_device_list [] volume_in_mapping /usr/lib/python2.7/site-packages/nova/block_device.py:591
2018-08-10 14:12:17.222 2387 INFO nova.virt.libvirt.driver [req-87576bc2-b7e5-4233-8f96-fb151ef4fdc3 6ea737b29dd24751938ad472548323b1 582b7dc7c54946c5960b3fa88ed5d1d2 - - -] [instance: 1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0] Creating image
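Because the UUID is unchanged, the disk is recreated under the original instance path. One way to verify that the new root disk is backed by the rebuild image is to inspect it with qemu-img; a sketch assuming the usual file-backed qcow2 layout where the root disk is the file named `disk`:

```python
import json
import subprocess

disk = '/var/lib/nova/instances/1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0/disk'

# "qemu-img info --output=json" reports the backing file, which should be
# the cached _base image derived from the image used for the rebuild.
info = json.loads(subprocess.check_output(
    ['qemu-img', 'info', '--output=json', disk]))
print(info.get('backing-filename'))
```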
5. run spawning task (device plug and similar work done through libvirt)
2018-08-10 14:12:19.621 2387 DEBUG nova.virt.libvirt.driver [req-87576bc2-b7e5-4233-8f96-fb151ef4fdc3 6ea737b29dd24751938ad472548323b1 582b7dc7c54946c5960b3fa88ed5d1d2 - - -] [instance: 1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0] End _get_guest_xml xml=<domain type="kvm">
<uuid>1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0</uuid>
<name>instance-00000011</name>
<memory>1048576</memory>
<vcpu>1</vcpu>
...
2018-08-10 14:12:23.213 2387 INFO nova.virt.libvirt.driver [-] [instance: 1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0] Instance spawned successfully.
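Stripped of Nova's plumbing, the core of the spawn step is rendering the guest XML above and handing it to libvirt to define and start the domain. A rough sketch with libvirt-python (not the driver's actual code path; `guest.xml` is a placeholder for the full XML document that the log truncates):

```python
import libvirt

# guest.xml stands in for the full <domain> XML that _get_guest_xml
# rendered; the log above only shows its beginning.
with open('guest.xml') as f:
    guest_xml = f.read()

conn = libvirt.open('qemu:///system')
dom = conn.defineXML(guest_xml)  # persist the domain definition
dom.create()                     # power the domain on
conn.close()
```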
6. power sync
2018-08-10 14:12:23.635 2387 DEBUG nova.compute.manager [req-03fcc4f4-0bbe-4908-9081-0689809615de - - - - -] [instance: 1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python2.7/site-packages/nova/compute/manager.py:1085
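The lifecycle handler compares the power state recorded in the DB with what the hypervisor reports and only intervenes if they diverge; here both are 1, so nothing has to be corrected. The numeric values are the constants from `nova.compute.power_state`:

```python
# Constants as defined in nova/compute/power_state.py (for reference).
NOSTATE   = 0x00  # 0
RUNNING   = 0x01  # 1  <- the value shown in the log above
PAUSED    = 0x03  # 3
SHUTDOWN  = 0x04  # 4
CRASHED   = 0x06  # 6
SUSPENDED = 0x07  # 7
```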
7. update info
2018-08-10 14:12:25.203 2387 DEBUG nova.network.base_api [req-92971e03-fa55-4903-b7da-ba85d900202e - - - - -] [instance: 1e0e6ed9-67fd-4b64-98d6-7bde4f6f4fa0] Updating instance_info_cache with network_info: [{"profile": {}, "ovs_interfaceid": "8dbfeeff-e21c-4b3d-b527-77845d2a998b", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "10.10.10.13"}], "version": 4, "meta": {"dhcp_server": "10.10.10.2"}, "dns": [], "routes": [], "cidr": "10.10.10.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "10.10.10.1"}}], "meta": {"injected": false, "tenant_id": "582b7dc7c54946c5960b3fa88ed5d1d2", "mtu": 1450}, "id": "d0068b89-fb82-4864-99ac-0caa105fd660", "label": "internal_network"}, "devname": "tap8dbfeeff-e2", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "ovs_hybrid_plug": true}, "address": "fa:16:3e:81:a8:b4", "active": true, "type": "ovs", "id": "8dbfeeff-e21c-4b3d-b527-77845d2a998b", "qbg_params": null}] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:43
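The VIF stored in the instance_info_cache should line up with the corresponding Neutron port. A quick cross-check with openstacksdk, using the port ID from the cache entry above (the cloud name is a placeholder):

```python
import openstack

conn = openstack.connect(cloud='mycloud')  # placeholder cloud name

# Port ID taken from the network_info cache entry above.
port = conn.network.get_port('8dbfeeff-e21c-4b3d-b527-77845d2a998b')
print(port.mac_address, port.status,
      [ip['ip_address'] for ip in port.fixed_ips])
```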
8. audit the resources available on the compute node
2018-08-10 14:12:39.875 2387 DEBUG nova.compute.resource_tracker [req-92971e03-fa55-4903-b7da-ba85d900202e - - - - -] Auditing locally available compute resources for overcloud-compute-0.jacob-lab.com (node: overcloud-compute-0.jacob-lab.com) update_available_resource /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:534
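The usage the resource tracker records here is what the hypervisors API later reports. A hedged sketch for pulling those numbers with openstacksdk (attribute names follow recent SDK releases and may differ in older ones):

```python
import openstack

conn = openstack.connect(cloud='mycloud')  # placeholder cloud name

# Roughly the equivalent of "openstack hypervisor list/show" for the
# audited usage. Attribute names may differ between openstacksdk releases.
for hv in conn.compute.hypervisors(details=True):
    print(hv.name, 'vcpus used:', hv.vcpus_used,
          'memory used (MB):', hv.memory_used)
```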