
AkagiS commented on August 10, 2024

Seems like the problem is in vhd-tool.


rposudnevskiy commented on August 10, 2024

Hi,
Could you please add the line echo "$0 $@" > /tmp/vhd-tool.log at the beginning of the vhd-tool script? It should look like this:
#!/bin/bash
echo "$0 $@" > /tmp/vhd-tool.log
.....
Then try again.
What are the contents of the /tmp/vhd-tool.log file?
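
A side note on the redirection: > truncates /tmp/vhd-tool.log on every invocation, so if vhd-tool is called more than once only the last call survives. A minimal variant that appends instead, assuming repeated invocations are possible:

#!/bin/bash
# Append each invocation with a timestamp instead of overwriting (>> rather than >).
echo "$(date '+%F %T') $0 $@" >> /tmp/vhd-tool.log
.....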


AkagiS commented on August 10, 2024

Hi,

/bin/vhd-tool serve --source-format raw --source-protocol chunked --source-fd 7 --tar-filename-prefix --destination file:///dev/sm/backend/a07d3396-e781-63c0-85b4-87f34823e6e8/e87075c7-9648-4284-bab1-fbf673f4cc8c --destination-format raw --progress --machine --direct --prezeroed


rposudnevskiy commented on August 10, 2024

Hi,
Please try the latest version, df8828e.
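
For reference, a minimal update sketch, assuming RBDSR was installed from a git checkout (the path and install step below are assumptions; follow the repository's README for the actual procedure):

# Hypothetical checkout location of the RBDSR working copy.
cd /opt/RBDSR
git fetch origin
git checkout df8828e
# Afterwards, re-run the repository's install step as documented in its README.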


AkagiS commented on August 10, 2024

Hi,
Thanks.
Looks better, but it's still not fully working.
Everything now works as it should, up to the step of creating a VDI from the template.

In XenDesktop, this error appears:

Error Id: XDDS:4A5116C7

Exception:
Citrix.Console.PowerShellSdk.BackgroundTaskService.BackgroundTask.TaskTerminatedException Failed to create the virtual machine; DOMAIN\VDI-0015$.
at Citrix.Console.PowerShellSdk.ProvisioningSchemeService.BackgroundTasks.AnalyzeMachineProvisioningResultsTask.RunTask()
at Citrix.Console.PowerShellSdk.BackgroundTaskService.BackgroundTask.Task.Run()
at Citrix.Console.PowerShellSdk.BackgroundTaskService.BackgroundTask.Task.RunSubTasks()
at Citrix.Console.PowerShellSdk.ProvisioningSchemeService.BackgroundTasks.McsMachineCreationTask.RunTask()
at Citrix.Console.PowerShellSdk.BackgroundTaskService.BackgroundTask.Task.Run()
at Citrix.Console.PowerShellSdk.BackgroundTaskService.BackgroundTask.Task.RunSubTasks()
at Citrix.Console.PowerShellSdk.BackgroundTaskService.BackgroundTask.Task.Run()
at Citrix.Console.PowerShellSdk.ProvisioningSchemeService.Scripts.CreateMcsBasedDesktopCatalogScript.<>c__DisplayClass7_1.b__2()
at Citrix.Console.PowerShellInteraction.PowerShellScript`1.<>c__DisplayClass81_0.b__0()

DOMAIN\VDI-0015$ : [DOMAIN\VDI-0015$, Failed to create the virtual machine; DOMAIN\VDI-0015$.
Error Details
MachineFailure
 : DOMAIN\VDI-0015$Inner Error:
Failure in AssignDisktoVM, INTERNAL_ERROR, Storage_interface.Vdi_does_not_exist("b45517d2-768e-49d7-8c57-c7fe15c19d06")
Error Details
ErrorID
 : PluginUtilities.Exceptions.ManagedMachineGeneralExceptionTaskErrorInformation
 : PluginUtilities.Exceptions.ManagedMachineGeneralException: Failure in AssignDisktoVM, INTERNAL_ERROR, Storage_interface.Vdi_does_not_exist("b45517d2-768e-49d7-8c57-c7fe15c19d06") ---> PluginUtilities.Exceptions.ManagedMachineGeneralException: Failure in AssignDisktoVM, INTERNAL_ERROR, Storage_interface.Vdi_does_not_exist("b45517d2-768e-49d7-8c57-c7fe15c19d06") ---> PluginUtilities.Exceptions.WrappedPluginException: Internal error: Storage_interface.Vdi_does_not_exist("b45517d2-768e-49d7-8c57-c7fe15c19d06")
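
For what it's worth, two quick checks for the VDI named in the error (a sketch; substitute the uuid from your own error message):

# Does XAPI know about this VDI at all?
xe vdi-list uuid=b45517d2-768e-49d7-8c57-c7fe15c19d06 params=all
# Is there a matching RBD image in the pool?
rbd ls -l --pool RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a | grep b45517d2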

Created disks:

uuid ( RO) : 9f87cfda-66e4-4d5a-911c-38aad1b9bed2
name-label ( RW): VDI-0015-clone-of-VDI-7-64-Ceph-baseDisk
name-description ( RW): Template
sr-uuid ( RO): 976e0ed5-2901-47b0-8962-ad6e76d6b55a
virtual-size ( RO): 37580963840
sharable ( RO): false
read-only ( RO): true


Also, the created disk was not deleted automatically.
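
If the leftover clone really is orphaned, a manual cleanup would look like this (a sketch; destructive, so first verify the uuid and that no VBDs reference it):

# Destroy the stale clone VDI listed above.
xe vdi-destroy uuid=9f87cfda-66e4-4d5a-911c-38aad1b9bed2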


Errors in xensource.log

May 22 15:31:21 hs-0227 xenopsd-xc: [debug|hs-0227|31 |VM.start R:8802b8bf4768|xenops] Device.Vif.add domid=47 devid=0 mac=72:da:76:6a:04:9c carrier=false rate=none other_config=[] extra_private_keys=[vif-id=0; vif-uuid=fa629f89-c5b9-7ca3-be82-79d11a6269dd; network-uuid=e266b4f1-4301-908b-8c2a-b93a11946e56; locking-mode=disabled; setup-vif-rules=/usr/libexec/xenopsd/setup-vif-rules; setup-pvs-proxy-rules=/usr/libexec/xenopsd/setup-pvs-proxy-rules; xenopsd-backend=classic] extra_xenserver_keys=[static-ip-setting/mac=72:da:76:6a:04:9c; static-ip-setting/error-code=0; static-ip-setting/error-msg=; static-ip-setting/enabled=0; static-ip-setting/enabled6=0]
May 22 15:31:22 hs-0227 xapi: [debug|hs-0227|81 |xapi events D:6f4adf04fe49|helpers] Helpers.call_api_functions failed to logout: Server_error(SESSION_INVALID, [ OpaqueRef:693da6dd-ec29-a105-8c46-a0bfa29013a8 ]) (ignoring)
May 22 15:31:22 hs-0227 xapi: [error|hs-0227|81 |xapi events D:6f4adf04fe49|xenops] events_from_xapi: missing from the cache: [ 56ce1a77-8aee-4678-9835-bbb6d2fcbbbb ]
May 22 15:48:23 hs-0227 xenopsd-xc: [ info|hs-0227|23 |events|xenops] removing core files from /var/xen/qemu: ignoring exception Unix.Unix_error(Unix.ENOENT, "rmdir", "/var/xen/qemu/1795")
May 22 15:48:23 hs-0227 xenopsd-xc: [debug|hs-0227|24 |Parallel:task=1012.atoms=3.(VBD.unplug vm=b0240f14-f811-6087-acb2-a413fd652886)|xenops] Device.Generic.hard_shutdown about to blow away backend and error paths
May 22 15:48:23 hs-0227 xenopsd-xc: [debug|hs-0227|22 |Parallel:task=1012.atoms=3.(VBD.unplug vm=b0240f14-f811-6087-acb2-a413fd652886)|xenops] Device.Generic.hard_shutdown about to blow away backend and error paths
May 22 15:48:23 hs-0227 xenopsd-xc: [debug|hs-0227|24 |Parallel:task=1012.atoms=3.(VBD.unplug vm=b0240f14-f811-6087-acb2-a413fd652886)|xenops] xenstore-rm /local/domain/0/error/backend/vbd3/47
May 22 15:48:23 hs-0227 xenopsd-xc: [debug|hs-0227|20 |Parallel:task=1012.atoms=3.(VBD.unplug vm=b0240f14-f811-6087-acb2-a413fd652886)|xenops] Device.Generic.hard_shutdown about to blow away backend and error paths
May 22 15:48:23 hs-0227 xenopsd-xc: [debug|hs-0227|24 |Parallel:task=1012.atoms=3.(VBD.unplug vm=b0240f14-f811-6087-acb2-a413fd652886)|xenops] xenstore-rm /local/domain/47/error/device/vbd/832
May 22 15:48:23 hs-0227 xenopsd-xc: [debug|hs-0227|22 |Parallel:task=1012.atoms=3.(VBD.unplug vm=b0240f14-f811-6087-acb2-a413fd652886)|xenops] xenstore-rm /local/domain/0/error/backend/vbd3/47
May 22 15:48:23 hs-0227 xenopsd-xc: [debug|hs-0227|22 |Parallel:task=1012.atoms=3.(VBD.unplug vm=b0240f14-f811-6087-acb2-a413fd652886)|xenops] xenstore-rm /local/domain/47/error/device/vbd/5696
May 22 15:48:23 hs-0227 xenopsd-xc: [debug|hs-0227|20 |Parallel:task=1012.atoms=3.(VBD.unplug vm=b0240f14-f811-6087-acb2-a413fd652886)|xenops] xenstore-rm /local/domain/0/error/backend/vbd3/47
May 22 15:48:23 hs-0227 xenopsd-xc: [debug|hs-0227|20 |Parallel:task=1012.atoms=3.(VBD.unplug vm=b0240f14-f811-6087-acb2-a413fd652886)|xenops] xenstore-rm /local/domain/47/error/device/vbd/768
May 22 15:48:26 hs-0227 xenopsd-xc: [debug|hs-0227|23 |events|xenops] Device.Generic.hard_shutdown about to blow away backend and error paths
May 22 15:48:26 hs-0227 xenopsd-xc: [debug|hs-0227|23 |events|xenops] xenstore-rm /local/domain/0/error/backend/vif/47
May 22 15:48:26 hs-0227 xenopsd-xc: [debug|hs-0227|23 |events|xenops] xenstore-rm /local/domain/47/error/device/vif/0
May 22 15:48:26 hs-0227 xenopsd-xc: [error|hs-0227|23 |events|xenops_utils] Failed to DB.delete /var/run/nonpersistent/xenopsd/classic/interface : Unix.Unix_error(Unix.ENOTEMPTY, "rmdir", "/var/run/nonpersistent/xenopsd/classic/interface")
May 22 15:48:26 hs-0227 xenopsd-xc: [error|hs-0227|23 |events|xenops_utils] Failed to DB.delete /var/run/nonpersistent/xenopsd/classic/interface : Unix.Unix_error(Unix.ENOTEMPTY, "rmdir", "/var/run/nonpersistent/xenopsd/classic/interface")
May 22 15:48:26 hs-0227 xenopsd-xc: [error|hs-0227|23 |events|xenops_utils] Failed to DB.delete /var/run/nonpersistent/xenopsd/classic/extra : Unix.Unix_error(Unix.ENOTEMPTY, "rmdir", "/var/run/nonpersistent/xenopsd/classic/extra")
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|13 |events|memory] Failed to read /vm/b0240f14-f811-6087-acb2-a413fd652886/domains: has this domain already been cleaned up?
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|13 |events|memory] Failed to read /vm/b0240f14-f811-6087-acb2-a413fd652886/domains: has this domain already been cleaned up?
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|25 |events|memory] Failed to read /vm/b0240f14-f811-6087-acb2-a413fd652886/domains: has this domain already been cleaned up?
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|25 |events|memory] Failed to read /vm/b0240f14-f811-6087-acb2-a413fd652886/domains: has this domain already been cleaned up?
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|32 |Parallel:task=1017.atoms=3.(VBD.unplug vm=b0240f14-f811-6087-acb2-a413fd652886)|memory] Failed to read /vm/b0240f14-f811-6087-acb2-a413fd652886/domains: has this domain already been cleaned up?
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|28 |Parallel:task=1017.atoms=3.(VBD.unplug vm=b0240f14-f811-6087-acb2-a413fd652886)|memory] Failed to read /vm/b0240f14-f811-6087-acb2-a413fd652886/domains: has this domain already been cleaned up?
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|26 |Parallel:task=1017.atoms=3.(VBD.unplug vm=b0240f14-f811-6087-acb2-a413fd652886)|memory] Failed to read /vm/b0240f14-f811-6087-acb2-a413fd652886/domains: has this domain already been cleaned up?
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|32 |Parallel:task=1017.atoms=3.(VBD.unplug vm=b0240f14-f811-6087-acb2-a413fd652886)|memory] Failed to read /vm/b0240f14-f811-6087-acb2-a413fd652886/domains: has this domain already been cleaned up?
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|28 |Parallel:task=1017.atoms=3.(VBD.unplug vm=b0240f14-f811-6087-acb2-a413fd652886)|memory] Failed to read /vm/b0240f14-f811-6087-acb2-a413fd652886/domains: has this domain already been cleaned up?
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|26 |Parallel:task=1017.atoms=3.(VBD.unplug vm=b0240f14-f811-6087-acb2-a413fd652886)|memory] Failed to read /vm/b0240f14-f811-6087-acb2-a413fd652886/domains: has this domain already been cleaned up?
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|15 |events|memory] Failed to read /vm/b0240f14-f811-6087-acb2-a413fd652886/domains: has this domain already been cleaned up?
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|9 |org.xen.xapi.xenops.classic events D:4110b3b014b3|xenops_utils] Failed to DB.delete /var/run/nonpersistent/xenopsd/classic/VM/b0240f14-f811-6087-acb2-a413fd652886 : Unix.Unix_error(Unix.ENOTEMPTY, "rmdir", "/var/run/nonpersistent/xenopsd/classic/VM/b0240f14-f811-6087-acb2-a413fd652886")
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|9 |org.xen.xapi.xenops.classic events D:4110b3b014b3|xenops_utils] Failed to DB.delete /var/run/nonpersistent/xenopsd/classic/VM : Unix.Unix_error(Unix.ENOTEMPTY, "rmdir", "/var/run/nonpersistent/xenopsd/classic/VM")
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|9 |org.xen.xapi.xenops.classic events D:4110b3b014b3|xenops_utils] Failed to DB.delete /var/run/nonpersistent/xenopsd/classic/VM/b0240f14-f811-6087-acb2-a413fd652886 : Unix.Unix_error(Unix.ENOTEMPTY, "rmdir", "/var/run/nonpersistent/xenopsd/classic/VM/b0240f14-f811-6087-acb2-a413fd652886")
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|9 |org.xen.xapi.xenops.classic events D:4110b3b014b3|xenops_utils] Failed to DB.delete /var/run/nonpersistent/xenopsd/classic/VM : Unix.Unix_error(Unix.ENOTEMPTY, "rmdir", "/var/run/nonpersistent/xenopsd/classic/VM")
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|9 |org.xen.xapi.xenops.classic events D:4110b3b014b3|xenops_utils] Failed to DB.delete /var/run/nonpersistent/xenopsd/classic/VM/b0240f14-f811-6087-acb2-a413fd652886 : Unix.Unix_error(Unix.ENOTEMPTY, "rmdir", "/var/run/nonpersistent/xenopsd/classic/VM/b0240f14-f811-6087-acb2-a413fd652886")
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|9 |org.xen.xapi.xenops.classic events D:4110b3b014b3|xenops_utils] Failed to DB.delete /var/run/nonpersistent/xenopsd/classic/VM : Unix.Unix_error(Unix.ENOTEMPTY, "rmdir", "/var/run/nonpersistent/xenopsd/classic/VM")
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|9 |org.xen.xapi.xenops.classic events D:4110b3b014b3|xenops_utils] Failed to DB.delete /var/run/nonpersistent/xenopsd/classic/VM/b0240f14-f811-6087-acb2-a413fd652886 : Unix.Unix_error(Unix.ENOTEMPTY, "rmdir", "/var/run/nonpersistent/xenopsd/classic/VM/b0240f14-f811-6087-acb2-a413fd652886")
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|9 |org.xen.xapi.xenops.classic events D:4110b3b014b3|xenops_utils] Failed to DB.delete /var/run/nonpersistent/xenopsd/classic/VM : Unix.Unix_error(Unix.ENOTEMPTY, "rmdir", "/var/run/nonpersistent/xenopsd/classic/VM")
May 22 15:48:27 hs-0227 xenopsd-xc: [error|hs-0227|9 |org.xen.xapi.xenops.classic events D:4110b3b014b3|xenops_utils] Failed to DB.delete /var/run/nonpersistent/xenopsd/classic/VM : Unix.Unix_error(Unix.ENOTEMPTY, "rmdir", "/var/run/nonpersistent/xenopsd/classic/VM")
May 22 15:48:27 hs-0227 xapi: [debug|hs-0227|81 |xapi events D:6f4adf04fe49|helpers] Helpers.call_api_functions failed to logout: Server_error(SESSION_INVALID, [ OpaqueRef:fb73cc5d-ea3a-789b-7900-db0f3f22b083 ]) (ignoring)
May 22 15:48:27 hs-0227 xapi: [error|hs-0227|81 |xapi events D:6f4adf04fe49|xenops] events_from_xapi: missing from the cache: [ 56ce1a77-8aee-4678-9835-bbb6d2fcbbbb ]

SMLog

May 22 15:31:18 hs-0227 SM: [883] RBDSR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:18 hs-0227 SM: [880] RBDSR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:18 hs-0227 SM: [883] Calling cephutils.SR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, ceph_user=admin
May 22 15:31:18 hs-0227 SM: [880] Calling cephutils.SR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, ceph_user=admin
May 22 15:31:18 hs-0227 SM: [883] Calling cephutils.SR._get_srlist
May 22 15:31:18 hs-0227 SM: [880] Calling cephutils.SR._get_srlist
May 22 15:31:18 hs-0227 SM: [883] ['ceph', 'df', '--format', 'json', '--name', 'client.admin']
May 22 15:31:18 hs-0227 SM: [880] ['ceph', 'df', '--format', 'json', '--name', 'client.admin']
May 22 15:31:19 hs-0227 SM: [883] pread SUCCESS
May 22 15:31:19 hs-0227 SM: [883] Calling cephutils.SR._get_sr_uuid_by_name: pool=RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:19 hs-0227 SM: [883] RBDVDI.load: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:19 hs-0227 SM: [883] Calling cephutils.SR._get_path: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:19 hs-0227 SM: [883] Calling cephutils.VDI.load: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:19 hs-0227 SM: [883] vdi_epoch_begin {'sr_uuid': '976e0ed5-2901-47b0-8962-ad6e76d6b55a', 'subtask_of': 'DummyRef:|55edd3fd-f58a-781d-d3c3-71203bb447ce|VDI.epoch_begin', 'vdi_ref': 'OpaqueRef:11e52ee9-eef1-8ccb-6d67-752b351d5319', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 'e25e1317-260a-4714-b7af-0ef32a87b1b1', 'host_ref': 'OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce', 'session_ref': 'OpaqueRef:724868c8-e76e-65ad-b10d-ba05db5f0e85', 'device_config': {'SRmaster': 'false'}, 'command': 'vdi_epoch_begin', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:88cb8c50-c240-d970-05fa-86c11d40a376', 'local_cache_sr': '4294b190-9c60-9ac2-60a2-de98951661bd', 'vdi_uuid': 'e25e1317-260a-4714-b7af-0ef32a87b1b1'}
May 22 15:31:19 hs-0227 SM: [880] pread SUCCESS
May 22 15:31:19 hs-0227 SM: [880] Calling cephutils.SR._get_sr_uuid_by_name: pool=RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:19 hs-0227 SM: [880] RBDVDI.load: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:19 hs-0227 SM: [880] Calling cephutils.SR._get_path: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:19 hs-0227 SM: [880] Calling cephutils.VDI.load: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:19 hs-0227 SM: [880] vdi_epoch_begin {'sr_uuid': '976e0ed5-2901-47b0-8962-ad6e76d6b55a', 'subtask_of': 'DummyRef:|91e9ea2c-c3f3-ec2b-f683-1cc24ba26ed7|VDI.epoch_begin', 'vdi_ref': 'OpaqueRef:fbbb6e4f-e724-2de2-5102-4685c9a73118', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 'd4a70912-459a-4720-8901-2cb2d08930ae', 'host_ref': 'OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce', 'session_ref': 'OpaqueRef:242eed3e-b581-5eb1-6c52-62f214e37502', 'device_config': {'SRmaster': 'false'}, 'command': 'vdi_epoch_begin', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:88cb8c50-c240-d970-05fa-86c11d40a376', 'local_cache_sr': '4294b190-9c60-9ac2-60a2-de98951661bd', 'vdi_uuid': 'd4a70912-459a-4720-8901-2cb2d08930ae'}
May 22 15:31:19 hs-0227 SM: [961] RBDSR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:19 hs-0227 SM: [961] Calling cephutils.SR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, ceph_user=admin
May 22 15:31:19 hs-0227 SM: [961] Calling cephutils.SR._get_srlist
May 22 15:31:19 hs-0227 SM: [961] ['ceph', 'df', '--format', 'json', '--name', 'client.admin']
May 22 15:31:19 hs-0227 SM: [966] RBDSR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:19 hs-0227 SM: [966] Calling cephutils.SR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, ceph_user=admin
May 22 15:31:19 hs-0227 SM: [966] Calling cephutils.SR._get_srlist
May 22 15:31:19 hs-0227 SM: [966] ['ceph', 'df', '--format', 'json', '--name', 'client.admin']
May 22 15:31:19 hs-0227 SM: [966] pread SUCCESS
May 22 15:31:19 hs-0227 SM: [966] Calling cephutils.SR._get_sr_uuid_by_name: pool=RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:19 hs-0227 SM: [966] RBDVDI.load: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:19 hs-0227 SM: [966] Calling cephutils.SR._get_path: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:19 hs-0227 SM: [966] Calling cephutils.VDI.load: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:19 hs-0227 SM: [966] vdi_attach {'sr_uuid': '976e0ed5-2901-47b0-8962-ad6e76d6b55a', 'subtask_of': 'DummyRef:|3cceb016-4638-257b-0c77-c8863af04fd2|VDI.attach', 'vdi_ref': 'OpaqueRef:fbbb6e4f-e724-2de2-5102-4685c9a73118', 'vdi_on_boot': 'persist', 'args': ['true'], 'vdi_location': 'd4a70912-459a-4720-8901-2cb2d08930ae', 'host_ref': 'OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce', 'session_ref': 'OpaqueRef:713764bb-bba5-1b49-ea89-c3a9fa4b7db4', 'device_config': {'SRmaster': 'false'}, 'command': 'vdi_attach', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:88cb8c50-c240-d970-05fa-86c11d40a376', 'local_cache_sr': '4294b190-9c60-9ac2-60a2-de98951661bd', 'vdi_uuid': 'd4a70912-459a-4720-8901-2cb2d08930ae'}
May 22 15:31:19 hs-0227 SM: [966] lock: opening lock file /var/lock/sm/d4a70912-459a-4720-8901-2cb2d08930ae/vdi
May 22 15:31:19 hs-0227 SM: [961] pread SUCCESS
May 22 15:31:19 hs-0227 SM: [961] Calling cephutils.SR._get_sr_uuid_by_name: pool=RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:19 hs-0227 SM: [961] RBDVDI.load: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:19 hs-0227 SM: [961] Calling cephutils.SR._get_path: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:19 hs-0227 SM: [961] Calling cephutils.VDI.load: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:19 hs-0227 SM: [961] vdi_attach {'sr_uuid': '976e0ed5-2901-47b0-8962-ad6e76d6b55a', 'subtask_of': 'DummyRef:|a8fee549-cb2d-1fa9-3450-e0b8a5506016|VDI.attach', 'vdi_ref': 'OpaqueRef:11e52ee9-eef1-8ccb-6d67-752b351d5319', 'vdi_on_boot': 'persist', 'args': ['true'], 'vdi_location': 'e25e1317-260a-4714-b7af-0ef32a87b1b1', 'host_ref': 'OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce', 'session_ref': 'OpaqueRef:c62ef24b-6c09-95c2-6d99-d05adc190964', 'device_config': {'SRmaster': 'false'}, 'command': 'vdi_attach', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:88cb8c50-c240-d970-05fa-86c11d40a376', 'local_cache_sr': '4294b190-9c60-9ac2-60a2-de98951661bd', 'vdi_uuid': 'e25e1317-260a-4714-b7af-0ef32a87b1b1'}
May 22 15:31:19 hs-0227 SM: [961] lock: opening lock file /var/lock/sm/e25e1317-260a-4714-b7af-0ef32a87b1b1/vdi
May 22 15:31:19 hs-0227 SM: [966] RBDSR.handles type nfs
May 22 15:31:19 hs-0227 SM: [966] RBDSR.handles type ext
May 22 15:31:19 hs-0227 SM: [966] RBDSR.handles type smb
May 22 15:31:19 hs-0227 SM: [966] result: {'o_direct_reason': 'SR_NOT_SUPPORTED', 'params': '/dev/sm/backend/976e0ed5-2901-47b0-8962-ad6e76d6b55a/d4a70912-459a-4720-8901-2cb2d08930ae', 'o_direct': True, 'xenstore_data': {'scsi/0x12/0x80': 'AIAAEmQ0YTcwOTEyLTQ1OWEtNDcgIA==', 'scsi/0x12/0x83': 'AIMAMQIBAC1YRU5TUkMgIGQ0YTcwOTEyLTQ1OWEtNDcyMC04OTAxLTJjYjJkMDg5MzBhZSA=', 'vdi-uuid': 'd4a70912-459a-4720-8901-2cb2d08930ae', 'mem-pool': '976e0ed5-2901-47b0-8962-ad6e76d6b55a'}}
May 22 15:31:19 hs-0227 SM: [966] lock: closed /var/lock/sm/d4a70912-459a-4720-8901-2cb2d08930ae/vdi

May 22 15:31:19 hs-0227 SM: [961] RBDSR.handles type nfs
May 22 15:31:19 hs-0227 SM: [961] RBDSR.handles type ext
May 22 15:31:19 hs-0227 SM: [961] RBDSR.handles type smb
May 22 15:31:19 hs-0227 SM: [961] result: {'o_direct_reason': 'SR_NOT_SUPPORTED', 'params': '/dev/sm/backend/976e0ed5-2901-47b0-8962-ad6e76d6b55a/e25e1317-260a-4714-b7af-0ef32a87b1b1', 'o_direct': True, 'xenstore_data': {'scsi/0x12/0x80': 'AIAAEmUyNWUxMzE3LTI2MGEtNDcgIA==', 'scsi/0x12/0x83': 'AIMAMQIBAC1YRU5TUkMgIGUyNWUxMzE3LTI2MGEtNDcxNC1iN2FmLTBlZjMyYTg3YjFiMSA=', 'vdi-uuid': 'e25e1317-260a-4714-b7af-0ef32a87b1b1', 'mem-pool': '976e0ed5-2901-47b0-8962-ad6e76d6b55a'}}
May 22 15:31:19 hs-0227 SM: [961] lock: closed /var/lock/sm/e25e1317-260a-4714-b7af-0ef32a87b1b1/vdi
May 22 15:31:19 hs-0227 SM: [1042] RBDSR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:19 hs-0227 SM: [1042] Calling cephutils.SR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, ceph_user=admin
May 22 15:31:19 hs-0227 SM: [1042] Calling cephutils.SR._get_srlist
May 22 15:31:19 hs-0227 SM: [1042] ['ceph', 'df', '--format', 'json', '--name', 'client.admin']
May 22 15:31:19 hs-0227 SM: [1045] RBDSR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:19 hs-0227 SM: [1045] Calling cephutils.SR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, ceph_user=admin
May 22 15:31:19 hs-0227 SM: [1045] Calling cephutils.SR._get_srlist
May 22 15:31:19 hs-0227 SM: [1045] ['ceph', 'df', '--format', 'json', '--name', 'client.admin']
May 22 15:31:19 hs-0227 SM: [1042] pread SUCCESS
May 22 15:31:19 hs-0227 SM: [1042] Calling cephutils.SR._get_sr_uuid_by_name: pool=RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:19 hs-0227 SM: [1042] RBDVDI.load: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:19 hs-0227 SM: [1042] Calling cephutils.SR._get_path: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:19 hs-0227 SM: [1042] Calling cephutils.VDI.load: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:19 hs-0227 SM: [1042] vdi_activate {'sr_uuid': '976e0ed5-2901-47b0-8962-ad6e76d6b55a', 'subtask_of': 'DummyRef:|52c78bb2-681b-90ad-0ae7-6c00fb14087d|VDI.activate', 'vdi_ref': 'OpaqueRef:fbbb6e4f-e724-2de2-5102-4685c9a73118', 'vdi_on_boot': 'persist', 'args': ['true'], 'vdi_location': 'd4a70912-459a-4720-8901-2cb2d08930ae', 'host_ref': 'OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce', 'session_ref': 'OpaqueRef:3abc80af-eb3e-8a17-a8af-49dadefeb649', 'device_config': {'SRmaster': 'false'}, 'command': 'vdi_activate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:88cb8c50-c240-d970-05fa-86c11d40a376', 'local_cache_sr': '4294b190-9c60-9ac2-60a2-de98951661bd', 'vdi_uuid': 'd4a70912-459a-4720-8901-2cb2d08930ae'}
May 22 15:31:19 hs-0227 SM: [1042] lock: opening lock file /var/lock/sm/d4a70912-459a-4720-8901-2cb2d08930ae/vdi
May 22 15:31:19 hs-0227 SM: [1042] blktap2.activate
May 22 15:31:19 hs-0227 SM: [1045] pread SUCCESS
May 22 15:31:19 hs-0227 SM: [1045] Calling cephutils.SR._get_sr_uuid_by_name: pool=RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:19 hs-0227 SM: [1045] RBDVDI.load: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:19 hs-0227 SM: [1045] Calling cephutils.SR._get_path: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:19 hs-0227 SM: [1045] Calling cephutils.VDI.load: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:19 hs-0227 SM: [1045] vdi_activate {'sr_uuid': '976e0ed5-2901-47b0-8962-ad6e76d6b55a', 'subtask_of': 'DummyRef:|388b3925-aa5f-0491-0142-fe7abd256d8b|VDI.activate', 'vdi_ref': 'OpaqueRef:11e52ee9-eef1-8ccb-6d67-752b351d5319', 'vdi_on_boot': 'persist', 'args': ['true'], 'vdi_location': 'e25e1317-260a-4714-b7af-0ef32a87b1b1', 'host_ref': 'OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce', 'session_ref': 'OpaqueRef:b3eb3a8a-0ef5-9f7b-ced6-65101b05f97c', 'device_config': {'SRmaster': 'false'}, 'command': 'vdi_activate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:88cb8c50-c240-d970-05fa-86c11d40a376', 'local_cache_sr': '4294b190-9c60-9ac2-60a2-de98951661bd', 'vdi_uuid': 'e25e1317-260a-4714-b7af-0ef32a87b1b1'}
May 22 15:31:19 hs-0227 SM: [1045] lock: opening lock file /var/lock/sm/e25e1317-260a-4714-b7af-0ef32a87b1b1/vdi
May 22 15:31:19 hs-0227 SM: [1045] blktap2.activate
May 22 15:31:19 hs-0227 SM: [1042] lock: acquired /var/lock/sm/d4a70912-459a-4720-8901-2cb2d08930ae/vdi
May 22 15:31:19 hs-0227 SM: [1042] Adding tag to: d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:19 hs-0227 SM: [1045] lock: acquired /var/lock/sm/e25e1317-260a-4714-b7af-0ef32a87b1b1/vdi
May 22 15:31:19 hs-0227 SM: [1045] Adding tag to: e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:19 hs-0227 SM: [1042] Activate lock succeeded
May 22 15:31:19 hs-0227 SM: [1045] Activate lock succeeded
May 22 15:31:19 hs-0227 SM: [1042] RBDSR.handles type rbd
May 22 15:31:19 hs-0227 SM: [1045] RBDSR.handles type rbd
May 22 15:31:19 hs-0227 SM: [1042] RBDSR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:19 hs-0227 SM: [1042] Calling cephutils.SR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, ceph_user=admin
May 22 15:31:19 hs-0227 SM: [1042] Calling cephutils.SR._get_srlist
May 22 15:31:19 hs-0227 SM: [1042] ['ceph', 'df', '--format', 'json', '--name', 'client.admin']
May 22 15:31:19 hs-0227 SM: [1045] RBDSR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:19 hs-0227 SM: [1045] Calling cephutils.SR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, ceph_user=admin
May 22 15:31:19 hs-0227 SM: [1045] Calling cephutils.SR._get_srlist
May 22 15:31:19 hs-0227 SM: [1045] ['ceph', 'df', '--format', 'json', '--name', 'client.admin']
May 22 15:31:20 hs-0227 SM: [1042] pread SUCCESS
May 22 15:31:20 hs-0227 SM: [1042] Calling cephutils.SR._get_sr_uuid_by_name: pool=RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:20 hs-0227 SM: [1042] RBDVDI.load: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:20 hs-0227 SM: [1042] Calling cephutils.SR._get_path: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:20 hs-0227 SM: [1042] Calling cephutils.VDI.load: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:20 hs-0227 SM: [1042] RBDVDI.attach: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:20 hs-0227 SM: [1045] pread SUCCESS
May 22 15:31:20 hs-0227 SM: [1045] Calling cephutils.SR._get_sr_uuid_by_name: pool=RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:31:20 hs-0227 SM: [1045] RBDVDI.load: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:20 hs-0227 SM: [1045] Calling cephutils.SR._get_path: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:20 hs-0227 SM: [1045] Calling cephutils.VDI.load: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:20 hs-0227 SM: [1045] RBDVDI.attach: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:20 hs-0227 SM: [1042] Calling cephutils.SR._get_path: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:20 hs-0227 SM: [1045] Calling cephutils.SR._get_path: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:20 hs-0227 SM: [1042] Calling cephutills.VDI._map_VHD: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae, size=37580963840, dm=none, sharable=false
May 22 15:31:20 hs-0227 SM: [1042] Calling cephutils.VDI._call_plugin: op=map
May 22 15:31:20 hs-0227 SM: [1042] Calling ceph_plugin
May 22 15:31:20 hs-0227 SM: [1042] Calling rbd/nbd map/unmap on host OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce
May 22 15:31:20 hs-0227 SM: [1045] Calling cephutills.VDI._map_VHD: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1, size=16777216, dm=none, sharable=false
May 22 15:31:20 hs-0227 SM: [1045] Calling cephutils.VDI._call_plugin: op=map
May 22 15:31:20 hs-0227 SM: [1045] Calling ceph_plugin
May 22 15:31:20 hs-0227 SM: [1045] Calling rbd/nbd map/unmap on host OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce
May 22 15:31:20 hs-0227 SM: [1293] ['rbd-nbd', '--nbds_max', '64', 'map', 'RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a/VHD-e25e1317-260a-4714-b7af-0ef32a87b1b1', '--name', 'client.admin']
May 22 15:31:20 hs-0227 SM: [1290] ['rbd-nbd', '--nbds_max', '64', 'map', 'RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a/VHD-d4a70912-459a-4720-8901-2cb2d08930ae', '--name', 'client.admin']
May 22 15:31:20 hs-0227 SM: [1290] pread SUCCESS
May 22 15:31:20 hs-0227 SM: [1290] ['ln', '-s', '/dev/nbd4', '/dev/nbd/RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a/VHD-d4a70912-459a-4720-8901-2cb2d08930ae']
May 22 15:31:20 hs-0227 SM: [1293] pread SUCCESS
May 22 15:31:20 hs-0227 SM: [1293] ['ln', '-s', '/dev/nbd3', '/dev/nbd/RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a/VHD-e25e1317-260a-4714-b7af-0ef32a87b1b1']
May 22 15:31:20 hs-0227 SM: [1293] pread SUCCESS
May 22 15:31:20 hs-0227 SM: [1293] ['ln', '-s', '/dev/nbd3', '/run/sr-mount/976e0ed5-2901-47b0-8962-ad6e76d6b55a/e25e1317-260a-4714-b7af-0ef32a87b1b1']
May 22 15:31:20 hs-0227 SM: [1290] pread SUCCESS
May 22 15:31:20 hs-0227 SM: [1290] ['ln', '-s', '/dev/nbd4', '/run/sr-mount/976e0ed5-2901-47b0-8962-ad6e76d6b55a/d4a70912-459a-4720-8901-2cb2d08930ae']
May 22 15:31:20 hs-0227 SM: [1293] pread SUCCESS
May 22 15:31:20 hs-0227 SM: [1290] pread SUCCESS
May 22 15:31:20 hs-0227 SM: [1045] PhyLink(/dev/sm/phy/976e0ed5-2901-47b0-8962-ad6e76d6b55a/e25e1317-260a-4714-b7af-0ef32a87b1b1) -> /run/sr-mount/976e0ed5-2901-47b0-8962-ad6e76d6b55a/e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:31:20 hs-0227 SM: [1042] PhyLink(/dev/sm/phy/976e0ed5-2901-47b0-8962-ad6e76d6b55a/d4a70912-459a-4720-8901-2cb2d08930ae) -> /run/sr-mount/976e0ed5-2901-47b0-8962-ad6e76d6b55a/d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:31:20 hs-0227 SM: [1045] RBDSR.handles type nfs
May 22 15:31:20 hs-0227 SM: [1045] RBDSR.handles type ext
May 22 15:31:20 hs-0227 SM: [1045] RBDSR.handles type smb
May 22 15:31:20 hs-0227 SM: [1042] RBDSR.handles type nfs
May 22 15:31:20 hs-0227 SM: [1042] RBDSR.handles type ext
May 22 15:31:20 hs-0227 SM: [1042] RBDSR.handles type smb
May 22 15:31:20 hs-0227 SM: [1045] ['/usr/sbin/tap-ctl', 'allocate']
May 22 15:31:20 hs-0227 SM: [1042] ['/usr/sbin/tap-ctl', 'allocate']
May 22 15:31:20 hs-0227 SM: [1045] = 0
May 22 15:31:20 hs-0227 SM: [1042] = 0
May 22 15:31:20 hs-0227 SM: [1045] ['/usr/sbin/tap-ctl', 'spawn']
May 22 15:31:20 hs-0227 SM: [1042] ['/usr/sbin/tap-ctl', 'spawn']
May 22 15:31:20 hs-0227 SM: [1045] = 0
May 22 15:31:20 hs-0227 SM: [1042] = 0
May 22 15:31:20 hs-0227 SM: [1045] ['/usr/sbin/tap-ctl', 'attach', '-p', '1470', '-m', '10']
May 22 15:31:20 hs-0227 SM: [1042] ['/usr/sbin/tap-ctl', 'attach', '-p', '1469', '-m', '11']
May 22 15:31:20 hs-0227 SM: [1042] = 0
May 22 15:31:20 hs-0227 SM: [1042] ['/usr/sbin/tap-ctl', 'open', '-p', '1469', '-m', '11', '-a', 'aio:/run/sr-mount/976e0ed5-2901-47b0-8962-ad6e76d6b55a/d4a70912-459a-4720-8901-2cb2d08930ae', '-t', '40']
May 22 15:31:20 hs-0227 SM: [1045] = 0
May 22 15:31:20 hs-0227 SM: [1045] ['/usr/sbin/tap-ctl', 'open', '-p', '1470', '-m', '10', '-a', 'aio:/run/sr-mount/976e0ed5-2901-47b0-8962-ad6e76d6b55a/e25e1317-260a-4714-b7af-0ef32a87b1b1', '-t', '40']
May 22 15:31:20 hs-0227 SM: [1042] = 0
May 22 15:31:20 hs-0227 SM: [1045] = 0
May 22 15:31:20 hs-0227 SM: [1042] tap.activate: Launched Tapdisk(vhd:/run/sr-mount/976e0ed5-2901-47b0-8962-ad6e76d6b55a/d4a70912-459a-4720-8901-2cb2d08930ae, pid=1469, minor=11, state=R)
May 22 15:31:20 hs-0227 SM: [1042] Attempt to register tapdisk with RRDD as a plugin.
May 22 15:31:20 hs-0227 SM: [1042] ERROR: Failed to register tapdisk with RRDD due to UnixStreamHTTP instance has no attribute 'getresponse'
May 22 15:31:20 hs-0227 SM: [1042] DeviceNode(/dev/sm/backend/976e0ed5-2901-47b0-8962-ad6e76d6b55a/d4a70912-459a-4720-8901-2cb2d08930ae) -> /dev/xen/blktap-2/tapdev11
May 22 15:31:20 hs-0227 SM: [1042] lock: released /var/lock/sm/d4a70912-459a-4720-8901-2cb2d08930ae/vdi
May 22 15:31:20 hs-0227 SM: [1042] lock: closed /var/lock/sm/d4a70912-459a-4720-8901-2cb2d08930ae/vdi
May 22 15:31:20 hs-0227 SM: [1045] tap.activate: Launched Tapdisk(vhd:/run/sr-mount/976e0ed5-2901-47b0-8962-ad6e76d6b55a/e25e1317-260a-4714-b7af-0ef32a87b1b1, pid=1470, minor=10, state=R)
May 22 15:31:20 hs-0227 SM: [1045] Attempt to register tapdisk with RRDD as a plugin.
May 22 15:31:20 hs-0227 SM: [1045] ERROR: Failed to register tapdisk with RRDD due to UnixStreamHTTP instance has no attribute 'getresponse'
May 22 15:31:20 hs-0227 SM: [1045] DeviceNode(/dev/sm/backend/976e0ed5-2901-47b0-8962-ad6e76d6b55a/e25e1317-260a-4714-b7af-0ef32a87b1b1) -> /dev/xen/blktap-2/tapdev10
May 22 15:31:20 hs-0227 SM: [1045] lock: released /var/lock/sm/e25e1317-260a-4714-b7af-0ef32a87b1b1/vdi
May 22 15:31:20 hs-0227 SM: [1045] lock: closed /var/lock/sm/e25e1317-260a-4714-b7af-0ef32a87b1b1/vdi
May 22 15:48:23 hs-0227 SM: [17370] RBDSR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:48:23 hs-0227 SM: [17370] Calling cephutils.SR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, ceph_user=admin
May 22 15:48:23 hs-0227 SM: [17370] Calling cephutils.SR._get_srlist
May 22 15:48:23 hs-0227 SM: [17370] ['ceph', 'df', '--format', 'json', '--name', 'client.admin']
May 22 15:48:24 hs-0227 SM: [17370] pread SUCCESS
May 22 15:48:24 hs-0227 SM: [17370] Calling cephutils.SR._get_sr_uuid_by_name: pool=RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:48:24 hs-0227 SM: [17370] RBDVDI.load: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:48:24 hs-0227 SM: [17370] Calling cephutils.SR._get_path: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:48:24 hs-0227 SM: [17370] Calling cephutils.VDI.load: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:48:24 hs-0227 SM: [17370] vdi_deactivate {'sr_uuid': '976e0ed5-2901-47b0-8962-ad6e76d6b55a', 'subtask_of': 'DummyRef:|3f178485-3abf-5339-ee40-d526e9b23e02|VDI.deactivate', 'vdi_ref': 'OpaqueRef:11e52ee9-eef1-8ccb-6d67-752b351d5319', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 'e25e1317-260a-4714-b7af-0ef32a87b1b1', 'host_ref': 'OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce', 'session_ref': 'OpaqueRef:ac55402c-d403-c4de-a04b-0486de162da8', 'device_config': {'SRmaster': 'false'}, 'command': 'vdi_deactivate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:88cb8c50-c240-d970-05fa-86c11d40a376', 'local_cache_sr': '4294b190-9c60-9ac2-60a2-de98951661bd', 'vdi_uuid': 'e25e1317-260a-4714-b7af-0ef32a87b1b1'}
May 22 15:48:24 hs-0227 SM: [17370] lock: opening lock file /var/lock/sm/e25e1317-260a-4714-b7af-0ef32a87b1b1/vdi
May 22 15:48:24 hs-0227 SM: [17370] blktap2.deactivate
May 22 15:48:24 hs-0227 SM: [17370] lock: acquired /var/lock/sm/e25e1317-260a-4714-b7af-0ef32a87b1b1/vdi
May 22 15:48:24 hs-0227 SM: [17370] ['/usr/sbin/tap-ctl', 'close', '-p', '1470', '-m', '10']
May 22 15:48:24 hs-0227 SM: [17370] = 0
May 22 15:48:24 hs-0227 SM: [17370] Attempt to deregister tapdisk with RRDD.
May 22 15:48:24 hs-0227 SM: [17370] ERROR: Failed to deregister tapdisk with RRDD due to UnixStreamHTTP instance has no attribute 'getresponse'
May 22 15:48:24 hs-0227 SM: [17370] ['/usr/sbin/tap-ctl', 'detach', '-p', '1470', '-m', '10']
May 22 15:48:24 hs-0227 SM: [17370] = 0
May 22 15:48:24 hs-0227 SM: [17370] ['/usr/sbin/tap-ctl', 'free', '-m', '10']
May 22 15:48:24 hs-0227 SM: [17370] = 0
May 22 15:48:24 hs-0227 SM: [17370] tap.deactivate: Shut down Tapdisk(vhd:/run/sr-mount/976e0ed5-2901-47b0-8962-ad6e76d6b55a/e25e1317-260a-4714-b7af-0ef32a87b1b1, pid=1470, minor=10, state=R)
May 22 15:48:24 hs-0227 SM: [17370] RBDSR.handles type rbd
May 22 15:48:24 hs-0227 SM: [17370] RBDSR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:48:24 hs-0227 SM: [17370] Calling cephutils.SR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, ceph_user=admin
May 22 15:48:24 hs-0227 SM: [17370] Calling cephutils.SR._get_srlist
May 22 15:48:24 hs-0227 SM: [17370] ['ceph', 'df', '--format', 'json', '--name', 'client.admin']
May 22 15:48:24 hs-0227 SM: [17370] pread SUCCESS
May 22 15:48:24 hs-0227 SM: [17370] Calling cephutils.SR._get_sr_uuid_by_name: pool=RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:48:24 hs-0227 SM: [17370] RBDVDI.load: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:48:24 hs-0227 SM: [17370] Calling cephutils.SR._get_path: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:48:24 hs-0227 SM: [17370] Calling cephutils.VDI.load: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:48:24 hs-0227 SM: [17370] RBDVDI.detach: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:48:24 hs-0227 SM: [17370] Calling cephutills.VDI._unmap_VHD: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1, size=16777216, dm=none, sharable=false
May 22 15:48:24 hs-0227 SM: [17370] Calling cephutils.VDI._call_plugin: op=unmap
May 22 15:48:24 hs-0227 SM: [17370] Calling ceph_plugin
May 22 15:48:24 hs-0227 SM: [17370] Calling rbd/nbd map/unmap on host OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce
May 22 15:48:24 hs-0227 SM: [17511] ['realpath', '/dev/nbd/RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a/VHD-e25e1317-260a-4714-b7af-0ef32a87b1b1']
May 22 15:48:24 hs-0227 SM: [17511] pread SUCCESS
May 22 15:48:24 hs-0227 SM: [17511] ['unlink', '/run/sr-mount/976e0ed5-2901-47b0-8962-ad6e76d6b55a/e25e1317-260a-4714-b7af-0ef32a87b1b1']
May 22 15:48:24 hs-0227 SM: [17511] pread SUCCESS
May 22 15:48:24 hs-0227 SM: [17511] ['unlink', '/dev/nbd/RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a/VHD-e25e1317-260a-4714-b7af-0ef32a87b1b1']
May 22 15:48:24 hs-0227 SM: [17511] pread SUCCESS
May 22 15:48:24 hs-0227 SM: [17511] ['rbd-nbd', 'unmap', '/dev/nbd3', '--name', 'client.admin']
May 22 15:48:24 hs-0227 SM: [17511] pread SUCCESS
May 22 15:48:24 hs-0227 SM: [17370] Removed host key host_OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce for e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:48:24 hs-0227 SM: [17370] lock: released /var/lock/sm/e25e1317-260a-4714-b7af-0ef32a87b1b1/vdi
May 22 15:48:24 hs-0227 SM: [17370] lock: closed /var/lock/sm/e25e1317-260a-4714-b7af-0ef32a87b1b1/vdi
May 22 15:48:25 hs-0227 SM: [17528] RBDSR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:48:25 hs-0227 SM: [17528] Calling cephutils.SR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, ceph_user=admin
May 22 15:48:25 hs-0227 SM: [17528] Calling cephutils.SR._get_srlist
May 22 15:48:25 hs-0227 SM: [17528] ['ceph', 'df', '--format', 'json', '--name', 'client.admin']
May 22 15:48:25 hs-0227 SM: [17528] pread SUCCESS
May 22 15:48:25 hs-0227 SM: [17528] Calling cephutils.SR._get_sr_uuid_by_name: pool=RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:48:25 hs-0227 SM: [17528] RBDVDI.load: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:48:25 hs-0227 SM: [17528] Calling cephutils.SR._get_path: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:48:25 hs-0227 SM: [17528] Calling cephutils.VDI.load: vdi_uuid=e25e1317-260a-4714-b7af-0ef32a87b1b1
May 22 15:48:25 hs-0227 SM: [17528] vdi_detach {'sr_uuid': '976e0ed5-2901-47b0-8962-ad6e76d6b55a', 'subtask_of': 'DummyRef:|4c486f79-162b-711c-66fc-f08bc3dd5a5d|VDI.detach', 'vdi_ref': 'OpaqueRef:11e52ee9-eef1-8ccb-6d67-752b351d5319', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 'e25e1317-260a-4714-b7af-0ef32a87b1b1', 'host_ref': 'OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce', 'session_ref': 'OpaqueRef:e1a4346e-d9e4-28c8-f42f-aab8b14930a4', 'device_config': {'SRmaster': 'false'}, 'command': 'vdi_detach', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:88cb8c50-c240-d970-05fa-86c11d40a376', 'local_cache_sr': '4294b190-9c60-9ac2-60a2-de98951661bd', 'vdi_uuid': 'e25e1317-260a-4714-b7af-0ef32a87b1b1'}
May 22 15:48:25 hs-0227 SM: [17528] lock: opening lock file /var/lock/sm/e25e1317-260a-4714-b7af-0ef32a87b1b1/vdi
May 22 15:48:25 hs-0227 SM: [17528] lock: closed /var/lock/sm/e25e1317-260a-4714-b7af-0ef32a87b1b1/vdi
May 22 15:48:25 hs-0227 SM: [17558] RBDSR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:48:25 hs-0227 SM: [17558] Calling cephutils.SR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, ceph_user=admin
May 22 15:48:25 hs-0227 SM: [17558] Calling cephutils.SR._get_srlist
May 22 15:48:25 hs-0227 SM: [17558] ['ceph', 'df', '--format', 'json', '--name', 'client.admin']
May 22 15:48:25 hs-0227 SM: [17558] pread SUCCESS
May 22 15:48:25 hs-0227 SM: [17558] Calling cephutils.SR._get_sr_uuid_by_name: pool=RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:48:25 hs-0227 SM: [17558] RBDVDI.load: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:48:25 hs-0227 SM: [17558] Calling cephutils.SR._get_path: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:48:25 hs-0227 SM: [17558] Calling cephutils.VDI.load: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:48:25 hs-0227 SM: [17558] vdi_deactivate {'sr_uuid': '976e0ed5-2901-47b0-8962-ad6e76d6b55a', 'subtask_of': 'DummyRef:|2e0c833d-42c6-d375-9083-1d9b98b486f9|VDI.deactivate', 'vdi_ref': 'OpaqueRef:fbbb6e4f-e724-2de2-5102-4685c9a73118', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 'd4a70912-459a-4720-8901-2cb2d08930ae', 'host_ref': 'OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce', 'session_ref': 'OpaqueRef:527ae2a4-796a-54fe-2753-9ab2cd62b526', 'device_config': {'SRmaster': 'false'}, 'command': 'vdi_deactivate', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:88cb8c50-c240-d970-05fa-86c11d40a376', 'local_cache_sr': '4294b190-9c60-9ac2-60a2-de98951661bd', 'vdi_uuid': 'd4a70912-459a-4720-8901-2cb2d08930ae'}
May 22 15:48:25 hs-0227 SM: [17558] lock: opening lock file /var/lock/sm/d4a70912-459a-4720-8901-2cb2d08930ae/vdi
May 22 15:48:25 hs-0227 SM: [17558] blktap2.deactivate
May 22 15:48:25 hs-0227 SM: [17558] lock: acquired /var/lock/sm/d4a70912-459a-4720-8901-2cb2d08930ae/vdi
May 22 15:48:25 hs-0227 SM: [17558] ['/usr/sbin/tap-ctl', 'close', '-p', '1469', '-m', '11']
May 22 15:48:25 hs-0227 SM: [17558] = 0
May 22 15:48:25 hs-0227 SM: [17558] Attempt to deregister tapdisk with RRDD.
May 22 15:48:25 hs-0227 SM: [17558] ERROR: Failed to deregister tapdisk with RRDD due to UnixStreamHTTP instance has no attribute 'getresponse'
May 22 15:48:25 hs-0227 SM: [17558] ['/usr/sbin/tap-ctl', 'detach', '-p', '1469', '-m', '11']
May 22 15:48:25 hs-0227 SM: [17558] = 0
May 22 15:48:25 hs-0227 SM: [17558] ['/usr/sbin/tap-ctl', 'free', '-m', '11']
May 22 15:48:25 hs-0227 SM: [17558] = 0
May 22 15:48:25 hs-0227 SM: [17558] tap.deactivate: Shut down Tapdisk(vhd:/run/sr-mount/976e0ed5-2901-47b0-8962-ad6e76d6b55a/d4a70912-459a-4720-8901-2cb2d08930ae, pid=1469, minor=11, state=R)
May 22 15:48:25 hs-0227 SM: [17558] RBDSR.handles type rbd
May 22 15:48:25 hs-0227 SM: [17558] RBDSR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:48:25 hs-0227 SM: [17558] Calling cephutils.SR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, ceph_user=admin
May 22 15:48:25 hs-0227 SM: [17558] Calling cephutils.SR._get_srlist
May 22 15:48:25 hs-0227 SM: [17558] ['ceph', 'df', '--format', 'json', '--name', 'client.admin']
May 22 15:48:26 hs-0227 SM: [17558] pread SUCCESS
May 22 15:48:26 hs-0227 SM: [17558] Calling cephutils.SR._get_sr_uuid_by_name: pool=RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:48:26 hs-0227 SM: [17558] RBDVDI.load: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:48:26 hs-0227 SM: [17558] Calling cephutils.SR._get_path: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:48:26 hs-0227 SM: [17558] Calling cephutils.VDI.load: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:48:26 hs-0227 SM: [17558] RBDVDI.detach: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:48:26 hs-0227 SM: [17558] Calling cephutills.VDI._unmap_VHD: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae, size=37580963840, dm=none, sharable=false
May 22 15:48:26 hs-0227 SM: [17558] Calling cephutils.VDI._call_plugin: op=unmap
May 22 15:48:26 hs-0227 SM: [17558] Calling ceph_plugin
May 22 15:48:26 hs-0227 SM: [17558] Calling rbd/nbd map/unmap on host OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce
May 22 15:48:26 hs-0227 SM: [17786] ['realpath', '/dev/nbd/RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a/VHD-d4a70912-459a-4720-8901-2cb2d08930ae']
May 22 15:48:26 hs-0227 SM: [17786] pread SUCCESS
May 22 15:48:26 hs-0227 SM: [17786] ['unlink', '/run/sr-mount/976e0ed5-2901-47b0-8962-ad6e76d6b55a/d4a70912-459a-4720-8901-2cb2d08930ae']
May 22 15:48:26 hs-0227 SM: [17786] pread SUCCESS
May 22 15:48:26 hs-0227 SM: [17786] ['unlink', '/dev/nbd/RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a/VHD-d4a70912-459a-4720-8901-2cb2d08930ae']
May 22 15:48:26 hs-0227 SM: [17786] pread SUCCESS
May 22 15:48:26 hs-0227 SM: [17786] ['rbd-nbd', 'unmap', '/dev/nbd4', '--name', 'client.admin']
May 22 15:48:26 hs-0227 SM: [17786] pread SUCCESS
May 22 15:48:26 hs-0227 SM: [17558] Removed host key host_OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce for d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:48:26 hs-0227 SM: [17558] lock: released /var/lock/sm/d4a70912-459a-4720-8901-2cb2d08930ae/vdi
May 22 15:48:26 hs-0227 SM: [17558] lock: closed /var/lock/sm/d4a70912-459a-4720-8901-2cb2d08930ae/vdi
May 22 15:48:26 hs-0227 SM: [17804] RBDSR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:48:26 hs-0227 SM: [17804] Calling cephutils.SR.load: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, ceph_user=admin
May 22 15:48:26 hs-0227 SM: [17804] Calling cephutils.SR._get_srlist
May 22 15:48:26 hs-0227 SM: [17804] ['ceph', 'df', '--format', 'json', '--name', 'client.admin']
May 22 15:48:26 hs-0227 SM: [17804] pread SUCCESS
May 22 15:48:26 hs-0227 SM: [17804] Calling cephutils.SR._get_sr_uuid_by_name: pool=RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a
May 22 15:48:26 hs-0227 SM: [17804] RBDVDI.load: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:48:26 hs-0227 SM: [17804] Calling cephutils.SR._get_path: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:48:26 hs-0227 SM: [17804] Calling cephutils.VDI.load: vdi_uuid=d4a70912-459a-4720-8901-2cb2d08930ae
May 22 15:48:26 hs-0227 SM: [17804] vdi_detach {'sr_uuid': '976e0ed5-2901-47b0-8962-ad6e76d6b55a', 'subtask_of': 'DummyRef:|db69d1d9-d5be-3c1a-8e0d-16a590f49d0b|VDI.detach', 'vdi_ref': 'OpaqueRef:fbbb6e4f-e724-2de2-5102-4685c9a73118', 'vdi_on_boot': 'persist', 'args': [], 'vdi_location': 'd4a70912-459a-4720-8901-2cb2d08930ae', 'host_ref': 'OpaqueRef:41824a7f-7c1c-9f5a-bc06-003bdaa339ce', 'session_ref': 'OpaqueRef:6cb3fbf0-c35a-52f0-8f62-9a50a0328dab', 'device_config': {'SRmaster': 'false'}, 'command': 'vdi_detach', 'vdi_allow_caching': 'false', 'sr_ref': 'OpaqueRef:88cb8c50-c240-d970-05fa-86c11d40a376', 'local_cache_sr': '4294b190-9c60-9ac2-60a2-de98951661bd', 'vdi_uuid': 'd4a70912-459a-4720-8901-2cb2d08930ae'}
May 22 15:48:26 hs-0227 SM: [17804] lock: opening lock file /var/lock/sm/d4a70912-459a-4720-8901-2cb2d08930ae/vdi
May 22 15:48:26 hs-0227 SM: [17804] lock: closed /var/lock/sm/d4a70912-459a-4720-8901-2cb2d08930ae/vdi
May 22 15:48:27 hs-0227 snapwatchd: [2336] XS-PATH -> /vss/b0240f14-f811-6087-acb2-a413fd652886


rposudnevskiy commented on August 10, 2024

Hi,
Thank you. I couldn't find any messages in the XenServer log or SMlog related to the VDI with uuid b45517d2-768e-49d7-8c57-c7fe15c19d06, for which you received the error in XD. Could you please try again and send the full log files? You can send them to my email or attach them to a comment.
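
On a XenServer host the files in question normally live at /var/log/SMlog and /var/log/xensource.log; a sketch for bundling them, rotated copies included, for attaching here:

# Collect current and rotated logs into a single archive.
tar czf /tmp/rbdsr-logs.tgz /var/log/SMlog* /var/log/xensource.log*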


AkagiS commented on August 10, 2024

I couldn't find any messages in the XenServer log or SMlog related to the VDI with uuid b45517d2-768e-49d7-8c57-c7fe15c19d06, for which you received the error in XD

I think that's the cause of the problem: the UUIDs of the created disks are wrong.
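
One way to cross-check is to diff XAPI's view of the SR against the RBD images, since each image should be named VHD-<vdi-uuid> (a sketch; --minimal prints comma-separated uuids, and plain rbd ls excludes snapshots):

# XAPI's VDI uuids for the SR, one per line, sorted
xe vdi-list sr-uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a params=uuid --minimal | tr ',' '\n' | sort > /tmp/xapi-vdis
# RBD image names with the VHD- prefix stripped, sorted
rbd ls --pool RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a | sed 's/^VHD-//' | sort > /tmp/rbd-vdis
diff /tmp/xapi-vdis /tmp/rbd-vdis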
Another attempt.

Let's check the disks before starting VM provisioning:

rbd ls -l --pool RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a

NAME SIZE PARENT FMT PROT LOCK
VHD-23bc5d8f-3d80-4c2f-827b-e8cb9a3b67e1 102400M 2 excl
VHD-3972d793-eb31-4268-bdbc-e698dc31e68b 102400M 2 excl
VHD-3d9c2d4e-1ab3-4f00-a685-b8720f7c8f55 2680M 2
VHD-5199d662-a341-4343-9ade-3d2266e75117 102400M 2 excl
VHD-68d49137-2107-4759-937e-2275714e45eb 32768M 2 excl
VHD-70e2e8a2-8129-454d-b61d-573d9a6331ba 2680M 2
VHD-8b0d71dc-01a6-4ef2-9013-c92d1eeda02c 10240M 2 excl
VHD-a253256c-7216-43ae-8f92-286b6ec4d13b 102400M 2 excl
VHD-a92c7cad-b539-4106-a959-37c21cecc420 102400M 2 excl
VHD-accec397-5cd9-4556-b65e-65e24d75fc65 35840M 2
VHD-d37ab048-427a-40ef-96b9-3acce33101f3 102400M 2 excl
VHD-e1fe0194-9df4-4e82-b336-269cfe05e1a0 24576M 2

xe vdi-list sr-uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a params=uuid,vbd-uuids,virtual-size

uuid ( RO) : a92c7cad-b539-4106-a959-37c21cecc420
vbd-uuids (SRO): bc494d8e-6994-5648-c29a-90afa6cebdec
virtual-size ( RO): 107374182400
uuid ( RO) : 5199d662-a341-4343-9ade-3d2266e75117
vbd-uuids (SRO): 6b4a86bf-f470-4c69-4a69-26cd0bb52b91
virtual-size ( RO): 107374182400
uuid ( RO) : 23bc5d8f-3d80-4c2f-827b-e8cb9a3b67e1
vbd-uuids (SRO): 6b27f76a-4aac-a726-8bea-e78e5b2df35a
virtual-size ( RO): 107374182400
uuid ( RO) : a253256c-7216-43ae-8f92-286b6ec4d13b
vbd-uuids (SRO): 09a3484b-6559-be22-cdf8-4c08825adc00
virtual-size ( RO): 107374182400
uuid ( RO) : 3d9c2d4e-1ab3-4f00-a685-b8720f7c8f55
vbd-uuids (SRO):
virtual-size ( RO): 2810183680
uuid ( RO) : accec397-5cd9-4556-b65e-65e24d75fc65
vbd-uuids (SRO): f5f9423b-569b-c42b-eb2a-a9f362117c3c
virtual-size ( RO): 37580963840
uuid ( RO) : e1fe0194-9df4-4e82-b336-269cfe05e1a0
vbd-uuids (SRO): b17fe295-8cbd-5b35-d252-d2f7743a34ea
virtual-size ( RO): 25769803776
uuid ( RO) : 8b0d71dc-01a6-4ef2-9013-c92d1eeda02c
vbd-uuids (SRO): ae4d945a-8b28-e9cb-ccdd-110687d1c7ac
virtual-size ( RO): 10737418240
uuid ( RO) : 3972d793-eb31-4268-bdbc-e698dc31e68b
vbd-uuids (SRO): 254e98fd-7423-8536-55b0-60a57a3922e9
virtual-size ( RO): 107374182400
uuid ( RO) : d37ab048-427a-40ef-96b9-3acce33101f3
vbd-uuids (SRO): 5bd54814-1d0d-bdc3-74c3-1967ee38cddb
virtual-size ( RO): 107374182400
uuid ( RO) : 70e2e8a2-8129-454d-b61d-573d9a6331ba
vbd-uuids (SRO):
virtual-size ( RO): 2810183680
uuid ( RO) : 68d49137-2107-4759-937e-2275714e45eb
vbd-uuids (SRO): 81365652-b403-de3d-3398-4e33fe3f262a
virtual-size ( RO): 34359738368

Then I start VM provisioning from the template.
On the last step of provisioning I get the same error:

Error Id: XDDS:4A5116C7

Exception:
Citrix.Console.PowerShellSdk.BackgroundTaskService.BackgroundTask.TaskTerminatedException Failed to create the virtual machine; DOMAIN\VDI-0001$.
at Citrix.Console.PowerShellSdk.ProvisioningSchemeService.BackgroundTasks.AnalyzeMachineProvisioningResultsTask.RunTask()
at Citrix.Console.PowerShellSdk.BackgroundTaskService.BackgroundTask.Task.Run()
at Citrix.Console.PowerShellSdk.BackgroundTaskService.BackgroundTask.Task.RunSubTasks()
at Citrix.Console.PowerShellSdk.ProvisioningSchemeService.BackgroundTasks.McsMachineCreationTask.RunTask()
at Citrix.Console.PowerShellSdk.BackgroundTaskService.BackgroundTask.Task.Run()
at Citrix.Console.PowerShellSdk.BackgroundTaskService.BackgroundTask.Task.RunSubTasks()
at Citrix.Console.PowerShellSdk.BackgroundTaskService.BackgroundTask.Task.Run()
at Citrix.Console.PowerShellSdk.ProvisioningSchemeService.Scripts.CreateMcsBasedDesktopCatalogScript.<>c__DisplayClass7_1.b__2()
at Citrix.Console.PowerShellInteraction.PowerShellScript`1.<>c__DisplayClass81_0.b__0()

DOMAIN\VDI-0001$ : [DOMAIN\VDI-0001$, Failed to create the virtual machine; DOMAIN\VDI-0001$.
Error Details
MachineFailure
 : DOMAIN\VDI-0001$Inner Error:
Failure in AssignDisktoVM, INTERNAL_ERROR, Storage_interface.Vdi_does_not_exist("6ebfd908-a159-49ec-bac2-01f29d42be4e")
Error Details
ErrorID
 : PluginUtilities.Exceptions.ManagedMachineGeneralExceptionTaskErrorInformation
 : PluginUtilities.Exceptions.ManagedMachineGeneralException: Failure in AssignDisktoVM, INTERNAL_ERROR, Storage_interface.Vdi_does_not_exist("6ebfd908-a159-49ec-bac2-01f29d42be4e") ---> PluginUtilities.Exceptions.ManagedMachineGeneralException: Failure in AssignDisktoVM, INTERNAL_ERROR, Storage_interface.Vdi_does_not_exist("6ebfd908-a159-49ec-bac2-01f29d42be4e") ---> PluginUtilities.Exceptions.WrappedPluginException: Internal error: Storage_interface.Vdi_does_not_exist("6ebfd908-a159-49ec-bac2-01f29d42be4e")

Created disks:

rbd ls -l --pool RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a

NAME SIZE PARENT FMT PROT LOCK
VHD-23bc5d8f-3d80-4c2f-827b-e8cb9a3b67e1 102400M 2 excl
VHD-3972d793-eb31-4268-bdbc-e698dc31e68b 102400M 2 excl
VHD-3d9c2d4e-1ab3-4f00-a685-b8720f7c8f55 2680M 2
VHD-5199d662-a341-4343-9ade-3d2266e75117 102400M 2 excl
VHD-68d49137-2107-4759-937e-2275714e45eb 32768M 2 excl
VHD-70e2e8a2-8129-454d-b61d-573d9a6331ba 2680M 2
VHD-77ea08d9-698b-442b-8cff-88e18a1ad6a3 35840M RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a/VHD-c2a35208-dcfd-4840-aa64-833f2c883c9c@SNAP-e9a87f31-a3ea-4965-bf94-366a7efe7969 2
VHD-77ea08d9-698b-442b-8cff-88e18a1ad6a3@SNAP-483461f3-37ce-48ef-8b48-38f537822bcd 35840M RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a/VHD-c2a35208-dcfd-4840-aa64-833f2c883c9c@SNAP-e9a87f31-a3ea-4965-bf94-366a7efe7969 2 yes
VHD-8b0d71dc-01a6-4ef2-9013-c92d1eeda02c 10240M 2 excl
VHD-a253256c-7216-43ae-8f92-286b6ec4d13b 102400M 2 excl
VHD-a92c7cad-b539-4106-a959-37c21cecc420 102400M 2 excl
VHD-accec397-5cd9-4556-b65e-65e24d75fc65 35840M 2
VHD-accec397-5cd9-4556-b65e-65e24d75fc65@SNAP-91f052c4-4d0c-4388-b7b6-a08fa710346f 35840M 2 yes
VHD-c2a35208-dcfd-4840-aa64-833f2c883c9c 35840M 2
VHD-c2a35208-dcfd-4840-aa64-833f2c883c9c@SNAP-e9a87f31-a3ea-4965-bf94-366a7efe7969 35840M 2 yes
VHD-d37ab048-427a-40ef-96b9-3acce33101f3 102400M 2 excl
VHD-e1fe0194-9df4-4e82-b336-269cfe05e1a0 24576M 2

xe vdi-list sr-uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a params=uuid,vbd-uuids,virtual-size

uuid ( RO) : a92c7cad-b539-4106-a959-37c21cecc420
vbd-uuids (SRO): bc494d8e-6994-5648-c29a-90afa6cebdec
virtual-size ( RO): 107374182400
uuid ( RO) : 5199d662-a341-4343-9ade-3d2266e75117
vbd-uuids (SRO): 6b4a86bf-f470-4c69-4a69-26cd0bb52b91
virtual-size ( RO): 107374182400
uuid ( RO) : 23bc5d8f-3d80-4c2f-827b-e8cb9a3b67e1
vbd-uuids (SRO): 6b27f76a-4aac-a726-8bea-e78e5b2df35a
virtual-size ( RO): 107374182400
uuid ( RO) : 483461f3-37ce-48ef-8b48-38f537822bcd
vbd-uuids (SRO):
virtual-size ( RO): 37580963840
uuid ( RO) : a253256c-7216-43ae-8f92-286b6ec4d13b
vbd-uuids (SRO): 09a3484b-6559-be22-cdf8-4c08825adc00
virtual-size ( RO): 107374182400
uuid ( RO) : 91f052c4-4d0c-4388-b7b6-a08fa710346f
vbd-uuids (SRO): 08218e3f-136c-a830-7e5e-5551d9ef8c32
virtual-size ( RO): 37580963840
uuid ( RO) : 3d9c2d4e-1ab3-4f00-a685-b8720f7c8f55
vbd-uuids (SRO):
virtual-size ( RO): 2810183680
uuid ( RO) : e9a87f31-a3ea-4965-bf94-366a7efe7969
vbd-uuids (SRO):
virtual-size ( RO): 37580963840
uuid ( RO) : accec397-5cd9-4556-b65e-65e24d75fc65
vbd-uuids (SRO): f5f9423b-569b-c42b-eb2a-a9f362117c3c
virtual-size ( RO): 37580963840
uuid ( RO) : c2a35208-dcfd-4840-aa64-833f2c883c9c
vbd-uuids (SRO):
virtual-size ( RO): 37580963840
uuid ( RO) : e1fe0194-9df4-4e82-b336-269cfe05e1a0
vbd-uuids (SRO): b17fe295-8cbd-5b35-d252-d2f7743a34ea
virtual-size ( RO): 25769803776
uuid ( RO) : 8b0d71dc-01a6-4ef2-9013-c92d1eeda02c
vbd-uuids (SRO): ae4d945a-8b28-e9cb-ccdd-110687d1c7ac
virtual-size ( RO): 10737418240
uuid ( RO) : 3972d793-eb31-4268-bdbc-e698dc31e68b
vbd-uuids (SRO): 254e98fd-7423-8536-55b0-60a57a3922e9
virtual-size ( RO): 107374182400
uuid ( RO) : d37ab048-427a-40ef-96b9-3acce33101f3
vbd-uuids (SRO): 5bd54814-1d0d-bdc3-74c3-1967ee38cddb
virtual-size ( RO): 107374182400
uuid ( RO) : 70e2e8a2-8129-454d-b61d-573d9a6331ba
vbd-uuids (SRO):
virtual-size ( RO): 2810183680
uuid ( RO) : 68d49137-2107-4759-937e-2275714e45eb
vbd-uuids (SRO): 81365652-b403-de3d-3398-4e33fe3f262a
virtual-size ( RO): 34359738368


AkagiS commented on August 10, 2024

Hi again,

I think this is not the created VM:

xe vm-list uuid=3f383cf4-ddbe-48ee-a5de-b1289aeb26b0 params=all

uuid ( RO) : 3f383cf4-ddbe-48ee-a5de-b1289aeb26b0
name-label ( RW): Control domain on host: hs-0229
name-description ( RW): The domain which manages physical devices and manages other domains
user-version ( RW): 1
is-a-template ( RW): false
is-a-snapshot ( RO): false
snapshot-of ( RO):
snapshots ( RO):
snapshot-time ( RO): 19700101T00:00:00Z
snapshot-info ( RO):
parent ( RO):
children ( RO):
is-control-domain ( RO): true
power-state ( RO): running
memory-actual ( RO): 4294967296
memory-target ( RO):
memory-overhead ( RO): 76546048
memory-static-max ( RW): 4294967296
memory-dynamic-max ( RW): 4294967296
memory-dynamic-min ( RW): 4294967296
memory-static-min ( RW): 4294967296
suspend-VDI-uuid ( RW):
suspend-SR-uuid ( RW):
VCPUs-params (MRW):
VCPUs-max ( RW): 40
VCPUs-at-startup ( RW): 40
actions-after-shutdown ( RW): Destroy
actions-after-reboot ( RW): Destroy
actions-after-crash ( RW): Destroy
console-uuids (SRO): 458bd2d9-f7fb-fb65-1ef9-da8f201590f2; 4f619275-5ec3-4b10-9045-ed29d13a2cd4
hvm ( RO): false
platform (MRW):
allowed-operations (SRO): changing_dynamic_range; changing_static_range
current-operations (SRO):
blocked-operations (MRW):
allowed-VBD-devices (SRO):
allowed-VIF-devices (SRO):
possible-hosts ( RO):
HVM-boot-policy ( RW):
HVM-boot-params (MRW):
HVM-shadow-multiplier ( RW): 1.000
PV-kernel ( RW):
PV-ramdisk ( RW):
PV-args ( RW):
PV-legacy-args ( RW):
PV-bootloader ( RW):
PV-bootloader-args ( RW):
last-boot-CPU-flags ( RO):
last-boot-record ( RO):
resident-on ( RO): aa35216e-8d29-4e25-8d60-42ca9ff58d69
affinity ( RW): aa35216e-8d29-4e25-8d60-42ca9ff58d69
other-config (MRW): storage_driver_domain: OpaqueRef:1d8d2940-62c6-107d-e7c3-88508e87209c; is_system_domain: true
dom-id ( RO): 0
recommendations ( RO):
xenstore-data (MRW):
ha-always-run ( RW) [DEPRECATED]: false
ha-restart-priority ( RW):
blobs ( RO):
start-time ( RO): 19700101T00:00:00Z
install-time ( RO): 19700101T00:00:00Z
VCPUs-number ( RO): 40
VCPUs-utilisation (MRO):
os-version (MRO):
PV-drivers-version (MRO):
PV-drivers-up-to-date ( RO) [DEPRECATED]:
memory (MRO):
disks (MRO):
VBDs (SRO):
networks (MRO):
PV-drivers-detected ( RO):
other (MRO):
live ( RO):
guest-metrics-last-updated ( RO):
can-use-hotplug-vbd ( RO):
can-use-hotplug-vif ( RO):
cooperative ( RO) [DEPRECATED]:
tags (SRW):
appliance ( RW):
start-delay ( RW): 0
shutdown-delay ( RW): 0
order ( RW): 0
version ( RO): 0
generation-id ( RO):
hardware-platform-version ( RO): 0
has-vendor-device ( RW): false
requires-reboot ( RO): false
reference-label ( RO):

from rbdsr.

AkagiS avatar AkagiS commented on August 10, 2024

The SMlog and xensource.log related to the problem are attached:
Smlog.txt
xensource.log.txt

from rbdsr.

rposudnevskiy avatar rposudnevskiy commented on August 10, 2024

Hi,
I still can't find messages in the log files for some VDIs that are created during the provisioning process, even though they should be there.
The provisioning process in MCS consists of many steps, such as the following (a short sketch of the disk naming convention these steps rely on comes after the list):

  • MCS creates a snapshot of the master VM. You can see it as the master VM disk VHD-accec397-5cd9-4556-b65e-65e24d75fc65 and its snapshot SNAP-91f052c4-4d0c-4388-b7b6-a08fa710346f.
  • MCS creates a full copy of the master VM snapshot and places it on each storage repository defined in the host connection. It creates the disk VHD-c2a35208-dcfd-4840-aa64-833f2c883c9c and the snapshot SNAP-e9a87f31-a3ea-4965-bf94-366a7efe7969, and then copies the master VM snapshot into it. You can see in xensource.log that the copy is performed with the sparse_dd command:
    May 24 17:17:54 hs-0229 xapi: [debug|hs-0229|5210 INET :::80|Async.VDI.copy R:ac24f60bfccd|xapi] /usr/libexec/xapi/sparse_dd -machine -src /dev/sm/backend/976e0ed5-2901-47b0-8962-ad6e76d6b55a/91f052c4-4d0c-4388-b7b6-a08fa710346f -dest /dev/sm/backend/976e0ed5-2901-47b0-8962-ad6e76d6b55a/c2a35208-dcfd-4840-aa64-833f2c883c9c -size 37580963840 -good-ciphersuites !EXPORT:RSA+AES128-SHA256 -legacy-ciphersuites RSA+AES256-SHA:RSA+AES128-SHA:RSA+RC4-SHA:RSA+DES-CBC3-SHA -ssl-legacy -prezeroed
  • MCS creates a preparation VM and a small prepare-identity disk (VHD-21253ee3-2885-42d0-bd6c-94449018e3b7) to prepare the disk created in the previous step (VHD-c2a35208-dcfd-4840-aa64-833f2c883c9c) to be used later as the base disk for provisioned VMs. You can see the messages related to this step in xensource.log:
    May 24 17:34:22 hs-0229 xenopsd-xc: [debug|hs-0229|1261 |VM.start R:068fa18aee06|xenops_server] VM.add {"id": "b358481c-305d-bb70-f3e8-9a930e5a59c8", "name": "Preparation - VDI-7-64-Ceph"......
    May 24 17:34:22 hs-0229 xapi: [ info|hs-0229|5795 INET :::80|VM.start R:068fa18aee06|xenops] xenops: VM.import_metadata {"vm": {"id": "b358481c-305d-bb70-f3e8-9a930e5a59c8", "name": "Preparation - VDI-7-64-Ceph"..........
    May 24 17:34:24 hs-0229 xenopsd-xc: [debug|hs-0229|14 |Parallel:task=2058.atoms=2.(VBD.plug RW vm=b358481c-305d-bb70-f3e8-9a930e5a59c8)|xenops] Device.Vbd.add (device_number=Ide(0, 0) | params=/dev/sm/backend/976e0ed5-2901-47b0-8962-ad6e76d6b55a/c2a35208-dcfd-4840-aa64-833f2c883c9c | phystype=phys)
    May 24 17:34:24 hs-0229 xenopsd-xc: [debug|hs-0229|12 |Parallel:task=2058.atoms=2.(VBD.plug RW vm=b358481c-305d-bb70-f3e8-9a930e5a59c8)|xenops] Device.Vbd.add (device_number=Ide(1, 0) | params=/dev/sm/backend/976e0ed5-2901-47b0-8962-ad6e76d6b55a/21253ee3-2885-42d0-bd6c-94449018e3b7 | phystype=phys)
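
To keep the objects in these steps straight, here is a minimal sketch of the naming convention RBDSR appears to use, as inferred from the rbd/xe listings in this thread; the authoritative scheme lives in cephutils.py, and the helper names below are illustrative only:

    def rbd_pool(sr_uuid):
        # Each SR maps to one Ceph pool.
        return "RBD_XenStorage-%s" % sr_uuid

    def rbd_image(vdi_uuid):
        # Each VDI maps to one RBD image in that pool.
        return "VHD-%s" % vdi_uuid

    def rbd_snapshot(sr_uuid, base_vdi_uuid, snap_vdi_uuid):
        # Each snapshot VDI maps to an RBD snapshot of its base image,
        # e.g. RBD_XenStorage-<sr>/VHD-<base>@SNAP-<snap>.
        return "%s/%s@SNAP-%s" % (rbd_pool(sr_uuid),
                                  rbd_image(base_vdi_uuid), snap_vdi_uuid)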

Your logs finish at the moment when MCS shuts down and deletes the preparation VM, which means the base image has been prepared.
But that is not the final step. After that, MCS should create a new VM, create a clone of the base image, etc. You can see that MCS created the clone
VHD-77ea08d9-698b-442b-8cff-88e18a1ad6a3 35840M RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a/VHD-c2a35208-dcfd-4840-aa64-833f2c883c9c@SNAP-e9a87f31-a3ea-4965-bf94-366a7efe7969 2
but there are no messages about it in the logs.
We hit our error at this moment.
So I guess the logs are not complete.
Could you please send the full logs from your XenServer, or at least run commands like these and send the output:
egrep "c2a35208-dcfd-4840-aa64-833f2c883c9c|e9a87f31-a3ea-4965-bf94-366a7efe7969|accec397-5cd9-4556-b65e-65e24d75fc65|91f052c4-4d0c-4388-b7b6-a08fa710346f|77ea08d9-698b-442b-8cff-88e18a1ad6a3|483461f3-37ce-48ef-8b48-38f537822bcd|21253ee3-2885-42d0-bd6c-94449018e3b7" xensource.log.txt
egrep "c2a35208-dcfd-4840-aa64-833f2c883c9c|e9a87f31-a3ea-4965-bf94-366a7efe7969|accec397-5cd9-4556-b65e-65e24d75fc65|91f052c4-4d0c-4388-b7b6-a08fa710346f|77ea08d9-698b-442b-8cff-88e18a1ad6a3|483461f3-37ce-48ef-8b48-38f537822bcd|21253ee3-2885-42d0-bd6c-94449018e3b7" SMlog
Here the UUIDs are the ones you highlighted in your previous message. You should change them for a new try.

Thank you.
PS: I can't test RBDSR with XenDesktop as XD is a commercial product and I don't have it. Thank you again for your help and I hope we can resolve this problem.

from rbdsr.

AkagiS avatar AkagiS commented on August 10, 2024

Hi,
Grepped logs.
SMlog.filtered.txt
xensource.log.filtered.txt

Also I'm going to check all Xen nodes for the related error.

from rbdsr.

rposudnevskiy avatar rposudnevskiy commented on August 10, 2024

Yes, please. If it is possible, please check all Xen nodes.

from rbdsr.

AkagiS avatar AkagiS commented on August 10, 2024

Another try.
This time I put all nodes (except one) into maintenance mode before starting provisioning.

SMlog.txt

May 29 14:23:30 hs-0226 SM: [3708] RBDVDI.update: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, vdi_uuid=7052ea4f-5c92-4219-bbff-fea88393247d
May 29 14:23:30 hs-0226 SM: [3708] Calling cephutils.VDI.update: sr_uuid=976e0ed5-2901-47b0-8962-ad6e76d6b55a, vdi_uuid=7052ea4f-5c92-4219-bbff-fea88393247d

May 29 14:23:30 hs-0226 SM: [3708] ['rbd', 'image-meta', 'set', 'VHD-7052ea4f-5c92-4219-bbff-fea88393247d', 'VDI_LABEL', 'VDI-0001-clone-of-VDI-7-64-Ceph-baseDisk', '--pool', 'RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a', '--name', 'client.admin']
May 29 14:23:30 hs-0226 SM: [3708] FAILED in util.pread: (rc 2) stdout: '', stderr: 'rbd: error opening image VHD-7052ea4f-5c92-4219-bbff-fea88393247d: (2) No such file or directory
May 29 14:23:30 hs-0226 SM: [3708] '
May 29 14:23:30 hs-0226 SM: [3708] ***** vdi_update: EXCEPTION <class 'util.CommandException'>, No such file or directory
May 29 14:23:30 hs-0226 SM: [3708] File "/opt/xensource/sm/SRCommand.py", line 110, in run
May 29 14:23:30 hs-0226 SM: [3708] return self._run_locked(sr)
May 29 14:23:30 hs-0226 SM: [3708] File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
May 29 14:23:30 hs-0226 SM: [3708] rv = self._run(sr, target)
May 29 14:23:30 hs-0226 SM: [3708] File "/opt/xensource/sm/SRCommand.py", line 230, in _run
May 29 14:23:30 hs-0226 SM: [3708] return target.update(self.params['sr_uuid'], self.vdi_uuid)
May 29 14:23:30 hs-0226 SM: [3708] File "/opt/xensource/sm/RBDSR", line 650, in update
May 29 14:23:30 hs-0226 SM: [3708] cephutils.VDI.update(self, sr_uuid, vdi_uuid)
May 29 14:23:30 hs-0226 SM: [3708] File "/opt/xensource/sm/cephutils.py", line 276, in update
May 29 14:23:30 hs-0226 SM: [3708] util.pread2(["rbd", "image-meta", "set", vdi_name, "VDI_LABEL", self.label, "--pool", self.sr.CEPH_POOL_NAME, "--name", self.sr.CEPH_USER])
May 29 14:23:30 hs-0226 SM: [3708] File "/opt/xensource/sm/util.py", line 189, in pread2
May 29 14:23:30 hs-0226 SM: [3708] return pread(cmdlist, quiet = quiet)
May 29 14:23:30 hs-0226 SM: [3708] File "/opt/xensource/sm/util.py", line 182, in pread
May 29 14:23:30 hs-0226 SM: [3708] raise CommandException(rc, str(cmdlist), stderr.strip())
May 29 14:23:30 hs-0226 SM: [3708]
May 29 14:23:30 hs-0226 SM: [3708] Raising exception [202, General backend error [opterr=Command ['rbd', 'image-meta', 'set', 'VHD-7052ea4f-5c92-4219-bbff-fea88393247d', 'VDI_LABEL', 'VDI-0001-clone-of-VDI-7-64-Ceph-baseDisk', '--pool', 'RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a', '--name', 'client.admin'] failed (rbd: error opening image VHD-7052ea4f-5c92-4219-bbff-fea88393247d: (2) No such file or directory): No such file or directory]]
May 29 14:23:30 hs-0226 SM: [3708] ***** RBD: EXCEPTION <class 'SR.SROSError'>, General backend error [opterr=Command ['rbd', 'image-meta', 'set', 'VHD-7052ea4f-5c92-4219-bbff-fea88393247d', 'VDI_LABEL', 'VDI-0001-clone-of-VDI-7-64-Ceph-baseDisk', '--pool', 'RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a', '--name', 'client.admin'] failed (rbd: error opening image VHD-7052ea4f-5c92-4219-bbff-fea88393247d: (2) No such file or directory): No such file or directory]
May 29 14:23:30 hs-0226 SM: [3708] File "/opt/xensource/sm/SRCommand.py", line 353, in run
May 29 14:23:30 hs-0226 SM: [3708] ret = cmd.run(sr)
May 29 14:23:30 hs-0226 SM: [3708] File "/opt/xensource/sm/SRCommand.py", line 120, in run
May 29 14:23:30 hs-0226 SM: [3708] raise xs_errors.XenError(excType, opterr=msg)
May 29 14:23:30 hs-0226 SM: [3708] File "/opt/xensource/sm/xs_errors.py", line 52, in init
May 29 14:23:30 hs-0226 SM: [3708] raise SR.SROSError(errorcode, errormessage)
May 29 14:23:30 hs-0226 SM: [3708]
May 29 14:23:30 hs-0226 SM: [3557] ***** vdi_update: EXCEPTION <class 'XenAPI.Failure'>, ['INTERNAL_ERROR', 'Storage_interface.Vdi_does_not_exist("7052ea4f-5c92-4219-bbff-fea88393247d")']
May 29 14:23:30 hs-0226 SM: [3557] File "/opt/xensource/sm/SRCommand.py", line 110, in run
May 29 14:23:30 hs-0226 SM: [3557] return self._run_locked(sr)
May 29 14:23:30 hs-0226 SM: [3557] File "/opt/xensource/sm/SRCommand.py", line 159, in _run_locked
May 29 14:23:30 hs-0226 SM: [3557] rv = self._run(sr, target)
May 29 14:23:30 hs-0226 SM: [3557] File "/opt/xensource/sm/SRCommand.py", line 230, in _run
May 29 14:23:30 hs-0226 SM: [3557] return target.update(self.params['sr_uuid'], self.vdi_uuid)
May 29 14:23:30 hs-0226 SM: [3557] File "/opt/xensource/sm/RBDSR", line 662, in update
May 29 14:23:30 hs-0226 SM: [3557] self.session.xenapi.VDI.set_name_label(snapshot_vdi_ref, self.session.xenapi.VDI.get_name_label(self_vdi_ref))
May 29 14:23:30 hs-0226 SM: [3557] File "/usr/lib/python2.7/site-packages/XenAPI.py", line 254, in call
May 29 14:23:30 hs-0226 SM: [3557] return self.__send(self.__name, args)
May 29 14:23:30 hs-0226 SM: [3557] File "/usr/lib/python2.7/site-packages/XenAPI.py", line 150, in xenapi_request
May 29 14:23:30 hs-0226 SM: [3557] result = _parse_result(getattr(self, methodname)(*full_params))
May 29 14:23:30 hs-0226 SM: [3557] File "/usr/lib/python2.7/site-packages/XenAPI.py", line 228, in _parse_result
May 29 14:23:30 hs-0226 SM: [3557] raise Failure(result['ErrorDescription'])
May 29 14:23:30 hs-0226 SM: [3557]
May 29 14:23:30 hs-0226 SM: [3557] Raising exception [202, General backend error [opterr=['INTERNAL_ERROR', 'Storage_interface.Vdi_does_not_exist("7052ea4f-5c92-4219-bbff-fea88393247d")']]]
May 29 14:23:30 hs-0226 SM: [3557] ***** RBD: EXCEPTION <class 'SR.SROSError'>, General backend error [opterr=['INTERNAL_ERROR', 'Storage_interface.Vdi_does_not_exist("7052ea4f-5c92-4219-bbff-fea88393247d")']]
May 29 14:23:30 hs-0226 SM: [3557] File "/opt/xensource/sm/SRCommand.py", line 353, in run
May 29 14:23:30 hs-0226 SM: [3557] ret = cmd.run(sr)
May 29 14:23:30 hs-0226 SM: [3557] File "/opt/xensource/sm/SRCommand.py", line 120, in run
May 29 14:23:30 hs-0226 SM: [3557] raise xs_errors.XenError(excType, opterr=msg)
May 29 14:23:30 hs-0226 SM: [3557] File "/opt/xensource/sm/xs_errors.py", line 52, in init
May 29 14:23:30 hs-0226 SM: [3557] raise SR.SROSError(errorcode, errormessage)
May 29 14:23:30 hs-0226 SM: [3557]
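
The failing step is the unconditional rbd image-meta set in cephutils.VDI.update (line 276 in the trace above): the image VHD-7052ea4f-5c92-4219-bbff-fea88393247d no longer exists when update is called, so the whole vdi_update operation aborts. A minimal defensive sketch of a workaround is below. It is not the actual RBDSR fix; it only reuses util.pread2/util.CommandException/util.SMlog from the SM helper module, which the traceback shows are available in this context, and the function name is hypothetical.

    import util  # XenServer SM helper module (pread2, CommandException, SMlog)

    def set_vdi_label_if_exists(pool, image, label, ceph_user="client.admin"):
        # "rbd info" exits non-zero when the image is missing, the same
        # (2) No such file or directory failure that aborts vdi_update above.
        try:
            util.pread2(["rbd", "info", image,
                         "--pool", pool, "--name", ceph_user])
        except util.CommandException:
            util.SMlog("RBD image %s not found in pool %s, skipping VDI_LABEL"
                       % (image, pool))
            return
        util.pread2(["rbd", "image-meta", "set", image, "VDI_LABEL", label,
                     "--pool", pool, "--name", ceph_user])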

from rbdsr.

AkagiS avatar AkagiS commented on August 10, 2024

This disk and its snapshot were not deleted automatically (as seen from the Xen console):

VHD-b034c911-73fc-486a-aaca-c30742b8c9b1 35840M 2
VHD-b034c911-73fc-486a-aaca-c30742b8c9b1@SNAP-a9ab077c-3401-4379-9840-d9bc25759ec4 35840M 2 yes

[root@hs-0226 log]# xe vdi-list uuid=b034c911-73fc-486a-aaca-c30742b8c9b1
uuid ( RO) : b034c911-73fc-486a-aaca-c30742b8c9b1
name-label ( RW): VDI-7-64-Ceph-baseDisk
name-description ( RW): Template
sr-uuid ( RO): 976e0ed5-2901-47b0-8962-ad6e76d6b55a
virtual-size ( RO): 37580963840
sharable ( RO): false
read-only ( RO): false

xe vdi-list uuid=a9ab077c-3401-4379-9840-d9bc25759ec4
uuid ( RO) : a9ab077c-3401-4379-9840-d9bc25759ec4
name-label ( RW): VDI-7-64-Ceph-baseDisk
name-description ( RW): Template
sr-uuid ( RO): 976e0ed5-2901-47b0-8962-ad6e76d6b55a
virtual-size ( RO): 37580963840
sharable ( RO): false
read-only ( RO): true
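
Until the leak itself is fixed, the leftover pair can be removed by hand. A hedged sketch follows, using the uuids from the listing above; run it only after confirming that nothing references these VDIs, since the rbd steps are destructive.

    import subprocess

    POOL = "RBD_XenStorage-976e0ed5-2901-47b0-8962-ad6e76d6b55a"
    BASE = "VHD-b034c911-73fc-486a-aaca-c30742b8c9b1"
    SNAP = "SNAP-a9ab077c-3401-4379-9840-d9bc25759ec4"

    def run(cmd):
        print(" ".join(cmd))
        subprocess.check_call(cmd)

    # Drop the XAPI database records first (vdi-forget only removes the
    # record, it never touches the data on the SR).
    run(["xe", "vdi-forget", "uuid=a9ab077c-3401-4379-9840-d9bc25759ec4"])
    run(["xe", "vdi-forget", "uuid=b034c911-73fc-486a-aaca-c30742b8c9b1"])

    # Then remove the RBD objects: snapshot before image, and a protected
    # snapshot must be unprotected first (unprotect fails while a clone
    # still depends on it).
    run(["rbd", "snap", "unprotect", "%s/%s@%s" % (POOL, BASE, SNAP)])
    run(["rbd", "snap", "rm", "%s/%s@%s" % (POOL, BASE, SNAP)])
    run(["rbd", "rm", "%s/%s" % (POOL, BASE)])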

from rbdsr.
