I just set up a Pacemaker/Corosync cluster with libvirtd and two VMs.
The filesystem is a shared GFS2 filesystem, and it has been working fine so far.
A pcs resource move works: the VMs shut down and start up again on the other node.
Now I want to do a live migration, so I updated the resource with the meta option for live migration.
The live migration fails with the following error:
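The change was roughly this (a sketch from memory; allow-migrate is the VirtualDomain meta attribute that makes Pacemaker migrate instead of stop/start, and Bormann_ is the resource name that shows up in the logs):

```shell
# Enable live migration for the VirtualDomain resource "Bormann_":
# with allow-migrate=true the resource agent calls migrate_to/migrate_from
# (virsh live migration) instead of stopping the VM and starting it
# on the target node.
pcs resource update Bormann_ meta allow-migrate=true
```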
Jun 27 17:46:40 melbourne VirtualDomain(Bormann_)[22414]: INFO: Bormann_: Starting live migration to sydney (using remote hypervisor URI qemu://sydney/system ).
Jun 27 17:46:40 melbourne journal: Unable to stop block job on drive-ide0-0-0
Jun 27 17:46:40 melbourne VirtualDomain(Bormann_)[22414]: ERROR: Bormann_: live migration to qemu://sydney/system failed: 1
Jun 27 17:46:40 melbourne lrmd[2641]: notice: operation_finished: Bormann__migrate_to_0:22414:stderr [ error: internal error: Attempt to migrate guest to the same host 00020003-0004-0005-0006-000700080009 ]
Jun 27 17:46:40 melbourne lrmd[2641]: notice: operation_finished: Bormann__migrate_to_0:22414:stderr [ ocf-exit-reason:Bormann_Chessimage: live migration to qemu://sydney/system failed: 1 ]
Jun 27 17:46:40 melbourne crmd[2644]: notice: process_lrm_event: Operation Bormann__migrate_to_0: unknown error (node=melbourne, call=84, rc=1, cib-update=73, confirmed=true)
Jun 27 17:46:40 melbourne crmd[2644]: notice: process_lrm_event: melbourne-Bormann__migrate_to_0:84 [ error: internal error: Attempt to migrate guest to the same host 00020003-0004-0005-0006-000700080009\nocf-exit-reason:Bormann_: live migration to qemu://sydney/system failed: 1\n ]
Jun 27 17:46:40 melbourne VirtualDomain(Bormann_)[22455]: INFO: Issuing graceful shutdown request for domain Bormann_.
Thanks,
MarcMarin