Virtual disk: add migration case after attaching disk #6797
meinaLi wants to merge 1 commit into autotest:master
Conversation
Walkthrough
Adds two new files for a Libvirt migration test: a config file and a test script.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@libvirt/tests/src/migration/migration_with_disk/migration_after_hotplug.py`:
- Around line 61-68: The detach is currently called with ignore_status=True and
there's no check that the hotplugged disk exists on the destination after
migration, which can yield false positives; update the test so after
migration_obj.run_migration() you explicitly assert the disk_target is present
by calling vm.get_blk_devices().keys() (or equivalent) and failing if missing,
and change the remote.run_remote_cmd invocation that detaches the disk to not
use ignore_status=True (let it fail fast or propagate the error) so detach
failures cause the test to fail; keep the existing
migration_obj.run_migration_back() and final vm.get_blk_devices() check but
ensure the intermediate presence check and removal call reference
migration_obj.run_migration(), remote.run_remote_cmd(..., without
ignore_status), migration_obj.run_migration_back(), vm.get_blk_devices().keys(),
and test.fail(...) to locate the edits.
    migration_obj.run_migration()
    migration_obj.verify_default()
    remote_cmd = "virsh detach-disk %s %s" % (vm_name, disk_target)
    remote.run_remote_cmd(remote_cmd, params, ignore_status=True)
    migration_obj.run_migration_back()
    all_vm_disks = vm.get_blk_devices().keys()
    if disk_target in all_vm_disks:
        test.fail("Disk %s still exists after migration back." % disk_target)
Prevent false‑positive: verify disk survives migration and don't ignore detach failure.
The test can pass even if the disk detach fails on the destination, because errors are ignored with ignore_status=True. Add an explicit disk-presence check after migrating to the destination, and change the detach call to fail fast on error.
🛠️ Proposed fix
      migration_obj.run_migration()
      migration_obj.verify_default()
+     # Verify disk is still present on destination after migration
+     domblk = remote.run_remote_cmd(
+         "virsh domblklist %s --details" % vm_name,
+         params,
+         ignore_status=False
+     ).stdout_text
+     if disk_target not in domblk:
+         test.fail("Disk %s missing after migration." % disk_target)
      remote_cmd = "virsh detach-disk %s %s" % (vm_name, disk_target)
-     remote.run_remote_cmd(remote_cmd, params, ignore_status=True)
+     remote.run_remote_cmd(remote_cmd, params, ignore_status=False)
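One caveat with the suggested check: plain substring matching on domblklist output can over-match (for example, `vda` also matches `vda1` or a character sequence inside an image path). A stricter variant parses the target column instead; the sketch below is illustrative only, and the sample output layout is assumed from typical libvirt versions, not captured from a real host:

```python
def disk_present(domblklist_output, disk_target):
    """Return True if disk_target appears as an exact device target
    in `virsh domblklist <vm> --details`-style output.

    Comparing the target column exactly avoids false positives that
    plain substring matching would give (e.g. 'vda' inside 'vda1' or
    inside a source path).
    """
    for line in domblklist_output.splitlines():
        fields = line.split()
        # Data rows look like: Type Device Target Source;
        # the device target is the third column.
        if len(fields) >= 3 and fields[2] == disk_target:
            return True
    return False


# Illustrative output (column layout is an assumption):
sample = """\
 Type   Device   Target   Source
-----------------------------------------------------------
 file   disk     vda      /var/lib/libvirt/images/guest.qcow2
 file   disk     vdb      /var/lib/libvirt/images/hotplug.img
"""

print(disk_present(sample, "vdb"))  # True: exact target match
print(disk_present(sample, "vd"))   # False: no substring over-match
```

The same helper could back both the post-migration presence check and the final absence check after migrating back.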
        Setup steps before migration
        """
        migration_obj.setup_connection()
        if not vm.is_alive():
If 'start_vm = "yes"' is set in the cfg, the VM will already be started in migration_obj.setup_connection(); the code is at line 94 in base_steps.py.
I've removed this part now.
    migration_obj.run_migration()
    migration_obj.verify_default()
    remote_cmd = "virsh detach-disk %s %s" % (vm_name, disk_target)
    remote.run_remote_cmd(remote_cmd, params, ignore_status=True)
Why is ignore_status set to True here? If the disk detach fails, won't that have an impact?
Yes, it's better to use ignore_status=False.
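The fail-fast behaviour agreed on here can be sketched in isolation. This is a hedged illustration, not the actual test code: `CmdResult` below is a minimal stand-in for the result object that remote.run_remote_cmd returns, and the runner callables are hypothetical.

```python
class CmdResult:
    """Minimal stand-in for a remote command result object
    (hypothetical; modeled on typical exit_status/stderr fields)."""
    def __init__(self, exit_status, stderr_text=""):
        self.exit_status = exit_status
        self.stderr_text = stderr_text


def detach_disk_or_fail(run_remote, vm_name, disk_target):
    """Run a detach-disk command via run_remote and raise on failure,
    instead of silently ignoring a nonzero exit status."""
    cmd = "virsh detach-disk %s %s" % (vm_name, disk_target)
    result = run_remote(cmd)
    if result.exit_status != 0:
        raise RuntimeError("Detach of %s failed: %s"
                           % (disk_target, result.stderr_text))
    return result


# Usage with fake runners simulating success and failure:
ok = detach_disk_or_fail(lambda cmd: CmdResult(0), "guest1", "vdb")
print(ok.exit_status)  # 0

try:
    detach_disk_or_fail(lambda cmd: CmdResult(1, "error: no disk"),
                        "guest1", "vdb")
except RuntimeError as exc:
    print("caught:", exc)
```

Raising (or letting the real helper propagate the error) ensures a failed detach surfaces as a test failure rather than a silent no-op.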
Automate case: VIRT-19072 - [virtual disks] Migration after attaching disk device to guest

Signed-off-by: Meina Li <meili@redhat.com>

Force-pushed from 74a3ec1 to a222b5d
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@libvirt/tests/cfg/migration/migration_with_disk/migration_after_hotplug.cfg`:
- Line 1: The test entry is incorrectly written as a YAML list item; remove the
leading dash so the top-level key reads exactly
migration.migration_with_disk.migration_after_hotplug: (no leading "-") to
ensure the cartesian parser registers the test entry; update the declaration in
the config where the current "-
migration.migration_with_disk.migration_after_hotplug:" appears to be plain
mapping syntax.
@@ -0,0 +1,33 @@
- migration.migration_with_disk.migration_after_hotplug:
Remove the leading dash from the test entry name.
The top-level test config should be declared as migration.migration_with_disk.migration_after_hotplug: without list syntax; the current leading - can prevent the cartesian parser from registering the test entry.
🔧 Proposed fix
- - migration.migration_with_disk.migration_after_hotplug:
+ migration.migration_with_disk.migration_after_hotplug:
Automate case:
VIRT-19072 - [virtual disks] Migration after attaching disk device to guest