
Virtual disk: add migration case after attaching disk #6797

Open
meinaLi wants to merge 1 commit into autotest:master from meinaLi:disk_hotplug_migration

Conversation

@meinaLi (Contributor) commented Feb 9, 2026

Automate case:
VIRT-19072 - [virtual disks] Migration after attaching disk device to guest

Summary by CodeRabbit

  • Tests
    • Added a new migration test and configuration to validate VM migration after hot-plugging a disk. Covers disk-backed NFS migration over SSH, lifecycle and migration flags, pre/post-migration checks (disk present on destination, removed on source), migration-back flow, timeout/connection handling, and includes a "with_precopy" variant.

@meinaLi (Contributor, Author) commented Feb 9, 2026

# avocado run --vt-type libvirt --vt-omit-data-loss --vt-machine-type q35 migration.migration_with_disk.migration_after_hotplug
JOB ID     : b9aea737ef59b761d6fdf9a07e801b78ecffcf63
JOB LOG    : /var/log/avocado/job-results/job-2026-02-09T02.57-b9aea73/job.log
 (1/1) type_specific.io-github-autotest-libvirt.migration.migration_with_disk.migration_after_hotplug.with_precopy: STARTED
 (1/1) type_specific.io-github-autotest-libvirt.migration.migration_with_disk.migration_after_hotplug.with_precopy: PASS (243.86 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML   : /var/log/avocado/job-results/job-2026-02-09T02.57-b9aea73/results.html
JOB TIME   : 245.46 s

@coderabbitai bot commented Feb 9, 2026

Walkthrough

Adds two new files for a Libvirt migration test: a config migration_after_hotplug.cfg that defines an NFS/file-backed disk migration over SSH (including URIs, timeouts, source/destination states and a with_precopy variant), and a Python test migration_after_hotplug.py implementing run(test, params, env). The test prepares and hot‑plugs a disk, verifies hotplug inside the VM, performs migration (and back), detaches the disk on the source, and verifies disk presence/absence. Setup and cleanup helpers are included and a try/finally ensures cleanup.
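The flow the walkthrough describes (hot-plug, migrate, verify the disk on the destination, detach, migrate back, verify it is gone) boils down to a presence/absence check against the VM's block-device map. A self-contained sketch of that check, where the dict layout and the function name are illustrative stand-ins rather than virttest's API:

```python
def verify_disk_state(blk_devices, disk_target, expect_present):
    """Return True when disk_target's presence matches the expectation.

    blk_devices stands in for the mapping returned by vm.get_blk_devices():
    target name (e.g. 'vdb') mapped to device details.
    """
    present = disk_target in blk_devices
    return present == expect_present


# After migration, the hot-plugged disk should still be attached ...
after_migration = {"vda": {"type": "file"}, "vdb": {"type": "file"}}
assert verify_disk_state(after_migration, "vdb", expect_present=True)

# ... and after detaching it and migrating back, it should be gone.
after_detach = {"vda": {"type": "file"}}
assert verify_disk_state(after_detach, "vdb", expect_present=False)
```

In the real test, a failed expectation would be reported through test.fail() instead of an assertion.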

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ 3 passed
  • Description Check: Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check: Passed. The title clearly summarizes the main change: adding a migration test case after disk attachment, which matches the PR objective of automating test case VIRT-19072.
  • Docstring Coverage: Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.


@coderabbitai bot left a comment
Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@libvirt/tests/src/migration/migration_with_disk/migration_after_hotplug.py`:
- Around line 61-68: The detach is currently called with ignore_status=True and
there's no check that the hotplugged disk exists on the destination after
migration, which can yield false positives; update the test so after
migration_obj.run_migration() you explicitly assert the disk_target is present
by calling vm.get_blk_devices().keys() (or equivalent) and failing if missing,
and change the remote.run_remote_cmd invocation that detaches the disk to not
use ignore_status=True (let it fail fast or propagate the error) so detach
failures cause the test to fail; keep the existing
migration_obj.run_migration_back() and final vm.get_blk_devices() check but
ensure the intermediate presence check and removal call reference
migration_obj.run_migration(), remote.run_remote_cmd(..., without
ignore_status), migration_obj.run_migration_back(), vm.get_blk_devices().keys(),
and test.fail(...) to locate the edits.

Comment on lines +61 to +68

        migration_obj.run_migration()
        migration_obj.verify_default()
        remote_cmd = "virsh detach-disk %s %s" % (vm_name, disk_target)
        remote.run_remote_cmd(remote_cmd, params, ignore_status=True)
        migration_obj.run_migration_back()
        all_vm_disks = vm.get_blk_devices().keys()
        if disk_target in all_vm_disks:
            test.fail("Disk %s still exists after migration back." % disk_target)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed (repository: autotest/tp-libvirt):

find . -type f -name "migration_after_hotplug.py" | head -20
cat -n ./libvirt/tests/src/migration/migration_with_disk/migration_after_hotplug.py
find . -type f -name "base_steps.py" -path "*/provider/migration/*" | head -5
cat -n ./provider/migration/base_steps.py | head -200
grep -n "verify_default" ./provider/migration/base_steps.py | head -10
sed -n '313,350p' ./provider/migration/base_steps.py
grep -n "run_migration_back" ./provider/migration/base_steps.py | head -5
sed -n '244,260p' ./provider/migration/base_steps.py
find ./libvirt/tests/src/migration/migration_with_disk -type f -name "*.py" | head -20
cat -n ./libvirt/tests/src/migration/migration_with_disk/migration_with_vhostuser.py
rg "detach-disk" --type py -B 2 -A 2 ./libvirt/tests/src/migration/ | head -50
rg "get_blk_devices" --type py -B 1 -A 3 | head -50
rg "domblklist" --type py -B 2 -A 2 | head -40
grep -n "post_migration_check" ./provider/migration/base_steps.py | head -5
sed -n '12,20p' ./libvirt/tests/src/migration/migration_with_disk/migration_after_hotplug.py


Prevent a false positive: verify the disk survives migration, and don't ignore detach failures.

The test can pass even if the disk detach fails on the destination, because errors are swallowed by ignore_status=True. Add an explicit disk-presence check after the migration to the destination, and change the detach call to fail fast.

🛠️ Proposed fix
         migration_obj.run_migration()
         migration_obj.verify_default()
+        # Verify disk is still present on destination after migration
+        domblk = remote.run_remote_cmd(
+            "virsh domblklist %s --details" % vm_name,
+            params,
+            ignore_status=False
+        ).stdout_text
+        if disk_target not in domblk:
+            test.fail("Disk %s missing after migration." % disk_target)
         remote_cmd = "virsh detach-disk %s %s" % (vm_name, disk_target)
-        remote.run_remote_cmd(remote_cmd, params, ignore_status=True)
+        remote.run_remote_cmd(remote_cmd, params, ignore_status=False)

@cliping (Contributor) left a comment

Others LGTM.

        Setup steps before migration
        """
        migration_obj.setup_connection()
        if not vm.is_alive():
Contributor comment:
If 'start_vm = "yes"' is set in the cfg, the VM will be started in migration_obj.setup_connection(). The code is at line 94 in base_steps.py.

Author reply:

I've removed this part now.

        migration_obj.run_migration()
        migration_obj.verify_default()
        remote_cmd = "virsh detach-disk %s %s" % (vm_name, disk_target)
        remote.run_remote_cmd(remote_cmd, params, ignore_status=True)
Contributor comment:

Why is ignore_status set to True here? If the disk detach fails, won't that have an impact?

Author reply:

Yes, it's better to use ignore_status=False.
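The distinction discussed above is common to command-runner APIs in general: ignoring the exit status silently swallows failures, while checking it surfaces them immediately. A minimal stdlib sketch of the two behaviors, using subprocess rather than avocado's remote helper (run_cmd here is an illustrative name, not the project's API):

```python
import subprocess


def run_cmd(cmd, ignore_status=False):
    """Run a shell command; raise on nonzero exit unless ignore_status is set.

    A plain-subprocess analogue of the ignore_status semantics discussed
    above, not the remote.run_remote_cmd API itself.
    """
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    if result.returncode != 0 and not ignore_status:
        raise RuntimeError("command failed (%d): %s" % (result.returncode, cmd))
    return result


# With ignore_status=True, a failing detach-style command goes unnoticed:
swallowed = run_cmd("false", ignore_status=True)  # no exception, returncode is 1

# With ignore_status=False, the same failure stops the test immediately:
try:
    run_cmd("false", ignore_status=False)
except RuntimeError as exc:
    print("failure surfaced:", exc)
```

In the test, failing fast means a broken detach on the destination is reported at the point it happens, instead of masquerading as a later verification pass.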

Automate case:
VIRT-19072 - [virtual disks] Migration after attaching disk device to guest

Signed-off-by: Meina Li <meili@redhat.com>
@meinaLi force-pushed the disk_hotplug_migration branch from 74a3ec1 to a222b5d on February 11, 2026 08:26
@coderabbitai bot left a comment
Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@libvirt/tests/cfg/migration/migration_with_disk/migration_after_hotplug.cfg`:
- Line 1: The test entry is incorrectly written as a YAML list item; remove the
leading dash so the top-level key reads exactly
migration.migration_with_disk.migration_after_hotplug: (no leading "-") to
ensure the cartesian parser registers the test entry; update the declaration in
the config where the current "-
migration.migration_with_disk.migration_after_hotplug:" appears to be plain
mapping syntax.

@@ -0,0 +1,33 @@
- migration.migration_with_disk.migration_after_hotplug:
⚠️ Potential issue | 🟠 Major

Remove the leading dash from the test entry name.

The top-level test config should be declared as migration.migration_with_disk.migration_after_hotplug: without list syntax; the current leading - can prevent the cartesian parser from registering the test entry.

🔧 Proposed fix
- - migration.migration_with_disk.migration_after_hotplug:
+ migration.migration_with_disk.migration_after_hotplug:

@meinaLi requested a review from cliping on February 11, 2026 08:48
