Appdata Cleanup Plus is an Unraid plugin for finding orphaned Docker appdata folders, reviewing why they were surfaced, and then quarantining or deleting only the paths you explicitly choose.
It is built around conservative cleanup: grouped scan results, server-side action snapshots, quarantine-first workflow, restore and purge management, hard safety locks for risky filesystem targets, and ZFS-aware permanent delete for dataset-backed appdata roots.
Quick links: Install | Update | What It Detects | ZFS-Backed Appdata | Safety Model | Quarantine and Restore | Stored State | Development | Support
- Find leftover appdata folders from removed containers without manually digging through shares.
- Cross-check saved Docker templates against live container mounts when Docker is online.
- Catch direct-child orphan folders in the configured appdata share even when no saved template still references them.
- Resolve mapped `/mnt/user/...`-style appdata paths to exact ZFS dataset mountpoints for dataset-aware permanent delete.
- Review grouped results with search, risk filtering, sorting, badges, and progressive stat loading instead of a raw folder dump.
- Default real actions to quarantine so cleanup stays reversible until you intentionally purge.
## What It Detects

The scan combines multiple sources into one result set:
- Saved Docker template references from `/boot/config/plugins/dockerMan/templates-user/`
- Live Docker host mount paths from installed containers when Docker is online
- Direct child folder discovery inside the configured appdata share when Docker is online
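The cross-check itself reduces to a set difference: template-referenced paths that no live container still mounts become candidates. A minimal sketch of that idea (the example paths are hypothetical, and this is not the plugin's actual code):

```shell
# Illustrative only: orphan candidates = template paths with no matching live mount.
template_paths=$(mktemp); live_mounts=$(mktemp)
printf '%s\n' /mnt/user/appdata/plex /mnt/user/appdata/sonarr /mnt/user/appdata/oldapp | sort > "$template_paths"
printf '%s\n' /mnt/user/appdata/plex /mnt/user/appdata/sonarr | sort > "$live_mounts"
# comm -23 keeps lines unique to the first (template) list
orphans=$(comm -23 "$template_paths" "$live_mounts")
echo "$orphans"
rm -f "$template_paths" "$live_mounts"
```

In the real scan this difference is only a starting point; the exclusion rules below still filter the candidate set.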
Rows are grouped in the UI as:
- Saved template references
- Appdata share discovery
- Ignored
The scan automatically excludes or blocks paths that should not be treated as normal appdata cleanup candidates, including:
- Active live container mappings
- The plugin quarantine root
- Share roots and mount points
- Paths containing symlinked segments
- Unsafe canonical targets
- VM Manager storage paths read from `domain.cfg`:
  - vdisk storage
  - ISO storage
  - libvirt storage
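As an illustration of why symlinked segments are blocked: if any segment of a candidate path is a symlink, its canonical form differs from the literal path, so a naive delete could follow the link somewhere unintended. A rough sketch of such a check (assumed logic, not the plugin's code):

```shell
# Illustrative check: reject a path whose canonical form differs from the literal path.
d=$(mktemp -d)
mkdir -p "$d/real/sub"
ln -s "$d/real" "$d/link"
p="$d/link/sub"
verdict="allowed"
if [ "$(readlink -f "$p")" != "$p" ]; then
  verdict="blocked: symlinked segment"
fi
echo "$verdict"
rm -rf "$d"
```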
If Docker is offline, the plugin can still surface template-backed candidates, but those rows should be reviewed more carefully because active container mounts cannot be verified at scan time.
## ZFS-Backed Appdata

The plugin supports ZFS-backed appdata when your user-share path and your real dataset mount root are different, for example:
- User share root: `/mnt/user/appdata`
- Dataset mount root: `/mnt/docker_vm_nvme/appdata`
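Conceptually, a mapping is a prefix rewrite from the share root to the dataset root. A minimal sketch using the example roots above (illustrative, not the plugin's resolution code):

```shell
# Rewrite a user-share path to its dataset mountpoint via the configured mapping.
share_root=/mnt/user/appdata
dataset_root=/mnt/docker_vm_nvme/appdata
candidate=/mnt/user/appdata/oldapp   # hypothetical orphan row
case "$candidate" in
  "$share_root"/*) resolved="${dataset_root}${candidate#"$share_root"}" ;;
  *)               resolved="" ;;    # no mapping match: not treated as ZFS-backed
esac
echo "$resolved"
```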
Current ZFS support includes:
- A dedicated `ZFS mappings` workflow for mapping the Unraid share root to the real dataset mount root
- Exact dataset mountpoint resolution using configured mappings
- Case-sensitive dataset handling
- Dry run previews that choose standard destroy or recursive destroy only when required
- Recursive impact summaries for child datasets and snapshots
- Permanent delete using `zfs destroy` instead of normal folder deletion for resolved dataset-backed rows
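The standard-versus-recursive choice in the dry run comes down to whether the target dataset has child datasets. A real implementation would ask ZFS directly (for example via `zfs list -r`), but the decision itself can be sketched against a fixed dataset list (names hypothetical):

```shell
# Choose `zfs destroy` vs `zfs destroy -r` based on whether children exist.
target=pool/appdata/oldapp
datasets='pool/appdata/oldapp
pool/appdata/oldapp/cache'             # stand-in for a dataset listing
children=$(printf '%s\n' "$datasets" | grep -c "^$target/")
if [ "$children" -gt 0 ]; then
  cmd="zfs destroy -r $target"
else
  cmd="zfs destroy $target"
fi
echo "$cmd"
```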
Important limits:
- `ZFS mappings` is only shown after `Enable ZFS dataset delete` is enabled in Safety settings
- ZFS-backed rows still require `Enable permanent delete` before they become actionable
- ZFS-backed rows cannot be quarantined in the current implementation
- Only exact dataset mountpoint matches are treated as ZFS-backed delete targets
- ZFS mappings affect delete resolution, not scan discovery
Recommended workflow:
- Enable `ZFS dataset delete` in Safety settings.
- Open `ZFS mappings`.
- Add a mapping from the Unraid share root to the real dataset mount root.
- Rescan.
- Use `Dry run` first.
- Turn on `Enable permanent delete` only when you intentionally want a dataset destroy.
## Features

- Compact Unraid Settings page built around one scan and one global action bar
- Grouped result sections for template-backed rows, filesystem discovery rows, and ignored rows
- Search, risk filtering, sort order, and section-aware rendering
- Badge-based row summaries with `Ready`, `Review`, and `Locked` action states plus source and reason badges
- Progressive stat hydration so rows can render first and fill in heavier size data afterward
- Bulk selection, `Select visible`, and `Select all`
- Quarantine manager with bulk restore and purge actions
- Restore collision handling with `Skip conflicts`, `Restore with suffix`, and `Review conflict`
- Audit history for quarantine, restore, purge, and cleanup activity
- Ignore and restore controls for paths you do not want surfaced in the active list
## Safety Model

Safety is the core behavior, not an afterthought.
- Real actions default to `Quarantine selected`, not permanent delete
- `Dry run` previews the current action without changing anything
- `Allow outside-share cleanup` must be enabled before outside-share review rows can be acted on
- `Enable permanent delete` must be enabled before irreversible delete becomes the primary action
- `Enable ZFS dataset delete` must be enabled before ZFS-backed rows can resolve to dataset destroy actions
- Locked rows stay visible, but they are not selectable
- Actions run from server-side scan snapshots using candidate ids instead of trusting posted client paths
- CSRF validation is required for action requests
- Share roots, mount points, symlinked path segments, VM Manager managed paths, and other unsafe targets are blocked at action time
- Restore operations preflight collisions before moving folders back out of quarantine
- ZFS-backed rows remain blocked unless both the ZFS toggle and permanent delete mode are enabled
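The snapshot rule means an action request only carries candidate ids; the server resolves each id against its own stored snapshot and ignores any path the client might send. A toy sketch of that resolution step (the file format and ids here are made up for illustration):

```shell
# Resolve a posted candidate id against a server-side snapshot, never a client path.
snapshot=$(mktemp)
printf '%s\n' 'c1 /mnt/user/appdata/oldapp' 'c2 /mnt/user/appdata/stale' > "$snapshot"
posted_id=c2                      # the only thing trusted from the request
resolved=$(awk -v id="$posted_id" '$1 == id { print $2 }' "$snapshot")
echo "$resolved"
rm -f "$snapshot"
```

An id that is not in the snapshot resolves to nothing, so stale or forged requests simply have no target to act on.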
## Quarantine and Restore

Quarantine is the default real action path for a reason: it gives you a reversible buffer before permanent removal.
Quarantine workflow:
- Move selected folders into the plugin quarantine root
- Track quarantined entries in the built-in quarantine manager
- Restore entries later to their original path
- Purge entries permanently only when you intend to
Restore behavior:
- Single and bulk restore are supported
- If the original path already exists, the plugin stops and shows a conflict flow
- You can skip the conflicting restore, restore beside it with a generated suffix, or review the conflict before continuing
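The suffix strategy can be pictured as probing for a free name next to the original path before moving the folder back. A self-contained sketch (the `-restored-N` suffix format is an assumption, not the plugin's exact naming):

```shell
# Restore a quarantined folder beside an existing conflict using a generated suffix.
workdir=$(mktemp -d)
mkdir -p "$workdir/appdata/oldapp" "$workdir/quarantine/oldapp"  # conflict scenario
target="$workdir/appdata/oldapp"
restore_to="$target"
n=1
while [ -e "$restore_to" ]; do          # probe until a free name is found
  restore_to="$target-restored-$n"
  n=$((n + 1))
done
mv "$workdir/quarantine/oldapp" "$restore_to"
result=$(basename "$restore_to")
echo "$result"
rm -rf "$workdir"
```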
Default quarantine root:
- Preferred: inside the configured appdata share at `/.appdata-cleanup-plus-quarantine`
- Fallback: `/mnt/user/system/.appdata-cleanup-plus-quarantine`
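The preferred-then-fallback choice amounts to a simple existence check on the configured share. A sketch under that assumption (illustrative; a temp directory stands in for the appdata share):

```shell
# Pick the quarantine root: inside the share if it exists, else the system fallback.
share=$(mktemp -d)                     # stand-in for the configured appdata share
fallback=/mnt/user/system/.appdata-cleanup-plus-quarantine
if [ -d "$share" ]; then
  root="$share/.appdata-cleanup-plus-quarantine"
else
  root="$fallback"
fi
case "$root" in
  "$share"/*) mode=preferred ;;
  *)          mode=fallback ;;
esac
echo "$mode"
rm -rf "$share"
```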
## Requirements

- Unraid 7.0.0+
- Docker templates stored in the normal Unraid templates-user path for saved-template detection
- A current major browser:
  - Chrome
  - Edge (Chromium)
  - Firefox
  - Safari
- Manual review before destructive actions
Compatibility notes:
- The plugin does not depend on the Community Applications helper runtime
- Stable `main` builds point to `main` metadata and archives
- Testing `dev` builds point to `dev` metadata and archives
- Package versions use `YYYY.MM.DD.UU` so same-day releases sort correctly in Unraid
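Because every field in `YYYY.MM.DD.UU` is fixed-width and zero-padded (assuming a two-digit same-day counter), a plain lexicographic sort already orders releases correctly, including two builds from the same day. The version strings below are made up for illustration:

```shell
# Fixed-width version strings sort correctly with a plain lexicographic sort.
sorted=$(printf '%s\n' 2025.06.01.02 2025.05.30.01 2025.06.01.01 | sort)
echo "$sorted"
```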
## Install

Stable main channel:

```
plugin install https://raw.githubusercontent.com/alexphillips-dev/Appdata-Cleanup-Plus/main/plugins/appdata.cleanup.plus.plg
```

Dev testing channel:

```
plugin install https://raw.githubusercontent.com/alexphillips-dev/Appdata-Cleanup-Plus/dev/plugins/appdata.cleanup.plus.plg
```

Community Applications XML:

```
https://raw.githubusercontent.com/alexphillips-dev/Appdata-Cleanup-Plus/main/appdata.cleanup.plus.xml
```
Commit-pinned install pattern:

```
https://raw.githubusercontent.com/alexphillips-dev/Appdata-Cleanup-Plus/<commit>/plugins/appdata.cleanup.plus.plg
```
## Update

- Preferred: `Plugins -> Check for Updates`
- Manual: rerun the same `plugin install` command for the channel you track
- If GitHub or Unraid caching delays detection, install once from a commit-pinned raw URL, then return to normal `main` or `dev` branch tracking
## Getting Started

- Open `Settings -> Appdata Cleanup Plus`.
- Click `Rescan`.
- Review grouped sections, row badges, size/age metadata, and lock reasons.
- Use `Dry run` if you want a no-change preview of the current action.
- Leave permanent delete off unless you intentionally want irreversible removal.
- Quarantine selected folders first, then use the quarantine manager to restore or purge as needed.
## Stored State

Runtime state is stored under:

```
/boot/config/plugins/appdata.cleanup.plus/
```
Important files and directories:
- `ignored-paths.json`: ignored rows hidden from the default result list
- `cleanup-audit.jsonl`: append-only audit log for cleanup, quarantine, restore, and purge activity
- `safety-settings.json`: persisted safety toggle state
- `quarantine-records.json`: tracked quarantine entries
- `path-stats-cache.json`: cached size and mtime lookups
- `snapshots/`: session-scoped action snapshots used for server-side action validation
## Development

Build the package and refresh manifest/XML metadata:

```
bash pkg_build.sh
```

Preview the next computed package version without writing release files:

```
bash pkg_build.sh --dry-run
```

Run backend behavior smoke tests:

```
bash scripts/test_behavior.sh
```

Validate manifest, CA metadata, archive, and branch-aware raw URLs:

```
bash scripts/release_guard.sh
```

Validate CA-facing repository readiness:

```
bash scripts/ca_readiness_guard.sh
```

Ensure the current manifest version has a top changelog entry:

```
bash scripts/ensure_plg_changes_entry.sh
```

Promote dev into main, build the stable package, tag the release, publish or update the GitHub release, verify live raw metadata, and print the exact cache-busting install command:

```
bash scripts/release_main.sh
```

After promoting main, sync release artifacts and branch ancestry back into dev while restoring dev feed URLs:

```
bash scripts/sync_main_to_dev.sh
```

Repository layout:

- `plugins/appdata.cleanup.plus.plg`: Unraid plugin manifest and changelog
- `appdata.cleanup.plus.xml`: Community Applications XML metadata
- `source/appdata.cleanup.plus/`: packaged plugin source
- `archive/`: built `.txz` packages
- `docs/images/`: banner and repository documentation images
- `tests/`: behavior smoke coverage and fixtures
## Support

General usage, screenshots, and testing feedback:

- Unraid forum thread: https://forums.unraid.net/topic/197975-plugin-appdata-cleanup-plus/
Repository and release tracking:
- Repo: https://github.com/alexphillips-dev/Appdata-Cleanup-Plus
- Latest release: https://github.com/alexphillips-dev/Appdata-Cleanup-Plus/releases/latest
GitHub issue forms:
- Bug report: reproducible plugin bugs
- Feature request: workflow, safety, UI, or maintainer-tooling improvements
- Release / update problem: install failures, stale update detection, branch tracking issues, or raw URL problems
Blank issues are disabled so reports get routed into the right support path.
