Conversation
Thanks for taking the time to contribute to this project. Upload Assistant is currently undergoing a complete rewrite, and no new development is being conducted on this Python source at this time. If you have come this far, please feel free to leave open any pull requests that add new sites to the source, as these can serve as the baseline for later conversion. If your pull request relates to a critical bug, it will be addressed in this code base and a new release published as needed. If your pull request only addresses a minor bug, it is unlikely to be addressed in this code base. Details for the new code base will follow at a later date.
📝 Walkthrough

Adds M-Team (MTEAM) tracker support: a new tracker module and registration, qbittorrent URL patterns, an example-config entry, a CLI Douban argument, and README "Supported Sites" table realignments.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Client
    participant App as Application
    participant FS as FileSystem
    participant MTEAM as M‑Team API
    participant DL as TorrentDownload
    Client->>App: Request upload with release meta
    App->>FS: Read media files, mediainfo, screenshots
    FS-->>App: mediainfo / images
    App->>App: Build description (Douban/IMDb, screenshots, BBCode→Markdown)
    App->>FS: Write [MTEAM]DESCRIPTION.txt
    App->>MTEAM: POST search (imdb_id) / check duplicates
    MTEAM-->>App: Duplicate list / details
    App->>App: Map metadata (category, codecs, bitrates)
    App->>MTEAM: POST create/edit (torrent + multipart metadata)
    MTEAM-->>App: torrent_id and token
    App->>DL: Request .torrent via token
    DL-->>App: .torrent file
    App-->>Client: Return upload status and torrent_id
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 📝 Generate docstrings · 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/trackers/MTEAM.py`:
- Around line 342-346: The codec mapping in the codec detection block (variable
codec) incorrectly maps "h.265" to the x264/AVC branch in the if branch that
returns x264; remove "h.265" from the first tuple and ensure it (and any other
h.265 variants like "h‑265") are present in the elif tuple that returns x265 (or
add "h.265" to the list in the x265 branch) so H.265/HEVC releases are mapped to
x265 instead of x264.
- Around line 372-385: The codec-detection logic incorrectly matches "dd+"
before the Atmos-specific case, causing DD+ Atmos to be classified as eac3;
update the branch order in the function that checks the codec string so the
atmos+dd+ check (the condition that returns atmos_eac3) runs before the generic
"dd+" and "dd " checks (the branches returning eac3 and ac3), or alternatively
adjust the conditions to explicitly check for "atmos" together with "dd+" before
returning eac3/detecting dd; reference the existing symbols: codec, atmos_eac3,
eac3, ac3, dts_hd_ma, dts, true_hd.
- Around line 81-82: The code builds tmdb_poster unconditionally which yields a
truthy broken URL and prevents the fallback; change the logic so you only
construct tmdb_poster when meta.get('tmdb_poster') is present and non-empty,
e.g. check meta.get('tmdb_poster') first and then set tmdb_poster =
f"https://image.tmdb.org/t/p/w200{meta.get('tmdb_poster')}" else use
imdb.get("cover", "") when assigning poster_url (refer to tmdb_poster,
poster_url, and meta.get('tmdb_poster') in the existing code).
- Around line 409-414: Replace the current anonymous handling that uses
meta.get("anonymous") with the same pattern used by other trackers: read
meta.get("anon") and the tracker config fallback
self.config['TRACKERS'][self.tracker].get('anon', False), compute an anon flag
(e.g. set anon = 0 if meta.get('anon') == 0 and not
self.config['TRACKERS'][self.tracker].get('anon', False) else 1) and then set
"anonymous": bool(anon) in the returned metadata so the new "anon" config key is
honored by the MTEAM logic.
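Taken together, the four fixes above can be sketched as standalone helpers. The keyword lists, option labels, and dict shapes below are illustrative assumptions drawn from these prompts, not the actual MTEAM.py code:

```python
# Sketches of the four requested fixes. Keyword lists, return labels, and
# config shapes are assumptions, not M-Team's real values.

def map_video_codec(codec: str) -> str:
    c = codec.lower()
    # Check x265/HEVC keywords first so "h.265" never matches the x264 branch.
    if any(k in c for k in ("x265", "h.265", "h265", "hevc")):
        return "x265"
    if any(k in c for k in ("x264", "h.264", "h264", "avc")):
        return "x264"
    return "other"

def map_audio_codec(codec: str) -> str:
    c = codec.lower()
    # The specific "DD+ Atmos" combination must be tested before plain "dd+".
    if "atmos" in c and "dd+" in c:
        return "atmos_eac3"
    if "dd+" in c:
        return "eac3"
    if "dd " in c:
        return "ac3"
    return "other"

def pick_poster_url(meta: dict, imdb: dict) -> str:
    # Only build the TMDB URL when a poster path actually exists.
    tmdb_path = meta.get("tmdb_poster")
    if tmdb_path:
        return f"https://image.tmdb.org/t/p/w200{tmdb_path}"
    return imdb.get("cover", "")

def resolve_anonymous(meta: dict, tracker_cfg: dict) -> bool:
    # Same pattern as other trackers: CLI flag OR the tracker-config default.
    anon = 0 if meta.get("anon", 0) == 0 and not tracker_cfg.get("anon", False) else 1
    return bool(anon)
```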
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c741feb9-fe3b-4c8c-a0e2-451f21b9c8c3
📒 Files selected for processing (5)
- README.md
- data/example-config.py
- src/torrent_clients/qbittorrent.py
- src/trackers/MTEAM.py
- src/trackersetup.py
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/trackers/MTEAM.py`:
- Around line 451-457: The branch in MTEAM.py currently ignores the result of
common.download_tracker_torrent and returns True immediately after calling it;
change it to await common.download_tracker_torrent(meta, self.tracker,
headers=dict(self.session.headers), downurl=final_download_url), capture its
return value (e.g., downloaded_path or result) and only return True if that
value is truthy (otherwise propagate False/None or handle failure), so that
MTEAM's code honors the Optional[str] return of download_tracker_torrent and
only signals success when the torrent was actually downloaded.
- Around line 295-300: The dupe link is hard-coded to
"https://kp.m-team.cc/detail/{t_id}" instead of using the configured base_url;
in the loop that builds dupes (the torrents loop that appends to dupes), replace
the hard-coded link with the configured base URL (the base_url defined earlier
for TRACKERS["MTEAM"] or the MTEAM class/instance attribute) and join it safely
(e.g., strip trailing slash or use urljoin) to produce
"{base_url}/detail/{t_id}" so overridden base_url values are honored.
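The two fixes above can be sketched as follows; `download_tracker_torrent` is stubbed as a parameter, and the default host value is the one quoted in this thread:

```python
# Sketches of both fixes. The function signatures are assumptions based on
# the review prompts, not the real helpers in the code base.
from typing import Optional

def build_detail_link(base_url: str, t_id: int) -> str:
    # Honor the configured base_url instead of hard-coding kp.m-team.cc,
    # joining safely regardless of a trailing slash.
    return f"{base_url.rstrip('/')}/detail/{t_id}"

async def download_and_report(download_tracker_torrent, meta, tracker,
                              headers, downurl) -> bool:
    # Honor the Optional[str] return: signal success only when a path came back.
    downloaded_path: Optional[str] = await download_tracker_torrent(
        meta, tracker, headers=headers, downurl=downurl
    )
    return bool(downloaded_path)
```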
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 58c8ae79-2266-4d31-853d-88c65e538e23
📒 Files selected for processing (3)
- data/example-config.py
- src/torrent_clients/qbittorrent.py
- src/trackers/MTEAM.py
Actionable comments posted: 2
🧹 Nitpick comments (2)
src/trackers/MTEAM.py (2)
37-43: Consider adding session cleanup.

The `httpx.AsyncClient` is created in `__init__` but never explicitly closed. While this may work if tracker instances are short-lived, explicitly closing the session prevents resource leaks. You could add an async context manager or cleanup method.

♻️ Optional: Add async context manager support

```diff
+    async def close(self):
+        await self.session.aclose()
+
+    async def __aenter__(self):
+        return self
+
+    async def __aexit__(self, exc_type, exc_val, exc_tb):
+        await self.close()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 37 - 43, The AsyncClient created in MTEAM.__init__ (self.session) is never closed; add explicit cleanup by implementing an async close method (e.g., async def close(self)) that awaits self.session.aclose(), and/or add async context manager methods (async def __aenter__ and async def __aexit__) to call await self.session.aclose() so callers can use "async with MTEAM(...)" or explicitly call close to prevent resource leaks; update any instantiation sites to use the new context manager or call the new close method.
635-636: Simplify timeout error message.

Line 636 accesses `self.session.timeout.write`, but since the client was created with `timeout=30.0` (a scalar), this works but is accessing internal structure. Consider using the known value or `str(self.session.timeout)` for clarity.

♻️ Simpler approach

```diff
     except httpx.TimeoutException:
-        meta["tracker_status"][self.tracker]["status_message"] = f"data error: Request timed out after {self.session.timeout.write} seconds"
+        meta["tracker_status"][self.tracker]["status_message"] = "data error: Request timed out"
         return False
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 635 - 636, The timeout except block sets meta["tracker_status"][self.tracker]["status_message"] using internal attribute self.session.timeout.write; replace that with a clearer representation such as str(self.session.timeout) or the known scalar (e.g. 30.0) to avoid reaching into internal structures—update the except httpx.TimeoutException handler in the class/method where self.session is used so the status_message reads something like "data error: Request timed out after {str(self.session.timeout)} seconds" or uses the literal timeout value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/trackers/MTEAM.py`:
- Line 21: The class-level attribute douban_id on MTEAM creates shared state;
make it an instance attribute by removing the top-level "douban_id = ''" and
adding self.douban_id = '' in MTEAM.__init__, then update all reads/writes from
MTEAM.douban_id to self.douban_id (especially the places currently using
MTEAM.douban_id at the three spots referenced in the diff: the reference near
the beginning (formerly line ~86), the write where MTEAM.douban_id is set
(formerly line ~387), and the later read/use (formerly line ~578)) so each MTEAM
instance keeps its own douban_id.
- Around line 282-283: The current remux branch only returns movie_remux when
is_remux is true and meta["category"] == "MOVIE"; add a new tv_series_remux
category constant (with the correct M-Team category ID—look up M-Team upload
rules or the Jackett indexer for the exact ID) and update the remux check in the
is_remux handling to return movie_remux for meta["category"] == "MOVIE" and
tv_series_remux for meta["category"] == "TV" (leave tv_series_hd as the default
fallback).
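A minimal sketch of both inline fixes; `MTeamSketch` is a hypothetical stand-in class, and the TV remux category ID is a deliberately unverified placeholder:

```python
# Sketch of both fixes: douban_id becomes per-instance state instead of a
# shared class attribute, and the remux mapping gains a TV branch. The
# tv_series_remux ID is a placeholder -- look it up in M-Team's upload rules.
MOVIE_REMUX = 439        # Movie/Remux -- the only ID quoted in this thread
TV_SERIES_REMUX = None   # TODO: verify real M-Team TV remux category ID
TV_SERIES_HD = -1        # placeholder default, not a real ID

class MTeamSketch:
    def __init__(self) -> None:
        self.douban_id = ""  # instance attribute, no cross-instance leakage

def remux_category(category: str) -> int:
    if category == "MOVIE":
        return MOVIE_REMUX
    if category == "TV" and TV_SERIES_REMUX is not None:
        return TV_SERIES_REMUX
    return TV_SERIES_HD  # existing fallback when no remux mapping applies
```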
---
Nitpick comments:
In `@src/trackers/MTEAM.py`:
- Around line 37-43: The AsyncClient created in MTEAM.__init__ (self.session) is
never closed; add explicit cleanup by implementing an async close method (e.g.,
async def close(self)) that awaits self.session.aclose(), and/or add async
context manager methods (async def __aenter__ and async def __aexit__) to call
await self.session.aclose() so callers can use "async with MTEAM(...)" or
explicitly call close to prevent resource leaks; update any instantiation sites
to use the new context manager or call the new close method.
- Around line 635-636: The timeout except block sets
meta["tracker_status"][self.tracker]["status_message"] using internal attribute
self.session.timeout.write; replace that with a clearer representation such as
str(self.session.timeout) or the known scalar (e.g. 30.0) to avoid reaching into
internal structures—update the except httpx.TimeoutException handler in the
class/method where self.session is used so the status_message reads something
like "data error: Request timed out after {str(self.session.timeout)} seconds"
or uses the literal timeout value.
```python
if is_remux and meta["category"] == "MOVIE":
    return movie_remux
```
🧩 Analysis chain
🌐 Web query:
M-Team tracker category list TV remux
💡 Result:
M-Team (tp.m-team.cc) is a Chinese private BitTorrent tracker specializing in HD movies, TV shows, anime, and general content. It categorizes torrents by format and type, with Remux releases (from Blu-ray/BDISO demuxed to MKV) placed under specific subcategories. Key categories relevant to TV Remux, based on Jackett indexer mappings and upload rules (data from 2020; categories may have minor updates):

- TV/Documentary: 404 ("紀錄教育" - documentaries/education)
- Separate tabs for Movies, TV, and 4K indicate TV has dedicated categories.
- Remux: dedicated "Remux" category for BD remuxes (exact ID not listed, but upload rules specify "分類/Remux|從原盤抽出的remux").
- TV Remux is likely under a TV > Remux subcategory, similar to Movie/Remux.

Full category examples from sources:

- Audio categories (dropped): 406 MV, 408 AAC/ALAC, 434 Lossless.
- Adult categories still exist.
- Upload rules: Blu-ray/Remux/HD/SD/DVDiso for video; must include screenshots, MediaInfo, and BDInfo for remuxes.

For TV Remux specifically, use the TV section on torrents.php and filter by Remux tag/source. Tools like PT-Muxer support M-Team remux uploads. No complete public ID list post-2020 was found; check the site profile or Jackett source code for the latest. Official wiki: wiki.m-team.cc (rules in Chinese).
Citations:
- 1: M-Team TP (MTTP) Category Mapping Needs to be Updated Jackett/Jackett#7824
- 2: https://wiki.m-team.cc/zh-tw/upload-rules
- 3: https://tp.m-team.cc/torrents.php
- 4: https://github.com/libertypi/mtsearch
- 5: https://github.com/TMD20/PT-Muxer
🏁 Script executed:

```shell
# First, let's look at the get_category_id method and surrounding category definitions
rg -n "tv_series|movie_remux|tv_series_remux|category" src/trackers/MTEAM.py | head -40
```

Repository: Audionut/Upload-Assistant
Length of output: 842

🏁 Script executed:

```shell
# Also check what categories are defined as class variables
rg -n "^\s*(tv_series|movie|anime|dvd|sd|remux)" src/trackers/MTEAM.py
```

Length of output: 535

🏁 Script executed:

```shell
# Get the full get_category_id method
sed -n '258,292p' src/trackers/MTEAM.py
```

Length of output: 1282

🏁 Script executed:

```shell
# Search for any comments or references about M-Team remux or TV remux
rg -i "m-team.*remux|tv.*remux" src/trackers/MTEAM.py
```

Length of output: 51

🏁 Script executed:

```shell
# Check Jackett or other tracker files for M-Team category references
fd -name "*.json" -o -name "*jackett*" | head -20
```

Length of output: 299

🏁 Script executed:

```shell
# Search for any issue comments or documentation about M-Team remux
rg -i "mteam.*remux" . --type md --type txt 2>/dev/null | head -20
```

Length of output: 51

🏁 Script executed:

```shell
# Search for M-Team category definitions in comments or code
rg "438|435|402|403|419|420|421|439" src/trackers/MTEAM.py -B2 -A2
```

Length of output: 537

🏁 Script executed:

```shell
# Check if there are any test files or examples with TV remux
find . -name "*test*" -o -name "*example*" | xargs grep -l "MTEAM\|tv.*remux" 2>/dev/null | head -10
```

Length of output: 93

🏁 Script executed:

```shell
# Search for M-Team remux category ID across all files
rg "remux.*43[0-9]|43[0-9].*remux" . --type py 2>/dev/null
```

Length of output: 133

🏁 Script executed:

```shell
# Check if there's a TV remux category ID documented anywhere
rg "tv.*remux|remux.*tv" . -i --type py -A2 -B2 | head -40
```

Length of output: 3421

🏁 Script executed:

```shell
# Look at the example config to see if there are any hints about categories
grep -n "remux\|category" ./data/example-config.py
```

Length of output: 143

🏁 Script executed:

```shell
# Check other trackers in the codebase to see if they handle TV remux and what pattern they use
rg "tv_.*remux|remux.*tv" src/trackers/ -i --type py -l
```

Length of output: 149

🏁 Script executed:

```shell
# Look at SPD tracker to see how it handles TV remux categories
sed -n '1,150p' src/trackers/SPD.py | grep -A5 -B5 "category\|remux" -i
```

Length of output: 1563

🏁 Script executed:

```shell
# Check how other trackers define TV and movie remux categories
rg "remux.*=|tv.*=|movie.*=" src/trackers/*.py | grep -i "remux\|tv.*hd\|movie.*remux"
```

Length of output: 736

🏁 Script executed:

```shell
# Search for any documentation or issues about M-Team TV remux support
git log --all --oneline --grep="MTEAM\|M-Team" 2>/dev/null | head -20
```

Length of output: 51
Add TV remux category support to M-Team uploader.
The remux category check (lines 282-283) only handles movie remuxes. TV remux releases will fall through to the default tv_series_hd category instead of a dedicated remux category. M-Team supports TV remux uploads, so a tv_series_remux category definition should be added with the correct category ID, and the remux check should handle both TV and MOVIE categories. Check M-Team's upload rules or the Jackett indexer source for the correct TV remux category ID.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/trackers/MTEAM.py` around lines 282 - 283, The current remux branch only
returns movie_remux when is_remux is true and meta["category"] == "MOVIE"; add a
new tv_series_remux category constant (with the correct M-Team category ID—look
up M-Team upload rules or the Jackett indexer for the exact ID) and update the
remux check in the is_remux handling to return movie_remux for meta["category"]
== "MOVIE" and tv_series_remux for meta["category"] == "TV" (leave tv_series_hd
as the default fallback).
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/trackers/MTEAM.py (1)
595-600: ⚠️ Potential issue | 🟠 Major

Honor the tracker-level `anon` default in the payload.

The new `TRACKERS["MTEAM"]["anon"]` setting is still ignored here; this field only reflects `meta["anon"]`. If the tracker is configured to force anonymous uploads, this can still submit a non-anonymous payload.

🔒 Suggested fix

```diff
-    "anonymous": bool(meta.get("anon", False)),
+    "anonymous": bool(
+        meta.get("anon", False)
+        or self.config["TRACKERS"][self.tracker].get("anon", False)
+    ),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 595 - 600, The payload's "anonymous" field currently uses meta.get("anon", False) and ignores the tracker-level setting TRACKERS["MTEAM"]["anon"]; update the payload construction so "anonymous" honors and can override based on the tracker config: compute a boolean like anon = True if TRACKERS["MTEAM"]["anon"] is truthy (force anon) else meta.get("anon", False), then set "anonymous": anon in the payload; modify the code that builds the payload (the block referencing "imdb", "douban", "anonymous") to use that computed value instead of meta.get("anon", False).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/trackers/MTEAM.py`:
- Around line 437-443: The code collapses API/network/JSON errors into an empty
duplicate list by returning [] after calling self.session.post and parsing
res_json; instead detect and handle failures explicitly: check response.status
(raise/propagate an exception or call the tracker's skip mechanism) when status
is not 2xx, await and catch JSON decode errors from response.json(), and when
res_json.get("code") != "0" treat it as an error (don't return [])—use the
tracker's existing skip/err handling rather than returning [] (apply same
changes around the similar block that handles lines 469-472), referencing the
variables and calls: self.session.post, response.status, response.json(),
res_json, api_url, payload.
- Around line 26-42: fetch_data() currently always calls get_douban_info() but
get_douban_id()/get_douban_info() can return without setting
self.douban_id/self.douban_info, causing AttributeError or stale data; fix by
resetting self.douban_id = None and self.douban_info = None at the start of any
per-upload flow (e.g., in fetch_data() and before upload-related methods), and
make get_douban_id() and get_douban_info() explicitly set those attributes only
when a valid match is found and otherwise leave them as None; also add guards
where self.douban_id/self.douban_info are serialized (in methods that build
description/payload) to handle None safely (skip Douban fields or use empty
values). Ensure changes reference the existing methods fetch_data,
get_douban_id, get_douban_info and attributes self.douban_id/self.douban_info.
---
Duplicate comments:
In `@src/trackers/MTEAM.py`:
- Around line 595-600: The payload's "anonymous" field currently uses
meta.get("anon", False) and ignores the tracker-level setting
TRACKERS["MTEAM"]["anon"]; update the payload construction so "anonymous" honors
and can override based on the tracker config: compute a boolean like anon = True
if TRACKERS["MTEAM"]["anon"] is truthy (force anon) else meta.get("anon",
False), then set "anonymous": anon in the payload; modify the code that builds
the payload (the block referencing "imdb", "douban", "anonymous") to use that
computed value instead of meta.get("anon", False).
```python
def __init__(self, config: Config):
    self.config = config
    self.common = COMMON(config)
    self.tmdb_manager = TmdbManager(config)
    self.tracker = "MTEAM"
    self.base_url = f"https://{self.config['TRACKERS'][self.tracker].get('base_url', 'kp.m-team.cc')}"
    self.api_base_url = "https://api.m-team.cc/api"
    self.torrent_url = f"{self.base_url}/detail/"
    self.banned_groups = ["FGT"]
    self.api_key = self.config["TRACKERS"][self.tracker].get("api_key")
    self.session = httpx.AsyncClient(
        headers={
            "x-api-key": self.api_key,
            "Accept": "*/*",
        },
        timeout=30.0,
    )
```
Reset and guard Douban state per upload.
fetch_data() always calls get_douban_info() and later serializes self.douban_id, but get_douban_id() returns without setting either field when the IMDb lookup has no Douban match. On a fresh instance that raises AttributeError; on a reused instance it can reuse the previous upload's Douban ID/info and generate the wrong description/payload.
🐛 Suggested fix
```diff
 class MTEAM:
@@
     def __init__(self, config: Config):
         self.config = config
         self.common = COMMON(config)
         self.tmdb_manager = TmdbManager(config)
         self.tracker = "MTEAM"
+        self.douban_id: Optional[int] = None
+        self.douban_info: Optional[dict[str, Any]] = None
         self.base_url = f"https://{self.config['TRACKERS'][self.tracker].get('base_url', 'kp.m-team.cc')}"
@@
     async def get_douban_info(self) -> None:
+        if self.douban_id is None:
+            return
+
         api_url = f"{self.api_base_url}/media/douban/infoV2"
@@
     async def fetch_data(self, meta: Meta) -> dict[str, Any]:
+        self.douban_id = None
+        self.douban_info = None
         await self.get_douban_id(meta)
-        await self.get_douban_info()
+        if self.douban_id is not None:
+            await self.get_douban_info()
@@
-        "douban": self.douban_id,
+        "douban": self.douban_id or "",
```

Also applies to: 81-87, 385-406, 573-597
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/trackers/MTEAM.py` around lines 26 - 42, fetch_data() currently always
calls get_douban_info() but get_douban_id()/get_douban_info() can return without
setting self.douban_id/self.douban_info, causing AttributeError or stale data;
fix by resetting self.douban_id = None and self.douban_info = None at the start
of any per-upload flow (e.g., in fetch_data() and before upload-related
methods), and make get_douban_id() and get_douban_info() explicitly set those
attributes only when a valid match is found and otherwise leave them as None;
also add guards where self.douban_id/self.douban_info are serialized (in methods
that build description/payload) to handle None safely (skip Douban fields or use
empty values). Ensure changes reference the existing methods fetch_data,
get_douban_id, get_douban_info and attributes self.douban_id/self.douban_info.
…or related methods
Actionable comments posted: 1
♻️ Duplicate comments (3)
src/trackers/MTEAM.py (3)
400-423: ⚠️ Potential issue | 🟠 Major

Reset Douban state before each upload.

Line 422 only updates `self.douban_id` on a hit, but Line 604 later builds the description from that instance field. After one successful upload, a later miss can reuse the previous release's Douban description while Line 615 sends `0` in the payload. Clear the field before lookup, or pass the ID through explicitly instead of storing per-upload state on the tracker.

Also applies to: 597-605
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 400 - 423, The tracker stores Douban state on the instance (self.douban_id) only when a lookup hits, causing stale data to be reused on subsequent misses; in get_douban_id (and the similar lookup at the other block) clear or reset self.douban_id to 0 at the start of the lookup and only set it when you have a successful match, or better yet stop using the instance field entirely: return the found douban_id from get_douban_id and update the caller to use that return value to build the description instead of reading self.douban_id.
453-490: ⚠️ Potential issue | 🟠 Major

Don't treat a failed dupe check as "no dupes".

Lines 457-459 and 485-492 both return an empty result on API/search errors. That makes an M-Team outage look like a clean dupe check and can let the upload continue. Mark the tracker as skipped/failed in these branches instead of returning `[]`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 453 - 490, The code currently treats API/search failures as "no dupes" by returning an empty dupes list; instead when the response code is not "0" or an exception is raised, set meta["skipping"] = self.tracker (to mark the tracker skipped/failed) and return None (or another sentinel that callers treat as a failed check) rather than returning []. Update the block that handles res_json.get("code") != "0" and the exception handler to assign meta["skipping"] = self.tracker (use imdb_id/self.tracker for context if desired), avoid returning dupes, and return None; keep references to dupes, meta, imdb_id, self.tracker, the async post call (self.session.post) and get_dupe_bdinfo to locate the logic.
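The skip-on-failure behavior described above can be sketched as below; the nested `data["data"]` response shape is an assumed layout for the M-Team search response, not confirmed by this thread:

```python
# Sketch: a failed dupe check marks the tracker skipped and returns None,
# so callers can distinguish "API failed" from "no duplicates found".
from typing import Optional

def handle_dupe_response(res_json: dict, meta: dict, tracker: str) -> Optional[list]:
    if res_json.get("code") != "0":
        meta["skipping"] = tracker  # failure path, not an empty dupe list
        return None
    # Assumed response layout: {"code": "0", "data": {"data": [...]}}
    return res_json.get("data", {}).get("data", [])
```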
296-297: ⚠️ Potential issue | 🟠 Major

Handle TV remuxes explicitly.

This branch only gives REMUX a dedicated category for movies. TV remux releases currently fall through to `tv_series_hd`, so they'll be miscategorized unless TV remux is intentionally unsupported. Please add the TV remux mapping or reject TV remux uploads here.
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 296 - 297, The current is_remux branch only handles movies (returns movie_remux) and lets TV remuxes fall through to tv_series_hd; update the is_remux handling to explicitly map TV remuxes or reject them. Modify the block that checks is_remux and meta["category"] to: if meta["category"] indicates TV (e.g., "TV" or the codebase's TV constant), return the appropriate tv remux category (e.g., tv_remux) or raise/return an explicit error for TV remux uploads; keep existing movie_remux return for meta["category"] == "MOVIE". Ensure you reference is_remux, meta["category"], movie_remux and tv_series_hd (and add tv_remux or the rejection path) so TV remuxes are not miscategorized.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/trackers/MTEAM.py`:
- Around line 46-52: The client currently sets the M-Team API key as a default
header on self.session which causes the Douban GET to forward the private key;
remove the x-api-key from the shared client and either (A) pass the API key
explicitly in per-request headers when calling M-Team endpoints (i.e. supply
headers={"x-api-key": self.api_key} on requests that call the M-Team API), or
(B) create a second AsyncClient (e.g. self.unauthenticated_session) without the
x-api-key and use that for the Douban GET and any third‑party hosts; update all
methods that call M-Team APIs to use per-request headers or the authenticated
client and make sure the Douban scraping method uses the unauthenticated client
so the key is never sent to external hosts.
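Option (A) can be sketched as a pair of hypothetical header builders; only the `x-api-key` header name comes from this thread, the helper functions are illustrative:

```python
# Hypothetical helpers: build auth headers only for M-Team API calls so the
# private key is never sent to Douban or other third-party hosts.
from typing import Optional

def mteam_headers(api_key: str, extra: Optional[dict] = None) -> dict:
    headers = {"x-api-key": api_key, "Accept": "*/*"}
    if extra:
        headers.update(extra)
    return headers

def third_party_headers() -> dict:
    # Headers for external hosts (e.g. the Douban fetch): never the API key.
    return {"Accept": "*/*"}
```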
---
Duplicate comments:
In `@src/trackers/MTEAM.py`:
- Around line 400-423: The tracker stores Douban state on the instance
(self.douban_id) only when a lookup hits, causing stale data to be reused on
subsequent misses; in get_douban_id (and the similar lookup at the other block)
clear or reset self.douban_id to 0 at the start of the lookup and only set it
when you have a successful match, or better yet stop using the instance field
entirely: return the found douban_id from get_douban_id and update the caller to
use that return value to build the description instead of reading
self.douban_id.
- Around line 453-490: The code currently treats API/search failures as "no
dupes" by returning an empty dupes list; instead when the response code is not
"0" or an exception is raised, set meta["skipping"] = self.tracker (to mark the
tracker skipped/failed) and return None (or another sentinel that callers treat
as a failed check) rather than returning []. Update the block that handles
res_json.get("code") != "0" and the exception handler to assign meta["skipping"]
= self.tracker (use imdb_id/self.tracker for context if desired), avoid
returning dupes, and return None; keep references to dupes, meta, imdb_id,
self.tracker, the async post call (self.session.post) and get_dupe_bdinfo to
locate the logic.
- Around line 296-297: The current is_remux branch only handles movies (returns
movie_remux) and lets TV remuxes fall through to tv_series_hd; update the
is_remux handling to explicitly map TV remuxes or reject them. Modify the block
that checks is_remux and meta["category"] to: if meta["category"] indicates TV
(e.g., "TV" or the codebase's TV constant), return the appropriate tv remux
category (e.g., tv_remux) or raise/return an explicit error for TV remux
uploads; keep existing movie_remux return for meta["category"] == "MOVIE".
Ensure you reference is_remux, meta["category"], movie_remux and tv_series_hd
(and add tv_remux or the rejection path) so TV remuxes are not miscategorized.
♻️ Duplicate comments (3)
src/trackers/MTEAM.py (3)
400-412: ⚠️ Potential issue | 🟠 Major

Reset `self.douban_id` at the start of each lookup to prevent stale cross-upload state.

At Line 410/Line 434 this method can return `0` without updating `self.douban_id`, so later description generation (Line 121) may use a previous upload's Douban ID.

🔧 Proposed fix

```diff
 async def get_douban_id(self, meta: Meta) -> int:
     douban_id: int = 0
+    self.douban_id = 0
     douban_manual = int(meta.get("douban_manual", 0))
     if douban_manual:
         console.print(f"{self.tracker}: Using manual Douban ID: {douban_manual}")
         self.douban_id = douban_manual
         return douban_manual
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 400 - 412, In get_douban_id, reset self.douban_id to 0 at the start of the method to avoid carrying a previous upload's value into later steps; specifically, in the async def get_douban_id(self, meta: Meta) method set self.douban_id = 0 before checking douban_manual or imdb_info so any early return (return 0) leaves self.douban_id consistent for downstream code that generates descriptions.
273-297: ⚠️ Potential issue | 🟠 Major — TV remux currently falls into HD category instead of remux.

Line 296 only maps remux when category is MOVIE; TV remuxes fall through to default HD (Line 306), which can miscategorize uploads.

🔧 Proposed fix shape (after confirming the TV remux category ID)

```diff
 movie_remux = 439  # Movie/Remux
+tv_series_remux = <VERIFY_MTEAM_TV_REMUX_ID>
@@
-if is_remux and meta["category"] == "MOVIE":
-    return movie_remux
+if is_remux:
+    return tv_series_remux if meta["category"] == "TV" else movie_remux
```

What is the current official M-Team category ID for TV Series Remux uploads?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 273 - 297, The remux mapping only handles MOVIE remuxes and misses TV remuxes; add a tv_series_remux constant (with the official M-Team category ID) alongside movie_remux and then change the remux logic in the tracker to return the appropriate remux category for TV: e.g. update the condition that now reads "if is_remux and meta['category'] == 'MOVIE': return movie_remux" so it handles TV as well (either by branching: if meta.get('category') == 'MOVIE' return movie_remux elif meta.get('category') == 'TV' return tv_series_remux, or by a single expression that selects movie_remux vs tv_series_remux), and use meta.get('category') for safety.
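The routing the fix shape describes can be sketched as a small mapping function. Only `MOVIE_REMUX = 439` comes from the diff above; the TV remux and HD IDs here are placeholders, not verified M-Team category IDs:

```python
MOVIE_REMUX = 439        # Movie/Remux (from the proposed fix above)
TV_SERIES_REMUX = 9999   # placeholder: verify the real M-Team ID
TV_SERIES_HD = 9998      # placeholder for the default HD bucket


def remux_category(meta):
    """Route remuxes by category instead of letting TV fall through to HD."""
    if meta.get("category") == "MOVIE":
        return MOVIE_REMUX
    if meta.get("category") == "TV":
        return TV_SERIES_REMUX
    return TV_SERIES_HD   # explicit default for anything else


movie = remux_category({"category": "MOVIE"})
tv = remux_category({"category": "TV"})
other = remux_category({"category": "FANRES"})
```

Using `meta.get("category")` rather than `meta["category"]` also keeps the selector from raising `KeyError` on malformed metadata.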
462-468: ⚠️ Potential issue | 🟠 Major — Treat dupe-check API failures as tracker-skip, not "no dupes."

At Line 467/Line 495, API or request failures return an empty dupe list without consistently setting `meta["skipping"]`. That can be interpreted as a clean dupe check.

🛡️ Proposed fix

```diff
 try:
     response = await self.session.post(api_url, json=payload, timeout=15)
+    response.raise_for_status()
     res_json = response.json()
     if res_json.get("code") != "0":
         console.print(f"[bold red]API Error: {res_json.get('message')}[/bold red]")
+        meta["skipping"] = f"{self.tracker}"
         return dupes
@@
 except Exception as e:
     console.print(f"[bold red]Error searching for IMDb ID {imdb_id} on {self.tracker}: {e}[/bold red]")
-    if not meta["unattended"] or (meta["unattended"] and meta.get("unattended_confirm", False)):
-        pass
-    else:
-        meta["skipping"] = f"{self.tracker}"
+    meta["skipping"] = f"{self.tracker}"
     return dupes
```

Also applies to: 494-501

🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 462 - 468, The dupe-check currently returns an empty dupes list on API or request failures without marking the tracker as skipped; update the error branches where you check res_json.get("code") != "0" (and the similar block at lines ~494-501) to set meta["skipping"] = True (e.g., self.meta["skipping"] = True) before returning dupes, and do the same in any exception/timeout handlers for self.session.post so API/request failures are treated as tracker-skip rather than “no dupes.” Ensure you update both the res_json error path and the other failure path in this method so skipping is consistently recorded.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/trackers/MTEAM.py`:
- Around line 400-412: In get_douban_id, reset self.douban_id to 0 at the start
of the method to avoid carrying a previous upload's value into later steps;
specifically, in the async def get_douban_id(self, meta: Meta) method set
self.douban_id = 0 before checking douban_manual or imdb_info so any early
return (return 0) leaves self.douban_id consistent for downstream code that
generates descriptions.
- Around line 273-297: The remux mapping only handles MOVIE remuxes and misses
TV remuxes; add a tv_series_remux constant (with the official M-Team category
ID) alongside movie_remux and then change the remux logic in the tracker to
return the appropriate remux category for TV: e.g. update the condition that now
reads "if is_remux and meta['category'] == 'MOVIE': return movie_remux" so it
handles TV as well (either by branching: if meta.get('category') == 'MOVIE'
return movie_remux elif meta.get('category') == 'TV' return tv_series_remux, or
by a single expression that selects movie_remux vs tv_series_remux), and use
meta.get('category') for safety.
- Around line 462-468: The dupe-check currently returns an empty dupes list on
API or request failures without marking the tracker as skipped; update the error
branches where you check res_json.get("code") != "0" (and the similar block at
lines ~494-501) to set meta["skipping"] = True (e.g., self.meta["skipping"] =
True) before returning dupes, and do the same in any exception/timeout handlers
for self.session.post so API/request failures are treated as tracker-skip rather
than “no dupes.” Ensure you update both the res_json error path and the other
failure path in this method so skipping is consistently recorded.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7e530dc8-7f16-4c71-a9e8-cd1a11e52843
📒 Files selected for processing (2)
src/args.py, src/trackers/MTEAM.py
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/trackers/MTEAM.py (2)
347-348: ⚠️ Potential issue | 🟡 Minor — TV remux releases not properly categorized.

The remux check only handles `meta["category"] == "MOVIE"`. TV remux releases will incorrectly fall through to `tv_series_hd` instead of a dedicated TV remux category. Verify if M-Team has a separate TV remux category ID:

M-Team tracker category ID TV series remux

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 347 - 348, The current conditional only returns movie_remux for remuxes, causing TV remuxes to fall through to tv_series_hd; update the remux handling in the selector that uses is_remux and meta["category"] to also detect TV remuxes (e.g., when meta["category"] is the TV/series value) and return a tv_remux/series_remux category variable instead of tv_series_hd; locate the code around the is_remux check and adjust the branch to check meta["category"] for the TV series identifier and map it to the dedicated tv remux constant (and if that constant/mapping doesn't exist, add it and ensure it matches M-Team's TV remux category ID by verifying the tracker category ID for TV series remux).
513-520: ⚠️ Potential issue | 🟡 Minor — Add `raise_for_status()` after dupe search request.

Line 514 doesn't call `raise_for_status()`, so HTTP errors (4xx/5xx) won't raise exceptions and may result in JSON parse errors or unexpected behavior.

Suggested fix

```diff
 try:
     response = await self.session.post(api_url, json=payload, timeout=15)
+    response.raise_for_status()
     res_json = response.json()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 513 - 520, The HTTP response from self.session.post (variable response) is not checked for HTTP errors before parsing JSON; call response.raise_for_status() immediately after receiving the response (right after await self.session.post(...)) and before calling response.json()/await response.json() so 4xx/5xx errors raise exceptions instead of causing JSON parse or logic errors when checking res_json; update the code around the post call in MTEAM.py where response and res_json are used to include this check.
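The pattern being requested is the standard requests/httpx idiom: surface 4xx/5xx before touching the body. A self-contained sketch with a stub response object standing in for the HTTP client (no real network, and `StubResponse`/`parse_dupe_response` are illustrative names, not the tracker's API):

```python
class StubResponse:
    """Mimics the slice of the requests/httpx response API used here."""

    def __init__(self, status_code, payload):
        self.status_code = status_code
        self._payload = payload

    def raise_for_status(self):
        if self.status_code >= 400:
            raise RuntimeError(f"HTTP {self.status_code}")

    def json(self):
        return self._payload


def parse_dupe_response(response):
    response.raise_for_status()   # 4xx/5xx raise here, before JSON parsing
    res_json = response.json()
    if res_json.get("code") != "0":
        raise ValueError(res_json.get("message", "API error"))
    return res_json["data"]


data = parse_dupe_response(StubResponse(200, {"code": "0", "data": []}))

try:
    parse_dupe_response(StubResponse(503, {}))
    raised = False
except RuntimeError:
    raised = True
```

Without the `raise_for_status()` call, a 503 error page would reach `response.json()` and fail with a confusing parse error instead of a clear HTTP error.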
🧹 Nitpick comments (2)
src/trackers/MTEAM.py (2)
73-73: Remove or guard debug print statement.

`console.print(res_json)` will output raw API response data on every request search, which may be noisy in production. Consider removing it or gating it behind `meta.get('debug')`.

Suggested fix

```diff
 response.raise_for_status()
 res_json = response.json()
-console.print(res_json)
+if meta.get('debug'):
+    console.print(res_json)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` at line 73, Remove the unguarded debug print of the API response: stop calling console.print(res_json) on every request search (remove the line or wrap it in a conditional). Instead, only emit the raw response when debugging is enabled by checking a debug flag like meta.get('debug') (or similar request-level flag) before printing; locate the console.print(res_json) call (which references res_json) in the tracker function in MTEAM.py and replace it with a guarded print or delete it for production.
716-718: Variable `data` is shadowed.

Line 717 reassigns `data` (originally the upload payload from line 692) to the download API response. This could cause confusion during debugging.

Suggested fix

```diff
 download_api_url = f"{self.api_base_url}/torrent/genDlToken?id={torrent_id}"
 response = await self.session.post(download_api_url)
-data = response.json()
-final_download_url = data.get("data")
+dl_response = response.json()
+final_download_url = dl_response.get("data")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 716 - 718, The code reuses the variable name data (originally the upload payload) for the download API response, causing shadowing and confusion; change the response parsing to use a distinct name (e.g., download_json or download_data) when doing response.json() and assign final_download_url = download_data.get("data") (and ensure you await response.json() if necessary) so upload payload and download response remain separate; update references around self.session.post(download_api_url) and final_download_url accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/trackers/MTEAM.py`:
- Around line 451-458: In get_douban_id, avoid calling
int(meta.get("douban_manual", 0)) directly; instead retrieve the raw value from
meta (use meta.get("douban_manual")), treat missing/None as 0, and attempt to
convert to int inside a try/except (or use a safe parse) so non-numeric strings
won't raise ValueError; on parse failure log/warn via console or process logger
that the provided douban_manual is invalid and fall back to 0 (do not crash),
then continue to set self.douban_id and return the validated integer; update
references to douban_manual in get_douban_id to use this validated value.
---
Duplicate comments:
In `@src/trackers/MTEAM.py`:
- Around line 347-348: The current conditional only returns movie_remux for
remuxes, causing TV remuxes to fall through to tv_series_hd; update the remux
handling in the selector that uses is_remux and meta["category"] to also detect
TV remuxes (e.g., when meta["category"] is the TV/series value) and return a
tv_remux/series_remux category variable instead of tv_series_hd; locate the code
around the is_remux check and adjust the branch to check meta["category"] for
the TV series identifier and map it to the dedicated tv remux constant (and if
that constant/mapping doesn't exist, add it and ensure it matches M-Team's TV
remux category ID by verifying the tracker category ID for TV series remux).
- Around line 513-520: The HTTP response from self.session.post (variable
response) is not checked for HTTP errors before parsing JSON; call
response.raise_for_status() immediately after receiving the response (right
after await self.session.post(...)) and before calling response.json()/await
response.json() so 4xx/5xx errors raise exceptions instead of causing JSON parse
or logic errors when checking res_json; update the code around the post call in
MTEAM.py where response and res_json are used to include this check.
---
Nitpick comments:
In `@src/trackers/MTEAM.py`:
- Line 73: Remove the unguarded debug print of the API response: stop calling
console.print(res_json) on every request search (remove the line or wrap it in a
conditional). Instead, only emit the raw response when debugging is enabled by
checking a debug flag like meta.get('debug') (or similar request-level flag)
before printing; locate the console.print(res_json) call (which references
res_json) in the tracker function in MTEAM.py and replace it with a guarded
print or delete it for production.
- Around line 716-718: The code reuses the variable name data (originally the
upload payload) for the download API response, causing shadowing and confusion;
change the response parsing to use a distinct name (e.g., download_json or
download_data) when doing response.json() and assign final_download_url =
download_data.get("data") (and ensure you await response.json() if necessary) so
upload payload and download response remain separate; update references around
self.session.post(download_api_url) and final_download_url accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b0f2364e-0ab0-4f79-80a1-50ec439bc69c
📒 Files selected for processing (2)
src/trackers/MTEAM.py, src/trackersetup.py
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/trackers/MTEAM.py (1)
520-526: ⚠️ Potential issue | 🟡 Minor — Add `raise_for_status()` before parsing dupe search response.

Unlike other API calls in this file, `search_existing()` doesn't call `response.raise_for_status()` before parsing JSON. A 4xx/5xx response will cause `response.json()` to either fail or return unexpected data.

Suggested fix

```diff
 try:
     response = await self.session.post(api_url, json=payload, timeout=15)
+    response.raise_for_status()
     res_json = response.json()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 520 - 526, In search_existing(), call response.raise_for_status() immediately after the await self.session.post(...) returns and before attempting to parse the response, then parse JSON from the successful response (use the appropriate async/await form for response.json() if the client is async); this ensures HTTP 4xx/5xx errors are raised instead of causing response.json() to fail or return unexpected data and keeps the existing checks on res_json.get("code") intact.
🧹 Nitpick comments (5)
src/trackers/MTEAM.py (5)
176-177: Minor: Use unpacking for list concatenation.

Static analysis suggests using `[title, *aka]` instead of `[title] + aka` for cleaner syntax.

Suggested fix

```diff
-translated_names = " / ".join([title] + aka) if title else " / ".join(aka)
+translated_names = " / ".join([title, *aka]) if title else " / ".join(aka)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 176 - 177, The concatenation [title] + aka used to build translated_names is less idiomatic; replace it with list unpacking [title, *aka] when title is present so translated_names = " / ".join([title, *aka]) (reference variables: title, aka, translated_names in MTEAM.py).
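Both forms build the same string, so this is purely a style change; a quick check with sample values (the title and AKA names below are illustrative only):

```python
title = "The Matrix"
aka = ["黑客帝国", "22世紀殺人網絡"]

# concatenation vs. unpacking: identical result, unpacking avoids
# allocating the intermediate single-element list [title]
old_style = " / ".join([title] + aka) if title else " / ".join(aka)
new_style = " / ".join([title, *aka]) if title else " / ".join(aka)
```

This is the pattern behind Ruff's RUF005 warning: prefer `[x, *rest]` over `[x] + rest`.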
118-121: Add error handling for missing mediainfo file.

If the mediainfo file doesn't exist (e.g., for a new upload or if generation failed), `aiofiles.open()` will raise `FileNotFoundError`, causing the upload to fail ungracefully.

Suggested fix

```diff
 if mi_path:
-    async with aiofiles.open(mi_path, encoding="utf-8") as f:
-        mediainfo = await f.read()
+    try:
+        async with aiofiles.open(mi_path, encoding="utf-8") as f:
+            mediainfo = await f.read()
+    except FileNotFoundError:
+        console.print(f"{self.tracker}: [bold yellow]Mediainfo file not found at {mi_path}[/bold yellow]")
 return mediainfo
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 118 - 121, Wrap the aiofiles.open(mi_path, ...) read in a try/except to handle FileNotFoundError so a missing mediainfo file doesn't crash the upload: around the existing mi_path check in the function (where mi_path is used and mediainfo assigned) catch FileNotFoundError from aiofiles.open/read, set mediainfo to None or an empty string, and optionally log a warning; keep the rest of the code using mediainfo tolerant of None/"" so the upload can continue gracefully.
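The tolerant-read behavior can be demonstrated with a synchronous stand-in (plain `pathlib` instead of `aiofiles`; `read_mediainfo` is a hypothetical helper, not the tracker's method):

```python
import tempfile
from pathlib import Path


def read_mediainfo(mi_path):
    """Return the mediainfo text, or '' if the path is unset or missing."""
    if not mi_path:
        return ""
    try:
        return Path(mi_path).read_text(encoding="utf-8")
    except FileNotFoundError:
        # log-and-continue rather than crashing the whole upload
        return ""


with tempfile.TemporaryDirectory() as tmp:
    existing = Path(tmp) / "MEDIAINFO.txt"
    existing.write_text("General\nFormat: Matroska\n", encoding="utf-8")
    found = read_mediainfo(existing)
    missing = read_mediainfo(Path(tmp) / "nope.txt")
```

Downstream code then only needs to tolerate an empty string, which it should already do for releases where mediainfo generation was skipped.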
45-45: Unused `banned_groups` variable.

The `banned_groups` list is defined but never referenced anywhere in this module. If it's intended for upload validation, implement the check; otherwise, remove the dead code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` at line 45, The attribute self.banned_groups in class MTEAM is dead code; either remove it or use it to block uploads—if you intend to validate uploads, add a check in the upload/validation path (e.g., in methods named validate_upload, process_upload, or upload) that checks if the incoming item's group is in self.banned_groups and then reject/log/raise (consistent with existing validation behavior), otherwise delete the self.banned_groups definition to remove the unused variable.
698-720: Add `setdefault` guards for `tracker_status` before writes.

Per the UNIT3D pattern in the codebase, tracker modules should guard against missing `tracker_status` keys before writing. This prevents `KeyError` if the status dict isn't pre-initialized.

Suggested fix

```diff
 async def upload(self, meta: Meta, _) -> bool:
+    meta.setdefault("tracker_status", {})
+    meta["tracker_status"].setdefault(self.tracker, {})
+
     data = await self.fetch_data(meta)
     response = None
```

Based on relevant code snippet from `src/trackers/UNIT3D.py:49-51` showing the defensive `setdefault()` pattern used by other trackers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 698 - 720, The upload method writes into meta["tracker_status"][self.tracker] without ensuring that tracker_status and the per-tracker dict exist; before assigning meta["tracker_status"][self.tracker]["torrent_id"] and ["status_message"] in upload, call setdefault on meta for "tracker_status" and on meta["tracker_status"] for self.tracker (mirroring the UNIT3D pattern) so the dicts are created if missing; update the upload function to perform these setdefault guards just prior to the assignments to avoid KeyError.
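The nested-`setdefault` guard is worth seeing in isolation: it creates each dict level on demand and is idempotent, so repeat calls never clobber existing entries. A minimal sketch (`ensure_tracker_status` is an illustrative helper name):

```python
def ensure_tracker_status(meta, tracker):
    """Create nested status dicts on demand so later writes can't KeyError."""
    meta.setdefault("tracker_status", {})
    meta["tracker_status"].setdefault(tracker, {})
    return meta["tracker_status"][tracker]


meta = {}  # status dict not pre-initialized
status = ensure_tracker_status(meta, "MTEAM")
status["torrent_id"] = 12345

# a second call returns the same dict instead of replacing it
again = ensure_tracker_status(meta, "MTEAM")
```

Without the guards, `meta["tracker_status"]["MTEAM"]["torrent_id"] = 12345` on a fresh `meta` raises `KeyError: 'tracker_status'`.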
309-311: No circular dependency; late import is acceptable.

The `from src.bbcode import BBCODE` import at line 309 is not a circular import workaround; `src/bbcode.py` has no dependencies on `MTEAM.py`. BBCODE is used only in this single function, and the late import pattern (lazy loading) is a valid optimization that avoids loading the module unnecessarily. Moving it to module level is optional and would not significantly improve performance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 309 - 311, The local import "from src.bbcode import BBCODE" inside MTEAM.py is fine—there's no circular dependency and BBCODE is used only in this function, so keep the late import and the bbcode = BBCODE() instantiation as-is; if you prefer a micro-optimization you may optionally move the import to module level and instantiate as a module-level singleton, but no code change is required to address circular-import concerns.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/trackers/MTEAM.py`:
- Around line 60-78: get_category_id() can return None, so current filtering "if
item.get('category') != category: continue" will drop all items when category is
None; change the loop in the async block that iterates over data_list to only
apply the category check when category is not None (e.g., use a guard like "if
category is not None and item.get('category') != category: continue"), leaving
items unfiltered when get_category_id() returns None; update any related
comments and ensure you reference get_category_id and the data_list iteration in
MTEAM.py.
- Around line 552-558: The exception handler in the IMDb search block has
inverted logic for setting meta["skipping"]; change it so that skipping is set
when we're running unattended without confirmation. Specifically, in the except
Exception block that references imdb_id and self.tracker, replace the current
condition with one that sets meta["skipping"] = f"{self.tracker}" when
meta.get("unattended", False) is True and meta.get("unattended_confirm", False)
is False; leave interactive/unconfirmed flows alone (i.e., do not set skipping
when not unattended or when unattended_confirm is True).
---
Duplicate comments:
In `@src/trackers/MTEAM.py`:
- Around line 520-526: In search_existing(), call response.raise_for_status()
immediately after the await self.session.post(...) returns and before attempting
to parse the response, then parse JSON from the successful response (use the
appropriate async/await form for response.json() if the client is async); this
ensures HTTP 4xx/5xx errors are raised instead of causing response.json() to
fail or return unexpected data and keeps the existing checks on
res_json.get("code") intact.
---
Nitpick comments:
In `@src/trackers/MTEAM.py`:
- Around line 176-177: The concatenation [title] + aka used to build
translated_names is less idiomatic; replace it with list unpacking [title, *aka]
when title is present so translated_names = " / ".join([title, *aka]) (reference
variables: title, aka, translated_names in MTEAM.py).
- Around line 118-121: Wrap the aiofiles.open(mi_path, ...) read in a try/except
to handle FileNotFoundError so a missing mediainfo file doesn't crash the
upload: around the existing mi_path check in the function (where mi_path is used
and mediainfo assigned) catch FileNotFoundError from aiofiles.open/read, set
mediainfo to None or an empty string, and optionally log a warning; keep the
rest of the code using mediainfo tolerant of None/"" so the upload can continue
gracefully.
- Line 45: The attribute self.banned_groups in class MTEAM is dead code; either
remove it or use it to block uploads—if you intend to validate uploads, add a
check in the upload/validation path (e.g., in methods named validate_upload,
process_upload, or upload) that checks if the incoming item's group is in
self.banned_groups and then reject/log/raise (consistent with existing
validation behavior), otherwise delete the self.banned_groups definition to
remove the unused variable.
- Around line 698-720: The upload method writes into
meta["tracker_status"][self.tracker] without ensuring that tracker_status and
the per-tracker dict exist; before assigning
meta["tracker_status"][self.tracker]["torrent_id"] and ["status_message"] in
upload, call setdefault on meta for "tracker_status" and on
meta["tracker_status"] for self.tracker (mirroring the UNIT3D pattern) so the
dicts are created if missing; update the upload function to perform these
setdefault guards just prior to the assignments to avoid KeyError.
- Around line 309-311: The local import "from src.bbcode import BBCODE" inside
MTEAM.py is fine—there's no circular dependency and BBCODE is used only in this
function, so keep the late import and the bbcode = BBCODE() instantiation as-is;
if you prefer a micro-optimization you may optionally move the import to module
level and instantiate as a module-level singleton, but no code change is
required to address circular-import concerns.
```python
category = self.get_category_id(meta)

payload = {
    "pageNumber": 1,
    "pageSize": 10,
    "keyword": meta["title"],
    "take": False,
}

try:
    response = await self.session.post(self.requests_url, json=payload, timeout=15)
    response.raise_for_status()
    res_json = response.json()

    data_list = res_json.get("data", {}).get("data", [])

    for item in data_list:
        if item.get("category") != category:
            continue
```
Handle None category in request filtering.
get_category_id() returns Optional[int], so category can be None. Comparing item.get("category") != category when category is None will always be truthy (since M-Team categories are integers), causing all requests to be skipped.
Suggested fix

```diff
 async def get_requests(self, meta: dict[str, Any]) -> list[dict[str, str]]:
     requests: list[dict[str, str]] = []
     category = self.get_category_id(meta)
+    if category is None:
+        console.print(f"{self.tracker}: [bold yellow]Could not determine category, skipping request search.[/bold yellow]")
+        return requests
     payload = {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/trackers/MTEAM.py` around lines 60 - 78, get_category_id() can return
None, so current filtering "if item.get('category') != category: continue" will
drop all items when category is None; change the loop in the async block that
iterates over data_list to only apply the category check when category is not
None (e.g., use a guard like "if category is not None and item.get('category')
!= category: continue"), leaving items unfiltered when get_category_id() returns
None; update any related comments and ensure you reference get_category_id and
the data_list iteration in MTEAM.py.
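The alternative fix suggested in the agent prompt — guard the comparison rather than bail out early — can be sketched as a small filter (`filter_requests` and the integer category values are illustrative, not M-Team's real IDs):

```python
def filter_requests(data_list, category):
    """Apply the category check only when a category was actually resolved."""
    kept = []
    for item in data_list:
        # with the buggy bare `!=` check, category=None would drop everything,
        # since integer categories never equal None
        if category is not None and item.get("category") != category:
            continue
        kept.append(item)
    return kept


items = [{"category": 401}, {"category": 419}]
only_401 = filter_requests(items, 401)
unfiltered = filter_requests(items, None)
```

Either approach works; the guard keeps results unfiltered when the category is unknown, while the early-return variant skips the search entirely.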
```python
except Exception as e:
    console.print(f"[bold red]Error searching for IMDb ID {imdb_id} on {self.tracker}: {e}[/bold red]")
    if not meta["unattended"] or (meta["unattended"] and meta.get("unattended_confirm", False)):
        pass
    else:
        meta["skipping"] = f"{self.tracker}"
```
Inverted skipping logic in exception handler.
The condition `if not meta["unattended"] or (meta["unattended"] and meta.get("unattended_confirm", False))` guards the pass branch, so skipping is only set in the else. Stating the skip condition directly is clearer: API errors in unattended mode without confirmation should trigger skipping to prevent silent failures.
Suggested fix
```diff
 except Exception as e:
     console.print(f"[bold red]Error searching for IMDb ID {imdb_id} on {self.tracker}: {e}[/bold red]")
-    if not meta["unattended"] or (meta["unattended"] and meta.get("unattended_confirm", False)):
-        pass
-    else:
-        meta["skipping"] = f"{self.tracker}"
+    if meta["unattended"] and not meta.get("unattended_confirm", False):
+        meta["skipping"] = self.tracker
     return dupes
```

🧰 Tools
🪛 Ruff (0.15.7)
[warning] 552-552: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/trackers/MTEAM.py` around lines 552 - 558, The exception handler in the
IMDb search block has inverted logic for setting meta["skipping"]; change it so
that skipping is set when we're running unattended without confirmation.
Specifically, in the except Exception block that references imdb_id and
self.tracker, replace the current condition with one that sets meta["skipping"]
= f"{self.tracker}" when meta.get("unattended", False) is True and
meta.get("unattended_confirm", False) is False; leave interactive/unconfirmed
flows alone (i.e., do not set skipping when not unattended or when
unattended_confirm is True).
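Boolean branch conditions like this are easiest to audit as a truth table. A sketch of the suggested-fix condition over all four flag combinations:

```python
def should_skip(unattended, confirm):
    """Suggested-fix condition: skip only when unattended and unconfirmed."""
    return unattended and not confirm


# enumerate every (unattended, unattended_confirm) combination
table = {
    (u, c): should_skip(u, c)
    for u in (False, True)
    for c in (False, True)
}
```

Only `(unattended=True, confirm=False)` skips; interactive runs and confirmed unattended runs fall through unchanged, which matches the behavior the agent prompt describes.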
♻️ Duplicate comments (1)
src/trackers/MTEAM.py (1)
726-736: ⚠️ Potential issue | 🟠 Major — Check the return value of `download_tracker_torrent`.

The code calls `download_tracker_torrent()` but ignores its return value (which is `Optional[str]`). If the upload succeeds but the torrent download fails, the method still returns `True`, potentially causing the downstream client injection to fail because no `[MTEAM].torrent` file exists.

Suggested fix

```diff
 if final_download_url:
-    await self.common.download_tracker_torrent(
+    downloaded = await self.common.download_tracker_torrent(
         meta,
         self.tracker,
         headers=dict(self.session.headers),
         downurl=final_download_url,
     )
-    return True
+    if downloaded:
+        return True
+    console.print(f"{self.tracker}: Upload succeeded but failed to download tracker torrent.")
+    meta["tracker_status"][self.tracker]["status_message"] = "Upload succeeded but torrent download failed"
+    return False
 console.print(f"{self.tracker}: Failed to get download URL from API response.")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` around lines 726 - 736, The code currently ignores download_tracker_torrent's return (Optional[str]) and returns True even if the torrent download failed; update the block in the MTEAM upload/download flow to capture the result of download_tracker_torrent(meta, self.tracker, headers=..., downurl=final_download_url), check that it returned a non-empty path/string, and only return True when that result is truthy; if it is falsy, set meta["tracker_status"][self.tracker]["status_message"] to an appropriate failure message (e.g., "Torrent download failed") and return False so downstream logic won't assume the .torrent exists. Ensure you reference the existing variables final_download_url, download_tracker_torrent, and meta/self.tracker when making the change.
🧹 Nitpick comments (1)
src/trackers/MTEAM.py (1)
177-177: Consider using list unpacking for cleaner concatenation.

Per Ruff suggestion, list unpacking is preferred over concatenation.

Suggested fix

```diff
-translated_names = " / ".join([title] + aka) if title else " / ".join(aka)
+translated_names = " / ".join([title, *aka]) if title else " / ".join(aka)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/trackers/MTEAM.py` at line 177, Replace the conditional list concatenation used to build translated_names with list unpacking: construct a single list that includes title only if present followed by all items from aka, then join with " / ". Update the assignment to translated_names (using title and aka) to use unpacking so the expression is cleaner and avoids explicit list concatenation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/trackers/MTEAM.py`:
- Around line 726-736: The code currently ignores download_tracker_torrent's
return (Optional[str]) and returns True even if the torrent download failed;
update the block in the MTEAM upload/download flow to capture the result of
download_tracker_torrent(meta, self.tracker, headers=...,
downurl=final_download_url), check that it returned a non-empty path/string, and
only return True when that result is truthy; if it is falsy, set
meta["tracker_status"][self.tracker]["status_message"] to an appropriate failure
message (e.g., "Torrent download failed") and return False so downstream logic
won't assume the .torrent exists. Ensure you reference the existing variables
final_download_url, download_tracker_torrent, and meta/self.tracker when making
the change.
---
Nitpick comments:
In `@src/trackers/MTEAM.py`:
- Line 177: Replace the conditional list concatenation used to build
translated_names with list unpacking: construct a single list that includes
title only if present followed by all items from aka, then join with " / ".
Update the assignment to translated_names (using title and aka) to use unpacking
so the expression is cleaner and avoids explicit list concatenation.
Summary by CodeRabbit
New Features
Documentation