
Add support for M-Team / lajidui / LongPT / ptcafe / PTFans / PT GTK / RailgunPT #1324

Open
wastaken7 wants to merge 16 commits into master from mteam

Conversation

@wastaken7
Collaborator

@wastaken7 wastaken7 commented Mar 27, 2026

Summary by CodeRabbit

  • New Features

    • Added M-Team (MTEAM) tracker support: upload workflow, Douban lookup, duplicate detection, rich release descriptions and mediainfo handling.
    • Improved torrent-client tracker recognition to match M-Team host patterns.
    • Added CLI option to supply a Douban ID.
  • Documentation

    • Updated "Supported Sites" table with reorganized site/acronym pairings.
    • Extended example configuration to include MTEAM settings.

@github-actions

Thanks for taking the time to contribute to this project. Upload Assistant is currently undergoing a complete rewrite, and no new development is being conducted on this Python source at this time.

If you have come this far, please feel free to leave open any pull requests that add new sites, as these can serve as a baseline for the later conversion.

If your pull request relates to a critical bug, it will be addressed in this code base and a new release published as needed.

If your pull request only addresses a minor bug, it is not likely to be addressed in this code base.

Details for the new code base will follow at a later date.

@coderabbitai

coderabbitai Bot commented Mar 27, 2026

📝 Walkthrough

Walkthrough

Adds M-Team (MTEAM) tracker support: new tracker module and registration, qbittorrent URL patterns, example-config entry, CLI Douban arg, and README “Supported Sites” table realignments.

Changes

Cohort / File(s) Summary
Documentation
README.md
Reordered/right-column pairings in the "Supported Sites" table; adjusted site name ↔ acronym alignments.
Configuration
data/example-config.py
Added MTEAM to TRACKERS comment list and a config["TRACKERS"]["MTEAM"] block (link_dir_name, api_key, anon, base_url).
Torrent client URL mapping
src/torrent_clients/qbittorrent.py
Added mteam hostname patterns mapping several host variants to the mteam tracker id; minor quoting style changes.
Tracker registration
src/trackersetup.py
Imported MTEAM, registered 'MTEAM': MTEAM in tracker_class_map, added 'MTEAM' to other_api_trackers, and extended custom-request dispatch to call get_requests.
CLI args
src/args.py
Added -douban/--douban argument (stored as douban_manual, int, single value).
Tracker implementation
src/trackers/MTEAM.py
New MTEAM tracker class: async httpx client, mediainfo/BDMV parsing, BBCode→Markdown conversion, description generation (Douban/IMDb fallbacks), category/codec/bitrate mapping, duplicate search/BDInfo enrichment, fetch_data/upload flows (create/edit API, token download), debug-mode simulation, and detailed error handling.

Sequence Diagram

sequenceDiagram
    participant Client as Client
    participant App as Application
    participant FS as FileSystem
    participant MTEAM as M‑Team API
    participant DL as TorrentDownload

    Client->>App: Request upload with release meta
    App->>FS: Read media files, mediainfo, screenshots
    FS-->>App: mediainfo / images
    App->>App: Build description (Douban/IMDb, screenshots, BBCode→Markdown)
    App->>FS: Write [MTEAM]DESCRIPTION.txt
    App->>MTEAM: POST search (imdb_id) / check duplicates
    MTEAM-->>App: Duplicate list / details
    App->>App: Map metadata (category, codecs, bitrates)
    App->>MTEAM: POST create/edit (torrent + multipart metadata)
    MTEAM-->>App: torrent_id and token
    App->>DL: Request .torrent via token
    DL-->>App: .torrent file
    App-->>Client: Return upload status and torrent_id

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


Suggested reviewers

  • Audionut

"I nibble code and hop with glee,
M-Team joins our upload tree,
BBCode turns to markdown bright,
Torrents hum into the night,
Hooray — another tracker party!" 🐇✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): docstring coverage is 4.17%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the pull request title 'Add support for M-Team' clearly and directly describes the main change: introducing a new tracker module for M-Team with corresponding configuration and integration.




@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/trackers/MTEAM.py`:
- Around line 342-346: The codec mapping in the codec detection block (variable
codec) incorrectly maps "h.265" to the x264/AVC branch in the if branch that
returns x264; remove "h.265" from the first tuple and ensure it (and any other
h.265 variants like "h‑265") are present in the elif tuple that returns x265 (or
add "h.265" to the list in the x265 branch) so H.265/HEVC releases are mapped to
x265 instead of x264.
- Around line 372-385: The codec-detection logic incorrectly matches "dd+"
before the Atmos-specific case, causing DD+ Atmos to be classified as eac3;
update the branch order in the function that checks the codec string so the
atmos+dd+ check (the condition that returns atmos_eac3) runs before the generic
"dd+" and "dd " checks (the branches returning eac3 and ac3), or alternatively
adjust the conditions to explicitly check for "atmos" together with "dd+" before
returning eac3/detecting dd; reference the existing symbols: codec, atmos_eac3,
eac3, ac3, dts_hd_ma, dts, true_hd.
- Around line 81-82: The code builds tmdb_poster unconditionally which yields a
truthy broken URL and prevents the fallback; change the logic so you only
construct tmdb_poster when meta.get('tmdb_poster') is present and non-empty,
e.g. check meta.get('tmdb_poster') first and then set tmdb_poster =
f"https://image.tmdb.org/t/p/w200{meta.get('tmdb_poster')}" else use
imdb.get("cover", "") when assigning poster_url (refer to tmdb_poster,
poster_url, and meta.get('tmdb_poster') in the existing code).
- Around line 409-414: Replace the current anonymous handling that uses
meta.get("anonymous") with the same pattern used by other trackers: read
meta.get("anon") and the tracker config fallback
self.config['TRACKERS'][self.tracker].get('anon', False), compute an anon flag
(e.g. set anon = 0 if meta.get('anon') == 0 and not
self.config['TRACKERS'][self.tracker].get('anon', False) else 1) and then set
"anonymous": bool(anon) in the returned metadata so the new "anon" config key is
honored by the MTEAM logic.
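The two codec-mapping findings above reduce to branch membership and branch ordering. A minimal sketch of the corrected shape, assuming simplified integer IDs and the names used in the review; the real MTEAM.py values and matching rules may differ:

```python
def map_video_codec(codec: str, x264: int = 1, x265: int = 2, other: int = 0) -> int:
    c = codec.lower()
    # Every H.265/HEVC variant must land in the x265 branch, not the x264 one
    if any(k in c for k in ("x264", "h.264", "h264")):
        return x264
    if any(k in c for k in ("x265", "h.265", "h265", "hevc")):
        return x265
    return other

def map_audio_codec(codec: str, atmos_eac3: int = 10, eac3: int = 11,
                    ac3: int = 12, other: int = 0) -> int:
    c = codec.lower()
    # The more specific "DD+ Atmos" condition must run before the generic "dd+"
    if "atmos" in c and ("dd+" in c or "ddp" in c):
        return atmos_eac3
    if "dd+" in c or "ddp" in c:
        return eac3
    if "dd " in c or c == "dd":
        return ac3
    return other
```

The ordering point generalizes: in substring-based codec detection, more specific patterns must always be tested before their generic prefixes.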

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c741feb9-fe3b-4c8c-a0e2-451f21b9c8c3

📥 Commits

Reviewing files that changed from the base of the PR and between 71960c0 and 076394d.

📒 Files selected for processing (5)
  • README.md
  • data/example-config.py
  • src/torrent_clients/qbittorrent.py
  • src/trackers/MTEAM.py
  • src/trackersetup.py


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/trackers/MTEAM.py`:
- Around line 451-457: The branch in MTEAM.py currently ignores the result of
common.download_tracker_torrent and returns True immediately after calling it;
change it to await common.download_tracker_torrent(meta, self.tracker,
headers=dict(self.session.headers), downurl=final_download_url), capture its
return value (e.g., downloaded_path or result) and only return True if that
value is truthy (otherwise propagate False/None or handle failure), so that
MTEAM's code honors the Optional[str] return of download_tracker_torrent and
only signals success when the torrent was actually downloaded.
- Around line 295-300: The dupe link is hard-coded to
"https://kp.m-team.cc/detail/{t_id}" instead of using the configured base_url;
in the loop that builds dupes (the torrents loop that appends to dupes), replace
the hard-coded link with the configured base URL (the base_url defined earlier
for TRACKERS["MTEAM"] or the MTEAM class/instance attribute) and join it safely
(e.g., strip trailing slash or use urljoin) to produce
"{base_url}/detail/{t_id}" so overridden base_url values are honored.
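The hard-coded link finding amounts to joining a configurable base URL safely. A minimal sketch, with an illustrative function name:

```python
def build_detail_link(base_url: str, t_id: int) -> str:
    # Honor an overridden base_url and avoid a double slash when the
    # configured value carries a trailing "/"
    return f"{base_url.rstrip('/')}/detail/{t_id}"
```

With this in place, a user who overrides base_url in the config gets dupe links pointing at their configured mirror rather than the default host.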
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 58c8ae79-2266-4d31-853d-88c65e538e23

📥 Commits

Reviewing files that changed from the base of the PR and between 076394d and 9962227.

📒 Files selected for processing (3)
  • data/example-config.py
  • src/torrent_clients/qbittorrent.py
  • src/trackers/MTEAM.py


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
src/trackers/MTEAM.py (2)

37-43: Consider adding session cleanup.

The httpx.AsyncClient is created in __init__ but never explicitly closed. While this may work if tracker instances are short-lived, explicitly closing the session prevents resource leaks. You could add an async context manager or cleanup method.

♻️ Optional: Add async context manager support
+    async def close(self):
+        await self.session.aclose()
+
+    async def __aenter__(self):
+        return self
+
+    async def __aexit__(self, exc_type, exc_val, exc_tb):
+        await self.close()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 37 - 43, The AsyncClient created in
MTEAM.__init__ (self.session) is never closed; add explicit cleanup by
implementing an async close method (e.g., async def close(self)) that awaits
self.session.aclose(), and/or add async context manager methods (async def
__aenter__ and async def __aexit__) to call await self.session.aclose() so
callers can use "async with MTEAM(...)" or explicitly call close to prevent
resource leaks; update any instantiation sites to use the new context manager or
call the new close method.

635-636: Simplify timeout error message.

Line 636 accesses self.session.timeout.write, but since the client was created with timeout=30.0 (a scalar), this works but is accessing internal structure. Consider using the known value or str(self.session.timeout) for clarity.

♻️ Simpler approach
             except httpx.TimeoutException:
-                meta["tracker_status"][self.tracker]["status_message"] = f"data error: Request timed out after {self.session.timeout.write} seconds"
+                meta["tracker_status"][self.tracker]["status_message"] = "data error: Request timed out"
                 return False
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 635 - 636, The timeout except block sets
meta["tracker_status"][self.tracker]["status_message"] using internal attribute
self.session.timeout.write; replace that with a clearer representation such as
str(self.session.timeout) or the known scalar (e.g. 30.0) to avoid reaching into
internal structures—update the except httpx.TimeoutException handler in the
class/method where self.session is used so the status_message reads something
like "data error: Request timed out after {str(self.session.timeout)} seconds"
or uses the literal timeout value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/trackers/MTEAM.py`:
- Line 21: The class-level attribute douban_id on MTEAM creates shared state;
make it an instance attribute by removing the top-level "douban_id = ''" and
adding self.douban_id = '' in MTEAM.__init__, then update all reads/writes from
MTEAM.douban_id to self.douban_id (especially the places currently using
MTEAM.douban_id at the three spots referenced in the diff: the reference near
the beginning (formerly line ~86), the write where MTEAM.douban_id is set
(formerly line ~387), and the later read/use (formerly line ~578)) so each MTEAM
instance keeps its own douban_id.
- Around line 282-283: The current remux branch only returns movie_remux when
is_remux is true and meta["category"] == "MOVIE"; add a new tv_series_remux
category constant (with the correct M-Team category ID—look up M-Team upload
rules or the Jackett indexer for the exact ID) and update the remux check in the
is_remux handling to return movie_remux for meta["category"] == "MOVIE" and
tv_series_remux for meta["category"] == "TV" (leave tv_series_hd as the default
fallback).
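The class-attribute finding can be demonstrated in isolation. The snippet below is a simplified illustration, not the tracker's actual classes:

```python
class TrackerShared:
    douban_id = ""  # class attribute: shared across ALL instances


class TrackerPerInstance:
    def __init__(self) -> None:
        self.douban_id = ""  # instance attribute: isolated per instance


a, b = TrackerShared(), TrackerShared()
TrackerShared.douban_id = "12345"  # leaks into every instance that hasn't shadowed it
assert a.douban_id == "12345" and b.douban_id == "12345"

c, d = TrackerPerInstance(), TrackerPerInstance()
c.douban_id = "12345"
assert d.douban_id == ""  # other instances are unaffected
```

This is why the review asks for the assignment to move into `__init__`: each upload then carries its own Douban ID instead of inheriting whatever the previous upload wrote.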

---

Nitpick comments:
In `@src/trackers/MTEAM.py`:
- Around line 37-43: The AsyncClient created in MTEAM.__init__ (self.session) is
never closed; add explicit cleanup by implementing an async close method (e.g.,
async def close(self)) that awaits self.session.aclose(), and/or add async
context manager methods (async def __aenter__ and async def __aexit__) to call
await self.session.aclose() so callers can use "async with MTEAM(...)" or
explicitly call close to prevent resource leaks; update any instantiation sites
to use the new context manager or call the new close method.
- Around line 635-636: The timeout except block sets
meta["tracker_status"][self.tracker]["status_message"] using internal attribute
self.session.timeout.write; replace that with a clearer representation such as
str(self.session.timeout) or the known scalar (e.g. 30.0) to avoid reaching into
internal structures—update the except httpx.TimeoutException handler in the
class/method where self.session is used so the status_message reads something
like "data error: Request timed out after {str(self.session.timeout)} seconds"
or uses the literal timeout value.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a2602ccb-4701-4ee6-9028-a7ce4edea9f0

📥 Commits

Reviewing files that changed from the base of the PR and between 9962227 and 3efc759.

📒 Files selected for processing (1)
  • src/trackers/MTEAM.py

Comment on lines +282 to +283:

    if is_remux and meta["category"] == "MOVIE":
        return movie_remux

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

M-Team tracker category list TV remux

💡 Result:

M-Team (tp.m-team.cc) is a Chinese private BitTorrent tracker specializing in HD movies, TV shows, anime, and general content. It categorizes torrents by format and type, with Remux releases (demuxed from Blu-ray/BDISO to MKV) placed under specific subcategories. Key categories relevant to TV remux, based on Jackett indexer mappings and upload rules (data from 2020; categories may have minor updates):

  • TV/Documentary: 404 ("紀錄教育", documentaries/education); separate tabs for Movies, TV, and 4K indicate TV has dedicated categories.
  • Remux: a dedicated "Remux" category for BD remuxes (exact ID not listed, but upload rules specify "分類/Remux|從原盤抽出的remux"); TV remux likely sits under TV > Remux, similar to Movie > Remux.
  • Audio categories (dropped): 406 MV, 408 AAC/ALAC, 434 Lossless; adult categories still exist.
  • Upload rules: Blu-ray/Remux/HD/SD/DVDiso for video; uploads must include screenshots, MediaInfo, and BDInfo for remuxes.

For TV remux specifically, use the TV section on torrents.php and filter by the Remux tag/source. Tools like PT-Muxer support M-Team remux uploads. No complete public ID list post-2020 was found; check the site profile or the Jackett source code for the latest. Official wiki: wiki.m-team.cc (rules in Chinese).

🏁 Script executed:

# First, let's look at the get_category_id method and surrounding category definitions
rg -n "tv_series|movie_remux|tv_series_remux|category" src/trackers/MTEAM.py | head -40

Repository: Audionut/Upload-Assistant

Length of output: 842


🏁 Script executed:

# Also check what categories are defined as class variables
rg -n "^\s*(tv_series|movie|anime|dvd|sd|remux)" src/trackers/MTEAM.py

Repository: Audionut/Upload-Assistant

Length of output: 535


🏁 Script executed:

# Get the full get_category_id method
sed -n '258,292p' src/trackers/MTEAM.py

Repository: Audionut/Upload-Assistant

Length of output: 1282


🏁 Script executed:

# Search for any comments or references about M-Team remux or TV remux
rg -i "m-team.*remux|tv.*remux" src/trackers/MTEAM.py

Repository: Audionut/Upload-Assistant

Length of output: 51


🏁 Script executed:

# Check Jackett or other tracker files for M-Team category references
fd -name "*.json" -o -name "*jackett*" | head -20

Repository: Audionut/Upload-Assistant

Length of output: 299


🏁 Script executed:

# Search for any issue comments or documentation about M-Team remux
rg -i "mteam.*remux" . --type md --type txt 2>/dev/null | head -20

Repository: Audionut/Upload-Assistant

Length of output: 51


🏁 Script executed:

# Search for M-Team category definitions in comments or code
rg "438|435|402|403|419|420|421|439" src/trackers/MTEAM.py -B2 -A2

Repository: Audionut/Upload-Assistant

Length of output: 537


🏁 Script executed:

# Check if there are any test files or examples with TV remux
find . -name "*test*" -o -name "*example*" | xargs grep -l "MTEAM\|tv.*remux" 2>/dev/null | head -10

Repository: Audionut/Upload-Assistant

Length of output: 93


🏁 Script executed:

# Search for M-Team remux category ID across all files
rg "remux.*43[0-9]|43[0-9].*remux" . --type py 2>/dev/null

Repository: Audionut/Upload-Assistant

Length of output: 133


🏁 Script executed:

# Check if there's a TV remux category ID documented anywhere
rg "tv.*remux|remux.*tv" . -i --type py -A2 -B2 | head -40

Repository: Audionut/Upload-Assistant

Length of output: 3421


🏁 Script executed:

# Look at the example config to see if there are any hints about categories
grep -n "remux\|category" ./data/example-config.py

Repository: Audionut/Upload-Assistant

Length of output: 143


🏁 Script executed:

# Check other trackers in the codebase to see if they handle TV remux and what pattern they use
rg "tv_.*remux|remux.*tv" src/trackers/ -i --type py -l

Repository: Audionut/Upload-Assistant

Length of output: 149


🏁 Script executed:

# Look at SPD tracker to see how it handles TV remux categories
sed -n '1,150p' src/trackers/SPD.py | grep -A5 -B5 "category\|remux" -i

Repository: Audionut/Upload-Assistant

Length of output: 1563


🏁 Script executed:

# Check how other trackers define TV and movie remux categories
rg "remux.*=|tv.*=|movie.*=" src/trackers/*.py | grep -i "remux\|tv.*hd\|movie.*remux"

Repository: Audionut/Upload-Assistant

Length of output: 736


🏁 Script executed:

# Search for any documentation or issues about M-Team TV remux support
git log --all --oneline --grep="MTEAM\|M-Team" 2>/dev/null | head -20

Repository: Audionut/Upload-Assistant

Length of output: 51


Add TV remux category support to M-Team uploader.

The remux category check (lines 282-283) only handles movie remuxes. TV remux releases will fall through to the default tv_series_hd category instead of a dedicated remux category. M-Team supports TV remux uploads, so a tv_series_remux category definition should be added with the correct category ID, and the remux check should handle both TV and MOVIE categories. Check M-Team's upload rules or the Jackett indexer source for the correct TV remux category ID.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 282 - 283, The current remux branch only
returns movie_remux when is_remux is true and meta["category"] == "MOVIE"; add a
new tv_series_remux category constant (with the correct M-Team category ID—look
up M-Team upload rules or the Jackett indexer for the exact ID) and update the
remux check in the is_remux handling to return movie_remux for meta["category"]
== "MOVIE" and tv_series_remux for meta["category"] == "TV" (leave tv_series_hd
as the default fallback).


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
src/trackers/MTEAM.py (1)

595-600: ⚠️ Potential issue | 🟠 Major

Honor the tracker-level anon default in the payload.

The new TRACKERS["MTEAM"]["anon"] setting is still ignored here; this field only reflects meta["anon"]. If the tracker is configured to force anonymous uploads, this can still submit a non-anonymous payload.

🔒 Suggested fix
-            "anonymous": bool(meta.get("anon", False)),
+            "anonymous": bool(
+                meta.get("anon", False)
+                or self.config["TRACKERS"][self.tracker].get("anon", False)
+            ),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 595 - 600, The payload's "anonymous"
field currently uses meta.get("anon", False) and ignores the tracker-level
setting TRACKERS["MTEAM"]["anon"]; update the payload construction so
"anonymous" honors and can override based on the tracker config: compute a
boolean like anon = True if TRACKERS["MTEAM"]["anon"] is truthy (force anon)
else meta.get("anon", False), then set "anonymous": anon in the payload; modify
the code that builds the payload (the block referencing "imdb", "douban",
"anonymous") to use that computed value instead of meta.get("anon", False).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/trackers/MTEAM.py`:
- Around line 437-443: The code collapses API/network/JSON errors into an empty
duplicate list by returning [] after calling self.session.post and parsing
res_json; instead detect and handle failures explicitly: check response.status
(raise/propagate an exception or call the tracker's skip mechanism) when status
is not 2xx, await and catch JSON decode errors from response.json(), and when
res_json.get("code") != "0" treat it as an error (don't return [])—use the
tracker's existing skip/err handling rather than returning [] (apply same
changes around the similar block that handles lines 469-472), referencing the
variables and calls: self.session.post, response.status, response.json(),
res_json, api_url, payload.
- Around line 26-42: fetch_data() currently always calls get_douban_info() but
get_douban_id()/get_douban_info() can return without setting
self.douban_id/self.douban_info, causing AttributeError or stale data; fix by
resetting self.douban_id = None and self.douban_info = None at the start of any
per-upload flow (e.g., in fetch_data() and before upload-related methods), and
make get_douban_id() and get_douban_info() explicitly set those attributes only
when a valid match is found and otherwise leave them as None; also add guards
where self.douban_id/self.douban_info are serialized (in methods that build
description/payload) to handle None safely (skip Douban fields or use empty
values). Ensure changes reference the existing methods fetch_data,
get_douban_id, get_douban_info and attributes self.douban_id/self.douban_info.
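The error-handling finding above can be sketched as a small synchronous helper. The response shape (`code`, nested `data`) is an assumption based on the review text, and the real code is async over httpx:

```python
from typing import Optional

def parse_dupe_response(status_code: int, res_json: dict) -> Optional[list]:
    """Return a dupe list on success, or None to signal a failed check.

    Returning [] on errors would make an API outage look like a clean
    dupe check and let the upload proceed; None lets the caller mark the
    tracker as skipped instead.
    """
    if not 200 <= status_code < 300:
        return None  # HTTP failure: do not pretend "no dupes"
    if res_json.get("code") != "0":
        return None  # API-level error
    return res_json.get("data", {}).get("data", [])
```

The caller then distinguishes "checked and found nothing" (an empty list) from "could not check" (None), which is exactly the distinction the review says the current code collapses.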

---

Duplicate comments:
In `@src/trackers/MTEAM.py`:
- Around line 595-600: The payload's "anonymous" field currently uses
meta.get("anon", False) and ignores the tracker-level setting
TRACKERS["MTEAM"]["anon"]; update the payload construction so "anonymous" honors
and can override based on the tracker config: compute a boolean like anon = True
if TRACKERS["MTEAM"]["anon"] is truthy (force anon) else meta.get("anon",
False), then set "anonymous": anon in the payload; modify the code that builds
the payload (the block referencing "imdb", "douban", "anonymous") to use that
computed value instead of meta.get("anon", False).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6f2918b4-7b91-4385-9480-aafd8a8245f8

📥 Commits

Reviewing files that changed from the base of the PR and between 3efc759 and 50a0217.

📒 Files selected for processing (1)
  • src/trackers/MTEAM.py

Comment on lines +26 to +42:

    def __init__(self, config: Config):
        self.config = config
        self.common = COMMON(config)
        self.tmdb_manager = TmdbManager(config)
        self.tracker = "MTEAM"
        self.base_url = f"https://{self.config['TRACKERS'][self.tracker].get('base_url', 'kp.m-team.cc')}"
        self.api_base_url = "https://api.m-team.cc/api"
        self.torrent_url = f"{self.base_url}/detail/"
        self.banned_groups = ["FGT"]
        self.api_key = self.config["TRACKERS"][self.tracker].get("api_key")
        self.session = httpx.AsyncClient(
            headers={
                "x-api-key": self.api_key,
                "Accept": "*/*",
            },
            timeout=30.0,
        )

⚠️ Potential issue | 🔴 Critical

Reset and guard Douban state per upload.

fetch_data() always calls get_douban_info() and later serializes self.douban_id, but get_douban_id() returns without setting either field when the IMDb lookup has no Douban match. On a fresh instance that raises AttributeError; on a reused instance it can reuse the previous upload's Douban ID/info and generate the wrong description/payload.

🐛 Suggested fix
 class MTEAM:
@@
     def __init__(self, config: Config):
         self.config = config
         self.common = COMMON(config)
         self.tmdb_manager = TmdbManager(config)
         self.tracker = "MTEAM"
+        self.douban_id: Optional[int] = None
+        self.douban_info: Optional[dict[str, Any]] = None
         self.base_url = f"https://{self.config['TRACKERS'][self.tracker].get('base_url', 'kp.m-team.cc')}"
@@
     async def get_douban_info(self) -> None:
+        if self.douban_id is None:
+            return
+
         api_url = f"{self.api_base_url}/media/douban/infoV2"
@@
     async def fetch_data(self, meta: Meta) -> dict[str, Any]:
+        self.douban_id = None
+        self.douban_info = None
         await self.get_douban_id(meta)
-        await self.get_douban_info()
+        if self.douban_id is not None:
+            await self.get_douban_info()
@@
-            "douban": self.douban_id,
+            "douban": self.douban_id or "",

Also applies to: 81-87, 385-406, 573-597

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 26 - 42, fetch_data() currently always
calls get_douban_info() but get_douban_id()/get_douban_info() can return without
setting self.douban_id/self.douban_info, causing AttributeError or stale data;
fix by resetting self.douban_id = None and self.douban_info = None at the start
of any per-upload flow (e.g., in fetch_data() and before upload-related
methods), and make get_douban_id() and get_douban_info() explicitly set those
attributes only when a valid match is found and otherwise leave them as None;
also add guards where self.douban_id/self.douban_info are serialized (in methods
that build description/payload) to handle None safely (skip Douban fields or use
empty values). Ensure changes reference the existing methods fetch_data,
get_douban_id, get_douban_info and attributes self.douban_id/self.douban_info.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (3)
src/trackers/MTEAM.py (3)

400-423: ⚠️ Potential issue | 🟠 Major

Reset Douban state before each upload.

Line 422 only updates self.douban_id on a hit, but Line 604 later builds the description from that instance field. After one successful upload, a later miss can reuse the previous release's Douban description while Line 615 sends 0 in the payload. Clear the field before lookup, or pass the ID through explicitly instead of storing per-upload state on the tracker.

Also applies to: 597-605

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 400 - 423, The tracker stores Douban
state on the instance (self.douban_id) only when a lookup hits, causing stale
data to be reused on subsequent misses; in get_douban_id (and the similar lookup
at the other block) clear or reset self.douban_id to 0 at the start of the
lookup and only set it when you have a successful match, or better yet stop
using the instance field entirely: return the found douban_id from get_douban_id
and update the caller to use that return value to build the description instead
of reading self.douban_id.

453-490: ⚠️ Potential issue | 🟠 Major

Don't treat a failed dupe check as "no dupes".

Lines 457-459 and 485-492 both return an empty result on API/search errors. That makes an M-Team outage look like a clean dupe check and can let the upload continue. Mark the tracker as skipped/failed in these branches instead of returning [].

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 453 - 490, The code currently treats
API/search failures as "no dupes" by returning an empty dupes list; instead when
the response code is not "0" or an exception is raised, set meta["skipping"] =
self.tracker (to mark the tracker skipped/failed) and return None (or another
sentinel that callers treat as a failed check) rather than returning []. Update
the block that handles res_json.get("code") != "0" and the exception handler to
assign meta["skipping"] = self.tracker (use imdb_id/self.tracker for context if
desired), avoid returning dupes, and return None; keep references to dupes,
meta, imdb_id, self.tracker, the async post call (self.session.post) and
get_dupe_bdinfo to locate the logic.
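A minimal sketch of the skip-on-failure behavior this comment asks for, separate from the actual tracker code. The field names ("code", "data") mirror the API snippets quoted in this review; the None sentinel for "check failed" is an assumption about what callers would distinguish from an empty dupe list.

```python
def handle_dupe_response(res_json: dict, meta: dict, tracker: str = "MTEAM"):
    if res_json.get("code") != "0":
        # API-level failure: mark the tracker as skipped rather than
        # pretending the search came back clean.
        meta["skipping"] = tracker
        return None
    # Success: return the torrent list. An empty list now unambiguously
    # means "searched successfully, no dupes found".
    return res_json.get("data", {}).get("data", [])
```

With this shape, an M-Team outage surfaces as `meta["skipping"]` plus a `None` return, and only a genuine empty search result lets the upload proceed.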

296-297: ⚠️ Potential issue | 🟠 Major

Handle TV remuxes explicitly.

This branch only gives REMUX a dedicated category for movies. TV remux releases currently fall through to tv_series_hd, so they'll be miscategorized unless TV remux is intentionally unsupported. Please add the TV remux mapping or reject TV remux uploads here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 296 - 297, The current is_remux branch
only handles movies (returns movie_remux) and lets TV remuxes fall through to
tv_series_hd; update the is_remux handling to explicitly map TV remuxes or
reject them. Modify the block that checks is_remux and meta["category"] to: if
meta["category"] indicates TV (e.g., "TV" or the codebase's TV constant), return
the appropriate tv remux category (e.g., tv_remux) or raise/return an explicit
error for TV remux uploads; keep existing movie_remux return for
meta["category"] == "MOVIE". Ensure you reference is_remux, meta["category"],
movie_remux and tv_series_hd (and add tv_remux or the rejection path) so TV
remuxes are not miscategorized.
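The branch the comment describes can be sketched as follows. The Movie/Remux ID 439 is the value quoted elsewhere in this review; the TV remux ID is deliberately left as a placeholder, and the sketch rejects TV remuxes until it is confirmed.

```python
MOVIE_REMUX = 439        # Movie/Remux, as quoted in this review
TV_SERIES_REMUX = None   # placeholder: confirm M-Team's real TV remux ID

def remux_category(category: str) -> int:
    # Route remuxes explicitly per category instead of letting TV
    # remuxes fall through to the generic HD bucket.
    if category == "MOVIE":
        return MOVIE_REMUX
    if category == "TV":
        if TV_SERIES_REMUX is None:
            # Reject rather than miscategorize until the ID is known.
            raise ValueError("TV remux uploads are not supported yet")
        return TV_SERIES_REMUX
    raise ValueError(f"unexpected category: {category!r}")
```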
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/trackers/MTEAM.py`:
- Around line 46-52: The client currently sets the M-Team API key as a default
header on self.session which causes the Douban GET to forward the private key;
remove the x-api-key from the shared client and either (A) pass the API key
explicitly in per-request headers when calling M-Team endpoints (i.e. supply
headers={"x-api-key": self.api_key} on requests that call the M-Team API), or
(B) create a second AsyncClient (e.g. self.unauthenticated_session) without the
x-api-key and use that for the Douban GET and any third‑party hosts; update all
methods that call M-Team APIs to use per-request headers or the authenticated
client and make sure the Douban scraping method uses the unauthenticated client
so the key is never sent to external hosts.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 19e03353-0627-498c-80ee-d77ed31b9186

📥 Commits

Reviewing files that changed from the base of the PR and between 50a0217 and ba6b3f5.

📒 Files selected for processing (1)
  • src/trackers/MTEAM.py

Comment thread src/trackers/MTEAM.py
@wastaken7 wastaken7 requested a review from Audionut March 27, 2026 23:32

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (3)
src/trackers/MTEAM.py (3)

400-412: ⚠️ Potential issue | 🟠 Major

Reset self.douban_id at the start of each lookup to prevent stale cross-upload state.

At Line 410/Line 434 this method can return 0 without updating self.douban_id, so later description generation (Line 121) may use a previous upload’s Douban ID.

🔧 Proposed fix
 async def get_douban_id(self, meta: Meta) -> int:
     douban_id: int = 0
+    self.douban_id = 0
     douban_manual = int(meta.get("douban_manual", 0))
 
     if douban_manual:
         console.print(f"{self.tracker}: Using manual Douban ID: {douban_manual}")
         self.douban_id = douban_manual
         return douban_manual
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 400 - 412, In get_douban_id, reset
self.douban_id to 0 at the start of the method to avoid carrying a previous
upload's value into later steps; specifically, in the async def
get_douban_id(self, meta: Meta) method set self.douban_id = 0 before checking
douban_manual or imdb_info so any early return (return 0) leaves self.douban_id
consistent for downstream code that generates descriptions.

273-297: ⚠️ Potential issue | 🟠 Major

TV remux currently falls into HD category instead of remux.

Line 296 only maps remux when category is MOVIE; TV remuxes fall through to default HD (Line 306), which can miscategorize uploads.

🔧 Proposed fix shape (after confirming the TV remux category ID)
         movie_remux = 439  # Movie/Remux
+        tv_series_remux = <VERIFY_MTEAM_TV_REMUX_ID>
@@
-        if is_remux and meta["category"] == "MOVIE":
-            return movie_remux
+        if is_remux:
+            return tv_series_remux if meta["category"] == "TV" else movie_remux
What is the current official M-Team category ID for TV Series Remux uploads?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 273 - 297, The remux mapping only handles
MOVIE remuxes and misses TV remuxes; add a tv_series_remux constant (with the
official M-Team category ID) alongside movie_remux and then change the remux
logic in the tracker to return the appropriate remux category for TV: e.g.
update the condition that now reads "if is_remux and meta['category'] ==
'MOVIE': return movie_remux" so it handles TV as well (either by branching: if
meta.get('category') == 'MOVIE' return movie_remux elif meta.get('category') ==
'TV' return tv_series_remux, or by a single expression that selects movie_remux
vs tv_series_remux), and use meta.get('category') for safety.

462-468: ⚠️ Potential issue | 🟠 Major

Treat dupe-check API failures as tracker-skip, not “no dupes.”

At Line 467/Line 495, API or request failures return an empty dupe list without consistently setting meta["skipping"]. That can be interpreted as a clean dupe check.

🛡️ Proposed fix
         try:
             response = await self.session.post(api_url, json=payload, timeout=15)
+            response.raise_for_status()
             res_json = response.json()
 
             if res_json.get("code") != "0":
                 console.print(f"[bold red]API Error: {res_json.get('message')}[/bold red]")
+                meta["skipping"] = f"{self.tracker}"
                 return dupes
@@
         except Exception as e:
             console.print(f"[bold red]Error searching for IMDb ID {imdb_id} on {self.tracker}: {e}[/bold red]")
-            if not meta["unattended"] or (meta["unattended"] and meta.get("unattended_confirm", False)):
-                pass
-            else:
-                meta["skipping"] = f"{self.tracker}"
+            meta["skipping"] = f"{self.tracker}"
 
         return dupes

Also applies to: 494-501

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 462 - 468, The dupe-check currently
returns an empty dupes list on API or request failures without marking the
tracker as skipped; update the error branches where you check
res_json.get("code") != "0" (and the similar block at lines ~494-501) to set
meta["skipping"] = True (e.g., self.meta["skipping"] = True) before returning
dupes, and do the same in any exception/timeout handlers for self.session.post
so API/request failures are treated as tracker-skip rather than “no dupes.”
Ensure you update both the res_json error path and the other failure path in
this method so skipping is consistently recorded.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7e530dc8-7f16-4c71-a9e8-cd1a11e52843

📥 Commits

Reviewing files that changed from the base of the PR and between 50a0217 and 0f56a9b.

📒 Files selected for processing (2)
  • src/args.py
  • src/trackers/MTEAM.py


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
src/trackers/MTEAM.py (2)

347-348: ⚠️ Potential issue | 🟡 Minor

TV remux releases not properly categorized.

The remux check only handles meta["category"] == "MOVIE". TV remux releases will incorrectly fall through to tv_series_hd instead of a dedicated TV remux category.

Verify whether M-Team has a separate TV remux category ID.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 347 - 348, The current conditional only
returns movie_remux for remuxes, causing TV remuxes to fall through to
tv_series_hd; update the remux handling in the selector that uses is_remux and
meta["category"] to also detect TV remuxes (e.g., when meta["category"] is the
TV/series value) and return a tv_remux/series_remux category variable instead of
tv_series_hd; locate the code around the is_remux check and adjust the branch to
check meta["category"] for the TV series identifier and map it to the dedicated
tv remux constant (and if that constant/mapping doesn't exist, add it and ensure
it matches M-Team's TV remux category ID by verifying the tracker category ID
for TV series remux).

513-520: ⚠️ Potential issue | 🟡 Minor

Add raise_for_status() after dupe search request.

Line 514 doesn't call raise_for_status(), so HTTP errors (4xx/5xx) won't raise exceptions and may result in JSON parse errors or unexpected behavior.

Suggested fix
         try:
             response = await self.session.post(api_url, json=payload, timeout=15)
+            response.raise_for_status()
             res_json = response.json()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 513 - 520, The HTTP response from
self.session.post (variable response) is not checked for HTTP errors before
parsing JSON; call response.raise_for_status() immediately after receiving the
response (right after await self.session.post(...)) and before calling
response.json()/await response.json() so 4xx/5xx errors raise exceptions instead
of causing JSON parse or logic errors when checking res_json; update the code
around the post call in MTEAM.py where response and res_json are used to include
this check.
🧹 Nitpick comments (2)
src/trackers/MTEAM.py (2)

73-73: Remove or guard debug print statement.

console.print(res_json) will output raw API response data on every request search, which may be noisy in production. Consider removing it or gating it behind meta.get('debug').

Suggested fix
             response.raise_for_status()
             res_json = response.json()
-            console.print(res_json)
+            if meta.get('debug'):
+                console.print(res_json)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` at line 73, Remove the unguarded debug print of the
API response: stop calling console.print(res_json) on every request search
(remove the line or wrap it in a conditional). Instead, only emit the raw
response when debugging is enabled by checking a debug flag like
meta.get('debug') (or similar request-level flag) before printing; locate the
console.print(res_json) call (which references res_json) in the tracker function
in MTEAM.py and replace it with a guarded print or delete it for production.

716-718: Variable data is shadowed.

Line 717 reassigns data (originally the upload payload from line 692) to the download API response. This could cause confusion during debugging.

Suggested fix
                     download_api_url = f"{self.api_base_url}/torrent/genDlToken?id={torrent_id}"
                     response = await self.session.post(download_api_url)
-                    data = response.json()
-                    final_download_url = data.get("data")
+                    dl_response = response.json()
+                    final_download_url = dl_response.get("data")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 716 - 718, The code reuses the variable
name data (originally the upload payload) for the download API response, causing
shadowing and confusion; change the response parsing to use a distinct name
(e.g., download_json or download_data) when doing response.json() and assign
final_download_url = download_data.get("data") (and ensure you await
response.json() if necessary) so upload payload and download response remain
separate; update references around self.session.post(download_api_url) and
final_download_url accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/trackers/MTEAM.py`:
- Around line 451-458: In get_douban_id, avoid calling
int(meta.get("douban_manual", 0)) directly; instead retrieve the raw value from
meta (use meta.get("douban_manual")), treat missing/None as 0, and attempt to
convert to int inside a try/except (or use a safe parse) so non-numeric strings
won't raise ValueError; on parse failure log/warn via console or process logger
that the provided douban_manual is invalid and fall back to 0 (do not crash),
then continue to set self.douban_id and return the validated integer; update
references to douban_manual in get_douban_id to use this validated value.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b0f2364e-0ab0-4f79-80a1-50ec439bc69c

📥 Commits

Reviewing files that changed from the base of the PR and between 0f56a9b and 307b431.

📒 Files selected for processing (2)
  • src/trackers/MTEAM.py
  • src/trackersetup.py

Comment thread src/trackers/MTEAM.py Outdated

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
src/trackers/MTEAM.py (1)

520-526: ⚠️ Potential issue | 🟡 Minor

Add raise_for_status() before parsing dupe search response.

Unlike other API calls in this file, search_existing() doesn't call response.raise_for_status() before parsing JSON. A 4xx/5xx response will cause response.json() to either fail or return unexpected data.

Suggested fix
         try:
             response = await self.session.post(api_url, json=payload, timeout=15)
+            response.raise_for_status()
             res_json = response.json()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 520 - 526, In search_existing(), call
response.raise_for_status() immediately after the await self.session.post(...)
returns and before attempting to parse the response, then parse JSON from the
successful response (use the appropriate async/await form for response.json() if
the client is async); this ensures HTTP 4xx/5xx errors are raised instead of
causing response.json() to fail or return unexpected data and keeps the existing
checks on res_json.get("code") intact.
🧹 Nitpick comments (5)
src/trackers/MTEAM.py (5)

176-177: Minor: Use unpacking for list concatenation.

Static analysis suggests using [title, *aka] instead of [title] + aka for cleaner syntax.

Suggested fix
-            translated_names = " / ".join([title] + aka) if title else " / ".join(aka)
+            translated_names = " / ".join([title, *aka]) if title else " / ".join(aka)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 176 - 177, The concatenation [title] +
aka used to build translated_names is less idiomatic; replace it with list
unpacking [title, *aka] when title is present so translated_names = " /
".join([title, *aka]) (reference variables: title, aka, translated_names in
MTEAM.py).

118-121: Add error handling for missing mediainfo file.

If the mediainfo file doesn't exist (e.g., for a new upload or if generation failed), aiofiles.open() will raise FileNotFoundError, causing the upload to fail ungracefully.

Suggested fix
         if mi_path:
-            async with aiofiles.open(mi_path, encoding="utf-8") as f:
-                mediainfo = await f.read()
+            try:
+                async with aiofiles.open(mi_path, encoding="utf-8") as f:
+                    mediainfo = await f.read()
+            except FileNotFoundError:
+                console.print(f"{self.tracker}: [bold yellow]Mediainfo file not found at {mi_path}[/bold yellow]")
 
         return mediainfo
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 118 - 121, Wrap the
aiofiles.open(mi_path, ...) read in a try/except to handle FileNotFoundError so
a missing mediainfo file doesn't crash the upload: around the existing mi_path
check in the function (where mi_path is used and mediainfo assigned) catch
FileNotFoundError from aiofiles.open/read, set mediainfo to None or an empty
string, and optionally log a warning; keep the rest of the code using mediainfo
tolerant of None/"" so the upload can continue gracefully.

45-45: Unused banned_groups variable.

The banned_groups list is defined but never referenced anywhere in this module. If it's intended for upload validation, implement the check; otherwise, remove the dead code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` at line 45, The attribute self.banned_groups in class
MTEAM is dead code; either remove it or use it to block uploads—if you intend to
validate uploads, add a check in the upload/validation path (e.g., in methods
named validate_upload, process_upload, or upload) that checks if the incoming
item's group is in self.banned_groups and then reject/log/raise (consistent with
existing validation behavior), otherwise delete the self.banned_groups
definition to remove the unused variable.
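If the list is kept, the validation could look like this hypothetical helper. Reading the release group from `meta["tag"]` (with its usual leading dash) is an assumption about this codebase's conventions, not confirmed from MTEAM.py.

```python
def is_banned_group(meta: dict, banned_groups: list) -> bool:
    # Hypothetical check wiring up the otherwise-unused banned_groups
    # list; the "tag" key and leading-dash group format are assumptions.
    group = meta.get("tag", "").lstrip("-").strip()
    return group.lower() in {g.lower() for g in banned_groups}
```

An upload path would call this before submitting and reject or warn on a True result.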

698-720: Add setdefault guards for tracker_status before writes.

Per the UNIT3D pattern in the codebase, tracker modules should guard against missing tracker_status keys before writing. This prevents KeyError if the status dict isn't pre-initialized.

Suggested fix
     async def upload(self, meta: Meta, _) -> bool:
+        meta.setdefault("tracker_status", {})
+        meta["tracker_status"].setdefault(self.tracker, {})
+
         data = await self.fetch_data(meta)
         response = None

Based on relevant code snippet from src/trackers/UNIT3D.py:49-51 showing the defensive setdefault() pattern used by other trackers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 698 - 720, The upload method writes into
meta["tracker_status"][self.tracker] without ensuring that tracker_status and
the per-tracker dict exist; before assigning
meta["tracker_status"][self.tracker]["torrent_id"] and ["status_message"] in
upload, call setdefault on meta for "tracker_status" and on
meta["tracker_status"] for self.tracker (mirroring the UNIT3D pattern) so the
dicts are created if missing; update the upload function to perform these
setdefault guards just prior to the assignments to avoid KeyError.

309-311: No circular dependency—late import is acceptable.

The from src.bbcode import BBCODE import at line 309 is not a circular import workaround; src/bbcode.py has no dependencies on MTEAM.py. BBCODE is used only in this single function, and the late import pattern (lazy loading) is a valid optimization that avoids loading the module unnecessarily. Moving it to module level is optional and would not significantly improve performance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 309 - 311, The local import "from
src.bbcode import BBCODE" inside MTEAM.py is fine—there's no circular dependency
and BBCODE is used only in this function, so keep the late import and the bbcode
= BBCODE() instantiation as-is; if you prefer a micro-optimization you may
optionally move the import to module level and instantiate as a module-level
singleton, but no code change is required to address circular-import concerns.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/trackers/MTEAM.py`:
- Around line 60-78: get_category_id() can return None, so current filtering "if
item.get('category') != category: continue" will drop all items when category is
None; change the loop in the async block that iterates over data_list to only
apply the category check when category is not None (e.g., use a guard like "if
category is not None and item.get('category') != category: continue"), leaving
items unfiltered when get_category_id() returns None; update any related
comments and ensure you reference get_category_id and the data_list iteration in
MTEAM.py.
- Around line 552-558: The exception handler in the IMDb search block has
inverted logic for setting meta["skipping"]; change it so that skipping is set
when we're running unattended without confirmation. Specifically, in the except
Exception block that references imdb_id and self.tracker, replace the current
condition with one that sets meta["skipping"] = f"{self.tracker}" when
meta.get("unattended", False) is True and meta.get("unattended_confirm", False)
is False; leave interactive/unconfirmed flows alone (i.e., do not set skipping
when not unattended or when unattended_confirm is True).


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 84388580-2e87-4810-bcec-3cec265ed83d

📥 Commits

Reviewing files that changed from the base of the PR and between 307b431 and 753aeaf.

📒 Files selected for processing (1)
  • src/trackers/MTEAM.py

Comment thread src/trackers/MTEAM.py
Comment on lines +60 to +78
        category = self.get_category_id(meta)

        payload = {
            "pageNumber": 1,
            "pageSize": 10,
            "keyword": meta["title"],
            "take": False,
        }

        try:
            response = await self.session.post(self.requests_url, json=payload, timeout=15)
            response.raise_for_status()
            res_json = response.json()

            data_list = res_json.get("data", {}).get("data", [])

            for item in data_list:
                if item.get("category") != category:
                    continue

⚠️ Potential issue | 🟡 Minor

Handle None category in request filtering.

get_category_id() returns Optional[int], so category can be None. Comparing item.get("category") != category when category is None will always be truthy (since M-Team categories are integers), causing all requests to be skipped.

Suggested fix
     async def get_requests(self, meta: dict[str, Any]) -> list[dict[str, str]]:
         requests: list[dict[str, str]] = []
 
         category = self.get_category_id(meta)
+        if category is None:
+            console.print(f"{self.tracker}: [bold yellow]Could not determine category, skipping request search.[/bold yellow]")
+            return requests
 
         payload = {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 60 - 78, get_category_id() can return
None, so current filtering "if item.get('category') != category: continue" will
drop all items when category is None; change the loop in the async block that
iterates over data_list to only apply the category check when category is not
None (e.g., use a guard like "if category is not None and item.get('category')
!= category: continue"), leaving items unfiltered when get_category_id() returns
None; update any related comments and ensure you reference get_category_id and
the data_list iteration in MTEAM.py.
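A standalone repro of the failure mode and of the guarded filter the prompt describes (the sample category values are made up for illustration):

```python
# When get_category_id() returns None, the unguarded comparison drops
# every item, because an integer category never equals None.
category = None
items = [{"category": 401}, {"category": 402}]

# Unguarded check, as currently written in the loop:
kept_unguarded = [i for i in items if not (i.get("category") != category)]

# Guarded check: only filter when a category was actually resolved.
kept_guarded = [
    i for i in items
    if not (category is not None and i.get("category") != category)
]
```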

Comment thread src/trackers/MTEAM.py
Comment on lines +552 to +558
        except Exception as e:
            console.print(f"[bold red]Error searching for IMDb ID {imdb_id} on {self.tracker}: {e}[/bold red]")
            if not meta["unattended"] or (meta["unattended"] and meta.get("unattended_confirm", False)):
                pass
            else:
                meta["skipping"] = f"{self.tracker}"

⚠️ Potential issue | 🟡 Minor

Inverted skipping logic in exception handler.

The condition if not meta["unattended"] or (meta["unattended"] and meta.get("unattended_confirm", False)) means skipping is set only when NOT in unattended mode or when unattended with confirm. This seems backwards—API errors in unattended mode should trigger skipping to prevent silent failures.

Suggested fix
         except Exception as e:
             console.print(f"[bold red]Error searching for IMDb ID {imdb_id} on {self.tracker}: {e}[/bold red]")
-            if not meta["unattended"] or (meta["unattended"] and meta.get("unattended_confirm", False)):
-                pass
-            else:
-                meta["skipping"] = f"{self.tracker}"
+            if meta["unattended"] and not meta.get("unattended_confirm", False):
+                meta["skipping"] = self.tracker
 
         return dupes
🧰 Tools
🪛 Ruff (0.15.7)

[warning] 552-552: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 552 - 558, The exception handler in the
IMDb search block has inverted logic for setting meta["skipping"]; change it so
that skipping is set when we're running unattended without confirmation.
Specifically, in the except Exception block that references imdb_id and
self.tracker, replace the current condition with one that sets meta["skipping"]
= f"{self.tracker}" when meta.get("unattended", False) is True and
meta.get("unattended_confirm", False) is False; leave interactive/unconfirmed
flows alone (i.e., do not set skipping when not unattended or when
unattended_confirm is True).
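The corrected condition can be isolated and exercised on its own (`should_skip` is a hypothetical helper, not a function in the PR):

```python
def should_skip(unattended: bool, unattended_confirm: bool) -> bool:
    # Corrected logic per the review: skip only when running
    # unattended without explicit confirmation.
    return unattended and not unattended_confirm

# should_skip(True, False)  -> True:  set meta["skipping"]
# should_skip(False, False) -> False: interactive run continues
# should_skip(True, True)   -> False: unattended but confirmed
```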

@coderabbitai coderabbitai Bot left a comment

♻️ Duplicate comments (1)
src/trackers/MTEAM.py (1)

726-736: ⚠️ Potential issue | 🟠 Major

Check the return value of download_tracker_torrent.

The code calls download_tracker_torrent() but ignores its return value (which is Optional[str]). If the upload succeeds but the torrent download fails, the method still returns True, potentially causing the downstream client injection to fail because no [MTEAM].torrent file exists.

Suggested fix
                     if final_download_url:
-                        await self.common.download_tracker_torrent(
+                        downloaded = await self.common.download_tracker_torrent(
                             meta,
                             self.tracker,
                             headers=dict(self.session.headers),
                             downurl=final_download_url,
                         )
-                        return True
+                        if downloaded:
+                            return True
+                        console.print(f"{self.tracker}: Upload succeeded but failed to download tracker torrent.")
+                        meta["tracker_status"][self.tracker]["status_message"] = "Upload succeeded but torrent download failed"
+                        return False
                     console.print(f"{self.tracker}: Failed to get download URL from API response.")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` around lines 726 - 736, The code currently ignores
download_tracker_torrent's return (Optional[str]) and returns True even if the
torrent download failed; update the block in the MTEAM upload/download flow to
capture the result of download_tracker_torrent(meta, self.tracker, headers=...,
downurl=final_download_url), check that it returned a non-empty path/string, and
only return True when that result is truthy; if it is falsy, set
meta["tracker_status"][self.tracker]["status_message"] to an appropriate failure
message (e.g., "Torrent download failed") and return False so downstream logic
won't assume the .torrent exists. Ensure you reference the existing variables
final_download_url, download_tracker_torrent, and meta/self.tracker when making
the change.
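The success/failure branching the prompt describes can be sketched as follows (`finish_upload` is a hypothetical stand-in for the tail of `upload()`; the status message mirrors the suggested fix):

```python
from typing import Optional

def finish_upload(download_result: Optional[str], status: dict) -> bool:
    # Treat a missing/empty path from download_tracker_torrent as failure
    # so callers never assume the [MTEAM].torrent file exists.
    if download_result:
        return True
    status["status_message"] = "Upload succeeded but torrent download failed"
    return False
```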
🧹 Nitpick comments (1)
src/trackers/MTEAM.py (1)

177-177: Consider using list unpacking for cleaner concatenation.

Per Ruff suggestion, list unpacking is preferred over concatenation.

Suggested fix
-            translated_names = " / ".join([title] + aka) if title else " / ".join(aka)
+            translated_names = " / ".join([title, *aka]) if title else " / ".join(aka)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/trackers/MTEAM.py` at line 177, Replace the conditional list
concatenation used to build translated_names with list unpacking: construct a
single list that includes title only if present followed by all items from aka,
then join with " / ". Update the assignment to translated_names (using title and
aka) to use unpacking so the expression is cleaner and avoids explicit list
concatenation.
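A quick check that the unpacked form is equivalent to the concatenation it replaces (sample title and aliases are invented):

```python
title = "The Matrix"
aka = ["黑客帝国", "Matrix"]

# Original form: list concatenation
old = " / ".join([title] + aka) if title else " / ".join(aka)
# Ruff-preferred form: list unpacking
new = " / ".join([title, *aka]) if title else " / ".join(aka)
# Both yield "The Matrix / 黑客帝国 / Matrix"
```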
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@src/trackers/MTEAM.py`:
- Around line 726-736: The code currently ignores download_tracker_torrent's
return (Optional[str]) and returns True even if the torrent download failed;
update the block in the MTEAM upload/download flow to capture the result of
download_tracker_torrent(meta, self.tracker, headers=...,
downurl=final_download_url), check that it returned a non-empty path/string, and
only return True when that result is truthy; if it is falsy, set
meta["tracker_status"][self.tracker]["status_message"] to an appropriate failure
message (e.g., "Torrent download failed") and return False so downstream logic
won't assume the .torrent exists. Ensure you reference the existing variables
final_download_url, download_tracker_torrent, and meta/self.tracker when making
the change.

---

Nitpick comments:
In `@src/trackers/MTEAM.py`:
- Line 177: Replace the conditional list concatenation used to build
translated_names with list unpacking: construct a single list that includes
title only if present followed by all items from aka, then join with " / ".
Update the assignment to translated_names (using title and aka) to use unpacking
so the expression is cleaner and avoids explicit list concatenation.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 04d40bc6-76ae-45f3-bd77-a750ce5e6e0e

📥 Commits

Reviewing files that changed from the base of the PR and between 753aeaf and 4aaef6b.

📒 Files selected for processing (1)
  • src/trackers/MTEAM.py

@wastaken7 wastaken7 changed the title Add support for M-Team Add support for M-Team / lajidui / LongPT / ptcafe / PTFans / PT GTK / RailgunPT May 9, 2026